PMC3014797 (PMID: 20572785)
Introduction Insect ovaries undergo periodic changes during the course of their development. Polytrophic ovaries of flies pass through 14 developmental stages based on the deposition of yolk in their oocytes (King 1970). From the 1st to the 6th stage they contain no yolk; in stages 7–10 the growing oocytes continuously fill with yolk, which comes to occupy up to half of the egg chamber volume. In this process, the follicular cells, which form a single layer surrounding the oocyte, and the nutritive cells play an important role. Nutrients (proteins, peptides, lipidic substances, and carbohydrates) are synthesized and stored in the fat body, from which they are transported to the growing ovaries by the hemolymph. During previtellogenesis, the tightly packed cells of the follicular epithelium shrink under hormonal influence (Sevala and Davey 1993; Davey 2000; Pszczolkowski et al. 2005) and form intercellular spaces where cytoskeletal structures and protuberances appear (Telfer et al. 1982; Fleig 2001). This patency of the follicular epithelium enables the transport of nutrients to the growing oocyte and supports the synthesis of yolk components in the oocyte cytoplasm (Huebner et al. 1975; Kelly and Telfer 1979; Brennan et al. 1982; Fausto et al. 2005). The activity of follicular cells may vary with the oocyte's demand for particular nutrients during a reproductive cycle (Davey 1996). A study using the trypsin modulating oostatic factor (TMOF) of the mosquito, Aedes aegypti L. (Diptera: Culicidae), and of the flesh fly, Neobellieria bullata (Parker) (Diptera: Sarcophagidae), to investigate their sterilizing effect on a partly autogenous strain of N. bullata gave negative results; consequently, a new study was undertaken on peptides with C-terminally shortened sequences of Aed-TMOF (H-Tyr-Asp-Pro-Ala-Pro6-OH; Borovsky et al. 1990, 1994). The evaluation of morphological changes was done on the structures of the first and second egg chambers of N. bullata during the reproductive cycle. The greatest effects on developing ovaries were found after injection of the respective pentapeptide 5P (H-Tyr-Asp-Pro-Ala-Pro-OH) or tetrapeptide 4P (H-Tyr-Asp-Pro-Ala-OH) (Slaninová et al. 2004). In studies of the effects of 5P on vitellogenic stages (Hlaváček et al. 1997, 1998; Bennettová et al. 2002; Slaninová et al. 2004), differences were found between the morphological changes of the two egg chambers: in the first there were no visible effects, while in the second, proliferation of the follicular epithelium into the inner space of the egg chamber, followed by resorption, was observed (Figure 1). Therefore, 3H-labeled forms of these peptides were used for further studies on radioactivity accumulation and degradation of the oostatic peptides in the flesh fly N. bullata and other insects (Tykva et al. 1999, 2007; Slaninová et al. 2004; Hlaváček et al. 2007). On the other hand, the native TMOF of N. bullata, the hexapeptide H-Asn-Pro-Thr-Asn-Leu-His-OH (Bylemans et al. 1994; de Loof et al. 1995), has no structural similarity to the above oostatic peptides and shows no oostatic effect; neither does its isosteric analogue H-Asn-Proψ[CH2O]-D-Thr-Asn-Leu-His-OH (Bennettová, unpublished results; Hlaváček et al. 2004). Oostatic peptides represent an effective tool for insect control by inhibiting egg development (Slaninová et al. 2004; Tykva et al. 2007). Compared with other biologically active substances (e.g., pesticides, fungicides, or juvenogens), oostatic peptides are simple to synthesize, soluble in water, and have no negative environmental impact (Tykva et al. 2004). However, their mode of action is not understood. In this study, the uptake of an in vivo injected oostatic pentapeptide (5P) into ovaries of N. bullata was estimated at different stages of vitellogenesis (7–10) using radiolabeled peptide. Isolated ovaries were also incubated with radiolabeled peptide in vitro.
The radioactive metabolites in the ovaries after in vivo and in vitro uptake of radiolabeled peptide were compared.
Materials and Methods Radiolabeled peptide and developmental stages Tritiated oostatic pentapeptide (5P) H-Tyr-Asp-[3H]Pro-Ala-Pro-OH ([3HPro3]5P, 1.44 TBq/mmol, radiochemical purity > 98%), prepared as described earlier (Hlaváček et al. 2007), was used in the study. Different stages of yolk deposition were classified according to the scale developed by King (1970). For in vivo applications, the ovaries were divided into two groups (one with eggs in the 7th and 8th stages of development and the other with eggs in the 9th and 10th stages). For the in vitro experiments, each of the four tested stages of egg development was evaluated individually. In vivo experiments [3HPro3]5P was injected in 5 μl (37 kBq) of physiological solution into the left upper part of the thorax of ether-anesthetized female N. bullata. Flies were then dissected at given time intervals, and 12 pairs of ovaries of the same developmental stage (7, 8, 9, or 10) were selected for each interval. Each pair of ovaries was placed into a separate scintillation vial and covered with 0.5 ml of tissue solubilizer (NCS II, Amersham International, www.gelifesciences.com). After six days, 10 ml of the liquid scintillator EcoLite (ICN Biochemicals Inc.) were added, and the radioactivity was determined in a Beckman 6500 spectrometer. The highest and lowest values of each set were eliminated, and the 10 remaining samples were used to calculate the mean values and their standard deviations. Three such experiments were carried out independently, and from all of them the total mean values with their standard deviations (from ±16% to ±25%) were calculated. For metabolite determination by radio-HPLC, at each time interval 10–15 pairs of ovaries were dissected from the injected N. bullata, pooled into groups according to developmental stage, and frozen at -70°C until extraction.
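The trimming-and-averaging procedure described above (discard the highest and lowest of the 12 ovary-pair counts, then report the mean and standard deviation of the remaining 10) can be sketched as follows; the function name and the placeholder readings are illustrative, not measured data.

```python
# Sketch of the outlier-trimming step: from 12 measurements per time point,
# drop the single highest and lowest values, then average the remaining 10.
import statistics

def trimmed_mean_sd(counts):
    """Drop the highest and lowest value, then return (mean, sd)."""
    if len(counts) < 3:
        raise ValueError("need at least 3 measurements")
    trimmed = sorted(counts)[1:-1]
    return statistics.mean(trimmed), statistics.stdev(trimmed)

# Hypothetical radioactivity readings (kBq) for 12 ovary pairs.
readings = [4.1, 4.8, 5.0, 5.2, 5.3, 5.5, 5.6, 5.8, 6.0, 6.1, 6.4, 7.9]
mean, sd = trimmed_mean_sd(readings)   # mean of the 10 retained values
```

The same trimmed statistics would be pooled over the three independent repetitions to give the total means and standard deviations reported above.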
In vitro experiments Twelve dissected pairs of ovaries of identical stage and appearance were placed into a solution of [3HPro3]5P (555 kBq in 450 μl of physiological solution) in small embryo dishes at room temperature. At time periods equal to those of the in vivo experiments, the ovaries were removed, washed twice in physiological solution, and each pair was placed into an individual scintillation vial for determination of total radioactivity and treated as described for the in vivo experiment. From three independent experiments, the total S.D. ranged from ±13% to ±24%. Simultaneously, incubation for the selected time period was done for metabolite determination, and the sample was frozen as in the in vivo experiments. Extraction for metabolite determination An ice-cold solution (0.4 ml) of the protease inhibitor cocktail Complete Mini (Roche Applied Sciences, www.roche-applied-science.com) (1 tablet dissolved in 3.5 ml of 50 mM HEPES buffer, pH 7.6) was added to the frozen pooled ovaries in an Eppendorf tube, and the contents were homogenized for 1 min using a Teflon pestle. After centrifugation, the supernatant was removed and either immediately analyzed by radio-HPLC or frozen and analyzed later. Analysis of 5P metabolites All radio-HPLC analyses were performed using a Waters liquid chromatograph (Waters, www.waters.com). A programmable UV detector was connected on-line to a radiometric flow-through detection system (Beckman 171, www.beckman.com). A stainless steel analytical column (250 × 4 mm) LiChroCART (Merck, www.merck.com), packed with LiChrosphere WP-300 with a particle size of 5 μm, was used. The column was protected with a guard column (4 mm × 4 mm) packed with LiChrosphere 100 RP-18, particle size 5 μm (Merck). The mobile phase was composed of an aqueous phase (0.035% TFA in redistilled water) and an organic phase (0.05% TFA in acetonitrile).
After passing through the UV detector, the eluent was continuously mixed with the liquid scintillator Ready Safe (Beckman Coulter) at a ratio of 1:2.5 (v/v) in an on-line mixer. The mixture was run through a 500 μl detection cell. The radiometric detector threshold was set at 0.02%. The UV detector was set at 230 nm, 0.05 AUFS. Separation was performed at ambient temperature using a 30 min linear gradient from 0% to 30% organic phase at a flow rate of 0.8 ml/min with continuous degassing with helium. A sample volume of 20 to 80 μl was used. The area of each peak was evaluated as the ratio of its counting rate to the total counting rate measured in all peaks of the respective radiochromatogram (relative concentration, c_rel, in percent). The stability of the [3HPro3]5P was checked before each experiment. Samples were centrifuged for 5 min, and an aliquot of the supernatant was analyzed. Standard unlabeled peptides (Hlaváček et al. 2007) as well as non-active proline were detected by UV, and their retention times were compared to those of the peaks in the radiochromatogram. The retention times of radioactive fractions were corrected for the time delay between the UV and the radiometric detector (0.55 min). The precision of the method was expressed as the coefficient of variation in percent, which varied from 1.3% to 7.7%. It was determined by analyzing five replicates of the same biological sample within one day. The recovery of the extraction procedure was evaluated using [3HPro3]5P calibration solutions of three different concentrations (42, 150, and 370 kBq/ml) by comparing the extracted and the applied radioactivity. The average recovery was between 86.6% and 97.2% (n = 4), with a precision range of 3.3% to 5.2% (coefficient of variation). The linearity of the radiometric detector response was verified using [3HPro3]5P calibration solutions in the range of 0.2–10 kBq, with an average correlation coefficient of 0.997 (n = 4).
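The peak-area evaluation above (each peak's counting rate divided by the summed counting rate of all peaks in the radiochromatogram, expressed in percent) amounts to the following sketch; the peak names and counting rates are hypothetical placeholders, not data from the study.

```python
# Relative concentration c_rel of each radiochromatogram peak, in percent:
# a peak's counting rate divided by the total counting rate over all peaks.
def relative_concentrations(peak_counts):
    """Map each peak name to its share of the total counting rate (%)."""
    total = sum(peak_counts.values())
    return {name: 100.0 * c / total for name, c in peak_counts.items()}

# Hypothetical counting rates (counts/min) for three resolved peaks.
peaks = {"Pro": 620.0, "H-Asp-Pro-OH": 240.0, "4P": 140.0}
c_rel = relative_concentrations(peaks)   # values sum to 100%
```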
The absolute detection limit of the system, defined by a signal-to-noise ratio of 3, was determined for 5P to be in the range of 85–150 Bq, corresponding to 59–104 (rounded to 60–100) fmol.
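The stated activity-to-amount conversion can be checked directly: with the peptide's specific activity of 1.44 TBq/mmol, an activity of 85–150 Bq corresponds to about 59–104 fmol.

```python
# Convert a detection-limit activity (Bq) to an amount of labeled 5P (fmol)
# using the specific activity given above: 1.44 TBq/mmol.
SPECIFIC_ACTIVITY_BQ_PER_MMOL = 1.44e12   # 1.44 TBq/mmol

def bq_to_fmol(activity_bq):
    """Amount of labeled peptide, in fmol, carrying the given activity."""
    mmol = activity_bq / SPECIFIC_ACTIVITY_BQ_PER_MMOL
    return mmol * 1e12          # 1 mmol = 1e12 fmol

low, high = bq_to_fmol(85), bq_to_fmol(150)   # about 59 and 104 fmol
```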
Results and Discussion In this study, attention was focused on the uptake of the oostatic pentapeptide 5P (H-Tyr-Asp-Pro-Ala-Pro-OH) by ovaries of N. bullata. As a continuation of previous studies (Tykva et al. 2007), the uptake was monitored in relation to the stage of egg development (vitellogenesis or yolk deposition) and tested in vitro. The total radioactivity in the ovaries (Figure 2) was determined, and the radioactive components in extracts of ovaries were analyzed. Radiolabeled metabolites of [3HPro3]5P found in ovaries were identified using synthetic standards of non-labeled sequences (Hlaváček et al. 2007), as illustrated in Figure 3. As can be seen in Figure 2, the time course of radioactivity uptake was different for the in vitro and in vivo experiments. In the latter case, the radioactivity increased until 10 min after application, then decreased, and after 30 min it was almost constant. No statistically significant differences were found between the two experimental groups (curves 1 and 2 in Figure 2). Such a time course could be explained by the single injection of [3HPro3]5P. In contrast, in the in vitro experiments radioactivity continuously increased because of the permanent contact of 5P with the ovaries. Nevertheless, there were statistically significant differences between the following two groups. With small yolk deposition (Figure 2, curves 3 and 4), the radioactivity increased slowly and continuously during the measured time interval. In the later developmental stages (Figure 2, curves 5 and 6), a rapid increase was found until 30 min, after which the radioactivity was practically constant, similar to the results of the in vivo experiment. In this interval of practically stable radioactivity, the concentration of the 5P metabolites seemed to reach its maximum.
Independently of the application method, the metabolites were qualitatively identical, and no 5P was found by 30 s after application (Table 1). This finding suggests an effective enzymatic system for peptide degradation. Such a system may be located in the interfollicular spaces once their patency has been evoked by juvenile hormone or some other hormonal action (Davey 2000), on the oocyte membrane adjacent to the apical part of the follicular cells (Telfer et al. 1982), or possibly in the oocyte cytoplasm. The process of follicular cell patency is completed prior to the onset of vitellogenesis (Telfer et al. 1982). The active peptide intake increases with continuing yolk deposition, even though a receptor responsible for oostatic peptide transport was not previously found (Slaninová et al. 2004). Regarding the metabolites, the composition of radioactive substances was qualitatively almost identical during the entire period followed (30 s–180 min), both after incubation with the peptide in vitro and after administration of the peptide in vivo. As can be seen in Table 1, there was a very rapid degradation of the 5P, which was not detectable in either case after 30 s. The same results were found for all tested stages of egg development. These results suggest an extremely rapid metabolic cleavage of the N-terminal Tyr1; the remaining 4P (H-Asp-[3H]Pro-Ala-Pro-OH) is then probably degraded into two dipeptides, of which H-Asp-[3H]Pro-OH very quickly gives rise to [3H]Pro. The differences between the in vivo and in vitro experiments may be explained by the partly metabolized 5P (in vivo) being incorporated into the fat body. Then, as with other nutrients, it is delivered by the hemolymph to the ovaries (Tykva et al. 2004). In contrast, in vitro the uptake of non-metabolized 5P begins directly in the incubated ovaries, where only 5P is available and ovarian metabolism takes place without any other nutrient.
It is noteworthy that isolated ovaries, with no connection to the tracheal system, the nervous system, or the hemolymph, were still able to take up the 5P by a route other than simple diffusion and to metabolize it into metabolites qualitatively identical to those occurring in intact ovaries. Previous studies revealed rich innervation of the ovaries by branches of the median nerve that originate in the thoracic ganglion and, via the lateral oviduct, innervate each ovariole along the ovarian sheath (Bennettová-Řežábová 1971). Neurosecretory cells were located in close connection with follicular cells, and granules of neurosecretory material were present (Bennettová and Mazzini 1989). Such findings might explain the independent functioning of ovaries in vitro, at least for a limited amount of time. Conclusions These results showed that the uptake of the oostatic 5P metabolites into the ovaries of N. bullata depended on the stage of the vitellogenic ovaries. Unlike after in vivo injection, the in vitro experiments showed that ovaries with low amounts of yolk accumulated the metabolites at a slower rate. In later vitellogenic stages, by contrast, radioactivity quickly reached a maximum and then stayed almost constant, similar to the in vivo assay. In all tested ovaries, no 5P was found by 30 s after application, and the same 5P radiometabolites were detected. The analyses of [3HPro3]5P and its metabolites point towards the existence of an enzymatic system that very effectively degrades the 5P TMOF analogue during its transport into the egg chamber. An active intake of the analyzed sequences into the intercellular spaces between follicular cells may be assumed.
The uptake and metabolism of the oostatic pentapeptide analogue of trypsin modulating oostatic factor (TMOF), H-Tyr-Asp-Pro-Ala-Pro-OH (5P), in ovaries of Neobellieria bullata (Parker) (Diptera: Sarcophagidae) were analyzed during their developmental stages. During selected stages of yolk deposition, the fate of [3HPro3]5P after its in vivo injection was compared to its uptake after in vitro incubation of dissected ovaries. The ovaries were analyzed from 30 s to 180 min after incubation. A detection sensitivity of 60–100 fmol of the labeled 5P was achieved using radio-high performance liquid chromatography. While the uptake of the applied radioactivity strongly depended on the stage of vitellogenesis, especially in the in vitro experiment, degradation of 5P was very quick and independent of whether the label was injected or incubated with the ovaries, regardless of the developmental stage of the ovaries. No traces of 5P were detected 30 s after applying the labeled 5P in any test.
Acknowledgements The study was performed as project No. 203/06/1272 of the Czech Science Foundation and project Z4 055 0506 of the Institute of Organic Chemistry and Biochemistry, Academy of Sciences of the Czech Republic. Abbreviations: HPLC, high performance liquid chromatography; TMOF, trypsin modulating oostatic factor
License: CC BY
Retracted: no
Last updated: 2022-01-12 16:13:46
Citation: J Insect Sci. 2010 May 17; 10:48
Package: oa_package/bf/ab/PMC3014797.tar.gz
PMC3014798 (PMID: 20569138)
Introduction Seminatural grassland is one of the most species-rich habitats in Europe's open landscapes. A long continuity of grazing or mowing, without the application of fertilisers and pesticides (Pärt and Söderström 1998), has built up a high diversity of plants and insects (Appelqvist et al. 2001). For example, Swedish grasslands can harbour up to 60 species of vascular plants per m2 (Eriksson and Eriksson 1997). During the 20th century, agriculture became more intensive in Europe, and traditional land use practices such as grazing, mowing, and burning were abandoned. Many areas of grassland have thus become overgrown, while for many organisms the farmland that remains has been rendered unsuitable by the use of fertilisers and pesticides (Anthelme et al. 2001; Watkinson and Ormerod 2001; Firbank 2005; Hole et al. 2005; Schmidt et al. 2005). Most countries in Western Europe have lost more than 95% of their original grassland areas (e.g. Statistiska Centralbyrån 1990; Nature Conservancy Council 1984; Kumm 2003). Abandonment of grazed fields has been identified as an important cause of the decline of grassland biodiversity (Karlsson 1984; Fuller 1987). As a consequence, large numbers of red-listed species are associated with these habitats (Gärdenfors 2000). Most temperate grasslands are dependent on regular disturbances that counteract the succession towards scrubland and eventually forest. The nature of this disturbance, for example in terms of type, timing, and intensity, is essential for the grasslands' biodiversity. In most European countries, the grasslands have a long management history, with grazing and mowing as the dominant disturbance regimes (Poschlod and Bonn 1998; Söderström et al. 2001; Eriksson et al. 2002). Therefore, grassland biodiversity should be favoured by management that is as similar as possible to the local historical management regimes (Lennartsson and Oostermeijer 2001).
However, present management methods often differ considerably from traditional management (Gustavsson 2007; Dahlström et al. 2006, 2008). One important change is the decreased use of late-season management (García 1992; Beaufoy et al. 1995; Ihse and Lindahl 2000). Earlier, about 20–30% of the semi-natural grassland area was subject to late-season management (in Sweden from mid-July at the earliest). Now, only approximately 3% is late grazed (Dahlström et al. 2008). Management experiments have shown that the type and timing of management have profound effects on grassland biodiversity (Morris 2000) and have also indicated that present management may not provide sufficient conditions for grassland biodiversity (Zobel 1992; Poschlod et al. 2005). The main ecological effect of late management by mowing or late grazing is that the vegetation is left undisturbed in the early summer. This is advantageous for seed production, especially in plants with early reproduction (Karlsson 1984; Zopfi 1993; Lennartsson and Svensson 1996; Simán and Lennartsson 1998). The timing of grazing affects the vegetation structure and has also been shown to affect the species composition and abundance of ants (Boulton et al. 2005), beetles (McFerran et al. 1994), and spiders (Dennis et al. 2001; Schwab et al. 2002). Studies of the ecological effects of grazing and other grassland management often use one or a few species, usually vascular plants, as indicators of grassland condition (Lennartsson and Oostermeijer 2001), but few studies analyse effects on different taxa or taxonomic groups. Some studies have indicated that different species groups in semi-natural grasslands may differ considerably regarding which management regime is optimal (e.g. Söderström et al. 2001; Vessby et al. 2002). In this study, conventional grassland management, i.e. grazing from May to September, was compared with an experimentally applied traditional management, grazing from mid-July. The effects of grazing regime were analyzed with regard to the abundance, species richness, and species composition of different groups of predatory arthropods: ants, carabid beetles, and spiders.
Materials and Methods Study sites The study was conducted in two seminatural pastures in south-central Sweden: Pustnäs, 2 hectares, 59° 45′ N, 17° 45′ E; and Harpsund, 12 hectares, 59° 05′ N, 16° 29′ E. In both pastures, the mean annual precipitation was about 600–700 mm, and the mean annual temperature was about 7°C. The pasture at Pustnäs is located in a flat area, whereas the Harpsund pasture consists of a low east-west stretched ridge. The vegetation type at both sites was mainly dry to mesic herb-rich Agrostis capillaris L. (Poales: Poaceae) meadow (Påhlsson 1994). Other dominant species were Poa pratensis L. (Poaceae), Filipendula vulgaris Sturm (Rosales: Rosaceae), Leontodon autumnalis L. (Asterales: Asteraceae), Leucanthemum vulgare Lam. (Asteraceae), Lotus corniculatus L. (Fabales: Fabaceae), Prunella vulgaris L. (Lamiales: Lamiaceae), Ranunculus spp., and Trifolium spp. Apart from the experimental areas (see below), both sites were grazed annually from May to September by about 1.8 (Pustnäs) and 1.2 (Harpsund) steers or heifers per hectare. Experimental design Two areas with homogeneous vegetation were chosen in each pasture, and an alternative grazing regime was established by fencing off one area of 1 hectare (Pustnäs) and one of 4 hectares (Harpsund) from the continuously grazed pastures. Data sampling was performed in these exclosures and in the continuously grazed grassland adjacent to them. The exclosures were not grazed until 27 July in Harpsund and 18 July in Pustnäs, when the fence was opened and the grazers were allowed to utilize the whole pasture. The alternative grazing regimes were initiated in 1997 in Pustnäs and in 2001 in Harpsund and were applied each year until 2005. The difference in time of opening was due to practical reasons related to the farmers' cattle management and arrangement of grazing. Vegetation height and litter depth Vegetation height was measured using a rising plate (Sanderson et al.
2001) at 30 random sampling points per grazing treatment on 6–9 occasions from late May to late September 2001–2003. Litter layer thickness, from the litter surface to the mineral soil, was measured at 30 random points on the first sampling occasion using a mm-graded stick. Temperature data Temperature data (the mean of each 24-hour period) throughout the study period were provided by the Ultuna Climate and Bioclimate station (see http://www.grodden.evp.slu.se/slu_klimat/station.html ). Species composition and abundance of ants, carabid beetles, and spiders Arthropods were sampled using pitfall traps: 850 ml plastic jars, 12 cm in diameter, buried to the level of the ground surface. The traps were filled one-third with water plus a drop of detergent to reduce the surface tension. In Harpsund, 28 traps were installed in each grazing treatment in a spatial arrangement that covered the environmental variation within each treatment area. In each grazing treatment, 7 traps were located uphill and 7 downhill on the north-facing slope of the ridge, and 7 traps were located uphill and 7 downhill on the south-facing slope. Hereafter, a group of 7 traps is called a block. The distance between traps was at least 10 m, and the distance between blocks was at least 20 m. The grassland in Pustnäs was smaller than in Harpsund, and, therefore, only 7 traps per grazing treatment were randomly established. The traps operated for ten 7-day periods from 13 May to 28 August 2002 in Harpsund and for nine periods from 30 May to 29 August 2002 in Pustnäs; each 7 days of operation was followed by 7 days of non-operation. Animals were collected from each trap after each operation sequence and preserved in 50% propylene glycol. All beetle samples from Pustnäs before 10 July were accidentally destroyed in the lab, leaving seven sampling periods for carabids from that site.
Ants were identified to species level based on Seifert ( 1996 ), and beetles to species level based on Lindroth ( 1985 , 1986 ). Due to resource limitations, spiders were collected only at five (Harpsund) and four (Pustnäs) sampling events. Spiders were identified to species, genus, or family level using Roberts ( 1995 ) and Jones-Walters ( 1994 ). Ant mounds To investigate the effect of grazing regime on the density and persistence of ant nests, and to record the occurrence of Lasius flavus Foerster (Hymenoptera: Formicidae), which is not easily caught in pitfall traps, all hillocks taller than 10 cm were mapped. In Pustnäs, mapping was performed over the whole 1 hectare treatment areas. In Harpsund, nests were mapped in one 0.04 ha area per treatment; the areas were placed 10 m from each other, on opposite sides of the fence. Ants inhabiting the mounds were collected and determined to species level, and mounds without ants were classified as abandoned. Mapping was done in July in 2002, 2003, and 2004 in Pustnäs and in 2002 and 2003 in Harpsund. Height and diameter of the mounds were measured in 2002. Statistical analyses The sampling design in Pustnäs and Harpsund was not identical, and the two sites were therefore analyzed separately. Pitfall traps For the data from Harpsund, variation in capture efficiency attributable to the individual locations of the traps was avoided by pooling the catches from the 7 traps in each block at each trapping occasion. Thus, the estimate of species richness was based on the number of species found in each block on one occasion in one treatment. In Pustnäs, no blocks were used, and species richness was based on individual traps. Carabid beetles and spiders were also analyzed in terms of functional groups, and since body size can be assumed to be important for several aspects of a species' ecology (see discussion), the grouping was partly based on size.
Based on size frequencies, beetles were classified into three size classes (< 5 mm, 5–8 mm, and > 8 mm) and according to life cycle, habitat preference, and food preference ( Table 1 ) ( Lindroth 1992 ). Life cycle refers to which stage hibernates; it was used because it can be assumed to influence the phenology of a species, which in turn is potentially important for the species' response to grazing season. Spiders were classified into six classes according to a combination of size and foraging behavior (web-builders < 3 mm, web-builders 3–6 mm, runners < 6 mm, runners 6–10 mm, runners > 10 mm, and “sit-and-wait species”; Roberts 1995 ), and into three taxonomic groups of wolf spiders (Lycosidae): Pardosa spp., Alopecosa spp., and Trochosa spp. In order to meet assumptions of normality, all data on spiders and beetles were log (n + 0.1) transformed before analysis. Ants are social insects, while spiders and beetles are not, and this affected the numbers of individuals trapped. Worker ants often follow one another, so the actual number of individuals trapped in pitfalls is not related to the density of ant colonies. Therefore, colony density of a species in a block at one sampling occasion in Harpsund was estimated as the proportion of traps in the block that contained the species. Due to the chosen distance between traps, this proportion provides a good approximation of the colony density of small species with limited movement ranges (e.g. Myrmica spp., Lasius spp.) and of the activity density of bigger species (e.g. Formica spp.) with larger movement ranges ( Savolainen et al. 1989 ). Activity density of smaller species was estimated for both Harpsund and Pustnäs as the number of individuals trapped (per trap in Pustnäs, per block in Harpsund). In order to meet assumptions of normality, data on colony density were arcsine transformed ( Fowler et al. 1998 ) and those on activity were log (n + 0.1) transformed.
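The two variance-stabilizing transformations described above can be sketched in a few lines of Python. This is a minimal illustration with invented trap counts, not the study's data, and it assumes the arcsine transform takes the usual arcsine-square-root form for proportions:

```python
import math

def log_transform(count, offset=0.1):
    """log(n + 0.1) transform used for count data (spider, beetle, and ant activity)."""
    return math.log(count + offset)

def arcsine_transform(proportion):
    """Arcsine-square-root transform for proportions (ant colony density)."""
    return math.asin(math.sqrt(proportion))

# Colony density per block: proportion of the 7 traps containing the species
# (the trap count of 3 below is invented for illustration).
traps_with_species = 3
colony_density = traps_with_species / 7
density_transformed = arcsine_transform(colony_density)

# Activity density: individuals per trap; the offset lets zero catches be logged.
counts = [0, 2, 5]
log_counts = [log_transform(n) for n in counts]
```

The +0.1 offset is what allows traps with zero catches to be included, since log(0) is undefined.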
It was assumed that the difference in population size of the arthropods between the two treatment areas varied over time and that the similarities at different times before onset of late grazing were larger than the similarities between before and after the onset of late grazing. Therefore, the data sets from Harpsund and Pustnäs were each divided into (1) all observations before onset of late grazing, (2) all observations after onset of late grazing, and (3) the observations on 26 July, the sampling day before, and 12 August, the sampling day after, the onset of late grazing. The Shapiro-Wilk test for normality was used to test the appropriateness of the statistical model. For Harpsund, repeated measures data on arthropods were analysed using a mixed effects model with grazing regime as fixed factor, block as random factor, and sampling time as repeated factor (Littell et al. 2006). For Pustnäs, repeated measures data on arthropods were analysed using a mixed effects model with grazing regime as fixed factor, trap as random factor, and sampling time as repeated factor. Bonferroni or similar corrections for multiple tests were not applied; instead, the results were interpreted with care, focusing on single results rather than the number of significant differences (e.g. Lindberg and Bengtsson 2006 ). Ant mounds Total, rather than mean, numbers of ant mounds per treatment were counted. Therefore, contingency tests (G-tests) were used to test for differences between grazing treatments in the numbers of inhabited and abandoned ant mounds, respectively. Each year and site was analyzed separately. The height of inhabited ant mounds was analyzed for each site separately by two-way ANOVA with ant species ( Lasius niger and L. flavus (Formicidae)) and grazing regime as factors. Vegetation data Due to non-normality, tests on vegetation height and litter depth were performed using non-parametric tests.
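The G-test used for the ant-mound counts is a log-likelihood-ratio test of independence, G = 2 Σ O ln(O/E). A generic 2 × 2 version can be sketched as follows; the counts in the usage example are invented for illustration and are not the study's data:

```python
import math

def g_test_2x2(table):
    """Log-likelihood ratio (G) test of independence for a 2x2 contingency table.

    `table` is [[a, b], [c, d]], e.g. counts of inhabited vs. abandoned
    mounds in late vs. continuous grazing. Returns the G statistic, to be
    compared against a chi-square distribution with 1 df (5% critical
    value: 3.841).
    """
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                expected = row_sums[i] * col_sums[j] / total
                g += obs * math.log(obs / expected)
    return 2.0 * g

# Hypothetical mound counts (invented): rows = grazing regimes,
# columns = inhabited vs. abandoned mounds.
g = g_test_2x2([[27, 5], [10, 12]])
significant = g > 3.841  # 5% critical value, 1 degree of freedom
```

In practice such tests are often run with a library routine (e.g. SciPy's chi-square machinery with a log-likelihood option), but the statistic itself is this simple.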
Results Vegetation height and litter depth In both pastures, the vegetation in continuous grazing was reduced to 3–5 cm in early June, and this height was kept rather constant until late July, when vegetation height was further reduced, to 2–3 cm in late September ( Figure 1 ). In late grazing, vegetation grew to a height of 8–9 cm until onset of late grazing. After onset of late grazing, vegetation was rapidly reduced but remained 0.5–2 cm taller than in continuous grazing throughout the season ( Figure 1 ). By the end of the season, vegetation height in Harpsund differed about 2 cm between the grazing regimes; in Pustnäs, vegetation height differed less than 1 cm. The thickness of the litter layer in early June in Harpsund was 4.6–6.8 mm in continuous grazing from 2001 to 2003 and 8.7–11.0 mm in late grazing from 2002 to 2003. This difference between treatments was significant in both years (Mann-Whitney U-test, p < 0.05). In 2001, i.e. at the beginning of the first experimental season, litter thickness did not differ between treatments in Harpsund (6.8 ± 3.0 and 6.2 ± 3.2 mm in continuous and late grazing, respectively). In Pustnäs, the litter layer varied between 4.2 and 5.5 mm during 2001–2003 in continuous grazing and between 8.8 ± 2.8 and 11.3 ± 3.4 mm in late grazing, and the difference between treatments was significant in all three years (p < 0.05). Ants Number of individuals: In Harpsund, 8750 individuals belonging to 15 different species were trapped, and, in Pustnäs, 2204 individuals of 11 species. Pitfall traps detected a significant overall effect of grazing regime on the number of individuals in Pustnäs but not in Harpsund. Before onset of late grazing, more individuals were found in continuous grazing in Pustnäs (repeated measures ANOVA; F 1,112 = 6.94, p = 0.02), but the magnitude of the difference varied over time ( Figure 2 ).
In Harpsund, there was a tendency for the opposite pattern, but because of high variation, no significant effects could be detected ( Figure 2 ). In Pustnäs, the ants were most active at the end of May; in Harpsund, they were most active in mid-June ( Figure 2 ). L. niger was the most numerous ant species in both Pustnäs and Harpsund, but none of the measurement methods detected significant effects of grazing regime on the activity of this species. Number of species and species-specific responses: In Harpsund, pitfall traps showed that species richness was not affected by grazing regime. The ant Myrmica rubra L. was present in larger numbers (repeated measures ANOVA; F 1,64 = 8.7, p = 0.03) and had higher colony density (F 1,64 = 9.8, p = 0.02) in continuous grazing, whereas Formica polyctena had higher activity density in late grazing (F 1,64 = 12, p = 0.04). The colony density, but not the abundance of individuals, of Myrmica scabrinodis Nylander was higher in late grazing before (F 1,64 = 12.5, p = 0.04), but not after, onset of late grazing. The total number of individuals of Myrmica spp. was higher in continuous grazing (F 1,32 = 22, p = 0.009). In Pustnäs, pitfall traps showed higher species numbers in continuous grazing compared to late grazing (repeated measures ANOVA; F 1,112 = 8.14, p = 0.008). The total number of individuals of Myrmica spp. was significantly higher in continuous grazing (F 1,112 = 9.8, p = 0.005), and this effect was also found for two of the Myrmica species, M. lobicornis and M. rubra (p < 0.04 for both species). In contrast, Formica rufibarbis F. was mostly present in late grazing (F 1,112 = 22, p < 0.0001). After onset of late grazing, these differences between grazing regimes persisted. Ant mounds: In late grazing in Harpsund, 27 mounds of L. flavus and 8 of L. niger were mapped, corresponding to 675 and 200 per hectare, respectively. In continuous grazing, 2 mounds of L. flavus and 10 of L.
niger were mapped, corresponding to 50 and 250 mounds per hectare, respectively. Thus, more ant mounds than expected by random allocation were inhabited by L. flavus in late grazing than in continuous grazing (G-test; p < 0.05 in both years, Figure 3 ), while for L. niger a non-significant opposite pattern was found. In Pustnäs, only one mound of L. flavus per treatment area was found, and 25 and 29 mounds of L. niger were found in late and continuous grazing, respectively. In Harpsund, 98% of the inhabited mounds mapped in 2002 were still inhabited in 2003. The number of mounds inhabited by L. flavus decreased between 2002 and 2003 in both pastures, while the number of abandoned mounds or mounds inhabited by Myrmica spp. increased over time ( Figure 3 ). Grazing regime had no significant effect on the mean height of the anthills (17.4 ± 5.4 and 14.4 ± 3.8 cm in late and continuous grazing, respectively; two-way ANOVA, F 1,23 = 0.9, p = 0.4). In Pustnäs, more than half of the mounds that were found in 2002 were completely destroyed by cattle in 2003 and 2004. No significant effects of grazing regime were found on the number of mounds inhabited by L. niger or on the number of abandoned mounds. Grazing regime had no significant effect on the mean height of the anthills (16.9 ± 5.4 and 17.9 ± 5.8 cm in late and continuous grazing, respectively). Spiders In Harpsund, 7502 specimens belonging to eight families were counted. Of the Lycosidae, 13 different species were identified. Grazing regime had no significant effect on the number of individuals. Analysis of functional groups showed that the number of individuals of web builders < 3 mm was higher in continuous grazing after onset of late grazing (repeated measures ANOVA; F 1,16 = 7, p = 0.04), but not earlier in the summer. Runners > 10 mm, mainly Trochosa terricola Thorell (Araneae: Lycosidae), were more common in late grazing after 27 July (F 1,16 = 6.1, p = 0.048), but not earlier in the summer.
Grazing regime had no significant effects on the number of individuals or species of the genera Pardosa , Alopecosa , or Trochosa . Some single species showed differences between grazing regimes at some sampling dates in Harpsund, but these differences varied in an inconsistent manner. In Pustnäs, 2465 specimens belonging to seven families were counted. Of the Lycosidae, 11 different species were identified. More species were found in late compared to continuous grazing before onset of late grazing, i.e. in the undisturbed vegetation (F 1,42 = 22, p = 0.0002). After onset of late grazing, no differences were found. Analysis of functional groups showed that the number of individuals of web builders < 3 mm was higher in continuous grazing (F 1,42 = 60, p < 0.0001) both before and after onset of late grazing. In contrast, the abundance of sit-and-wait species and runners > 10 mm ( Trochosa spp.) was about seven times higher in late grazing (F 1,42 = 60, p = 0.0006) both before and after onset of late grazing. The abundance of the taxonomic group Linyphiidae was about four times higher in continuous than in late grazing (F 1,42 = 60, p = 0.0003). No other taxonomic groups were significantly affected by grazing regime. Of single species, the small runner Pardosa fulvipes was caught in higher numbers in late grazing (F 1,42 = 60, p = 0.007). Carabid beetles Number of individuals and species: In Pustnäs, a total of 288 individuals belonging to 27 species and, in Harpsund, 1429 individuals of 42 species were trapped during the study. Grazing regime as main factor had no significant effect on either species richness or number of individuals (repeated measures ANOVAs; Harpsund, p > 0.1; Pustnäs, p > 0.4). Analyses of cumulative data showed, in contrast, that the abundance of individuals was about 1.5 times higher in late grazing in Pustnäs (F 1,14 = 5.1, p = 0.04). This effect was not found in Harpsund.
In Harpsund, however, the number of Carabidae individuals before onset of late grazing was significantly affected by the grazing regime/date interaction (repeated measures ANOVA; F 64,7 = 2.39, p = 0.04). For the number of species, this interaction was marginally significant (F 64,7 = 2.04, p = 0.08). In general, the numbers of species and individuals were higher in continuous grazing in the early summer and higher in late grazing in the late summer ( Figure 4 ). The shift in preference between the two treatment areas occurred around 1 July, before the onset of late grazing (27 July). In Pustnäs, no interaction effect was found for either the number of species (p = 0.6) or the number of individuals (p = 0.5, Figure 5 ). Responses of species with different life cycles: In Harpsund, species with different life cycles showed somewhat different grazing regime preferences ( Figure 6 ). In downhill blocks, the number of individuals of adult-hibernating species tended to be higher in continuous grazing before, but not after, onset of late grazing (repeated measures ANOVA; F 32,1 = 13, p = 0.06). In the uphill blocks, a significant grazing regime/date interaction was found for adult-hibernating species (F 32,7 = 4.2, p = 0.04), and the number of individuals was higher in continuous grazing at most dates. The adult-hibernating species were replaced by larva-hibernating species during the first half of July ( Figure 6 ), and date consequently had a significant effect on the number of individuals of both life cycle types (p < 0.0001). Of the species that were found in sufficient numbers to be analyzed separately, the adult-hibernating species, e.g. Amara communis Panzer (Coleoptera: Carabidae), Bembidion guttula F., Bembidion lampros Herbst, and Pterostichus versicolor Sturm, were more common in continuous grazing early in the summer in Harpsund.
Around 1 July, those species disappeared and were replaced by larva-hibernating species, such as Calathus fuscipes Goeze, Harpalus latus L., Pterostichus niger Schaller, and Trechus secalis Paykull, which were more common in late grazing. With few exceptions, no significant grazing regime preference could be detected at the species level. Adult-hibernating species were considerably smaller (6.7 ± 3.1 mm) than larva-hibernating species (10.1 ± 5.5 mm). However, body size of imagines explained little of the variation in individual numbers between grazing regimes in Harpsund. The grazing regime/date interaction significantly affected the number of individuals of small species (repeated measures ANOVA; F 64,7 = 3.25, p = 0.02), but the abundance varied between dates in an inconsistent manner. Analyses of cumulative data indicated that large-bodied carabids (> 8 mm) were more common in late grazing (F 1,8 = 15, p = 0.02) and that other species were not affected by grazing regime. In Pustnäs, beetles were collected only during the second half of the summer. During that period, larva-hibernating species were dominant, as in Harpsund, but no significant differences between grazing regimes were found. The number of individuals of small species (< 5 mm) before onset of late grazing differed significantly between grazing regimes at some sampling occasions (repeated measures ANOVA; grazing/time interaction F 26,1 = 5.5, p = 0.04) and was generally higher in continuous grazing (grazing as main effect, F 26,1 = 7.1, p = 0.08). After onset of late grazing, the number of individuals of large species (> 8 mm) was significantly higher in late grazing (F 68,1 = 14.4, p = 0.02). Intermediate-sized species (5–8 mm) were affected by the grazing regime/date interaction (F 68,4 = 2.9, p = 0.03), and the number of individuals was lower in late grazing at most dates.
Responses of species with different food or habitat preferences: Almost all species were predators according to the literature, so differences between food preference groups could not be tested. In Harpsund, habitat preference contributed to explaining differences in abundance between grazing regimes. Before onset of late grazing, the number of individuals of species preferring shade or moderate exposure was significantly higher in late grazing in the downhill blocks (F = 8.35, p = 0.04). The number of individuals of species preferring sparse vegetation also differed between grazing regimes at some sampling occasions, but the results varied in an inconsistent manner between sampling dates. In Pustnäs, the numbers of individuals belonging to different habitat preference groups were low, and no significant differences between grazing regimes were detected.
Discussion Open grasslands are exceptionally rich in species from several taxonomic and ecological groups. Most grassland habitats in the temperate regions need grazing, mowing, or other types of management in order to persist. Today, much research and conservation work aims at designing grassland management that preserves the grasslands' threatened flora and fauna (e.g. Myers 1998 ; Kleijn and Sutherland 2003 ). Timing of management is one aspect that can easily be manipulated for conservation purposes. Several studies have demonstrated effects of timing on growth, flowering, and fruit production of the vegetation-forming plant species (e.g. Wissman 2006 ). As a consequence, timing affects species groups directly associated with these vegetation features, for example phytophagous insects, nectar- and pollen-eaters, and seed predators ( Westrich 1996 ; Morris 2000 ). This study shows that arthropod predators that are not directly dependent on the vegetation and plant species are also strongly influenced by the timing of management, in this case timing of grazing. In summary, small ants and spiders were in general more common in continuous than in late grazing, whereas larger spiders and Formica ants were more abundant in late grazing. Ant mound density was higher in late grazing in one of the grasslands. The abundance of carabids was higher in continuous grazing in the early summer, but higher in late grazing in the later summer. The results indicate that timing of grazing affects several variables of the grassland habitat and that different groups of arthropod predators react to different variables. The possible relationships between timing, habitat variables, and species groups are discussed below and summarized in Figure 7 .
Height and structural heterogeneity of the vegetation Tall vegetation in late grazing provides a three-dimensional space for climbing arthropods, and this aspect of the vegetation is one conspicuous difference between continuous and late grazing in the early summer. Also in the late summer, patches with tall vegetation were more common in late grazing, as indicated by the error bars in Figure 7 . Such patches thus created higher structural heterogeneity in late compared to continuous grazing (see Pihlgren 2007 ). Vegetation structure has been shown to be important for web-building spiders ( Roberts 1996 ; Dennis et al. 2001 ), but contrary to expectations, web builders were more common in continuous grazing. This may be because the grazing cattle destroyed the webs and forced the spiders to move. In contrast, tall vegetation is an obstructing structure for species running on the ground, in particular predators using visual hunting ( Cole et al. 2005 ). This may partly explain why small running spiders, small carabids, and small ants were more abundant in continuous grazing, although higher microhabitat temperature, as discussed below, may be a more important factor for the small arthropods. Large running predators among all studied groups were more common in late grazing, which indicates that the advantage of a larger food supply (see below) is a more important factor for larger arthropods than the disadvantage of obstructing vegetation ( Heck and Crowder 1991 ). Temperature and humidity of the microhabitat Earlier studies have shown that the activity or abundance of small arthropods, in general, can be related to differences in temperature and humidity caused by different vegetation heights (e.g. Treweek et al. 1997 ). Clapperton et al. ( 2002 ) showed that soil temperature was about 5° C higher in a grazed pasture than in a non-grazed pasture, mainly due to shading by tall vegetation.
In this study, the average vegetation height in continuous grazing was low throughout the season, while late onset of grazing allowed the vegetation to double in height from June to mid-July. Small poikilothermic animals in cold climates may need to spend more time in warm, sunny microhabitats than larger animals, which can spend longer times in colder microhabitats after loading heat in the sun ( Sota et al. 2000 ). In this study, small carabids and spiders were significantly more common in low vegetation (continuous grazing) in the early summer in one of the grasslands, but not later, when the vegetation was low in both treatments. The result was thus consistent with expectations, and the preference for low vegetation may have been further enhanced by the temperature changes during the summer. The temperature difference between tall and low vegetation can be assumed to be more important when the general air temperature is low. Temperature data (Ultuna Climate and Bioclimate Station, unpublished data) showed that May and June were considerably cooler than July and August. During May and June, only two out of six ten-day periods had a mean temperature > 15° C, compared to six out of six periods during July and August. The preference of small carabids for continuous grazing may also be explained by a combination of temperature, body size, and life cycle. The small carabids found in this study were mainly adult-hibernating and were thus present as imagines in the grassland in the early summer. Larva-hibernating species are larger and emerged as imagines from approximately 1 July. The higher abundance of carabids in continuous grazing in the early, but not in the late, summer may thus be because early-summer species are smaller and therefore prefer low vegetation. It is notable that late grazing became more attractive to carabids around 1 July, three weeks before onset of late grazing, but approximately when adult-hibernating species were replaced by larva-hibernating ones.
Temperature and humidity can also be expected to affect the abundance of several of the organisms serving as prey resources for the studied predator groups. For example, snails and worms are sensitive to desiccation ( Andersen 1997 ) and should be more abundant in late grazing. This may explain the higher abundances of large species of carabids, ants ( Formica spp.), and spiders in late grazing. Food resources as a result of growth, flowering, and seed production A considerable proportion of the arthropods in a grassland depend on the grass sward's plants as their main food resource, and these herbivores comprise a food resource that can be expected to attract predators from all of the studied groups ( Morris 2000 ). The plants are utilized by phytophages eating plant tissue, sap suckers, pollen eaters, nectar eaters, and seed predators. Most species of these herbivore groups are more abundant in undisturbed compared to grazed vegetation (e.g. Andrzejewska 1971 ; Bestelmeyer and Wiens 1996 ; Treweek et al. 1997 ; Schwab et al. 2002 ), especially species feeding on plant reproductive organs or other apical tissue that is frequently removed by grazing ( Morris 1967 ). The relationship between vegetation, prey supply, and abundance of large predaceous arthropods has been found in several studies, for example by Cole et al. ( 2005 ), showing that fields with tall vegetation had a higher abundance of large beetles and wolf spiders, and Dennis et al. ( 2001 ), demonstrating higher densities of larger Lycosidae spiders in non-grazed than in grazed fields. This study confirms these results, as a higher abundance of many predators, especially large species, was found in late grazing. In the early summer, the vegetation was undisturbed in late grazing, but also after onset of late grazing, patches of tall vegetation remained for a long time, potentially increasing the abundance of prey herbivores.
Among ants, species forming large colonies, such as Formica and Lasius ants ( Hölldobler and Wilson 1990 ; Lenoir 2002 , 2003 ), can be expected to be highly dependent on large food resources ( Petal 1974 , 1978 ). This is consistent with the observation in this study that two Formica spp. were more common in late grazing. The abundance of carabids was significantly higher in late grazing after 1 July in both grasslands. As discussed above, the late-summer fauna of carabids consists mainly of large species that may be less dependent on warm microhabitats and more dependent on a large food supply, particularly of worms and snails (see Brose 2003 ). Litter Grazing affects the thickness ( Rosén and Bakker 2005 ) and quality ( Bardgett et al. 1998 ) of the litter layer, mainly through the amount of biomass left after the grazing season, but to some extent through trampling of the litter layer the following early summer ( Wissman 2006 ). This study showed that the slightly taller vegetation in late grazing (about 1 cm of difference) resulted in a significantly thicker litter layer, by about 4 mm, the following spring. Although this difference is small, it may imply significantly larger food resources ( Bestelmeyer and Wiens 1996 ), for example in terms of Diptera larvae and Collembola ( Tian et al. 1993 ) and snails ( Kappes et al. 2005 ). In Pustnäs, the abundance of earthworms tended to be higher in late grazing (unpublished data). These food resources add to the herbivores discussed above, further increasing the food supply in late grazing. For carabids, the litter layer may also affect the habitat's suitability for hibernation, which may be part of the explanation for the higher abundance of larva-hibernating species in late grazing. Larva-hibernating species hibernate in or close to the foraging areas ( Lindroth 1992 ). Although not studied, a thicker litter layer and a more heterogeneous vegetation structure may provide favorable conditions for hibernation (cf.
Brose 2003 , MacLeod et al. 2004 ). If so, adult carabids can be expected to be more common in late grazing, both because they hatched there and because they may choose habitats that are optimal for larval hibernation. In contrast, adult-hibernating species migrate to suitable hibernation sites, sometimes far from the summer foraging areas ( Lindroth 1992 ). In the spring they migrate back, possibly choosing the optimal foraging and breeding habitats. Competition Some results of the current study may be explained by competition. For example, Formica ants are known to compete with other ant species for food and suitable nest sites, and also to affect other species by predation. Lesica and Kanowski ( 1998 ) showed that Formica and Myrmica ants compete for suitable nest sites, resulting in a lower nest density of Myrmica . The activity of Leptothorax and Myrmica ants has been shown to be higher when Formica spp. were absent ( Puntilla et al. 1991 ). In the current study, the presence of F. polyctena and F. rufibarbis in late grazing may have suppressed the activity of Myrmica spp. The colonies of Formica ants may also have had a negative effect on the Linyphiidae spiders. Lenoir ( 2003 ) found that wood ants that were manipulated to forage exclusively on the forest floor had a negative effect on the activity of Linyphiidae. However, web-building spiders can escape from interference with ground-dwelling wood ants by ‘staying by their webs’ ( Lenoir 2003 ). In the present study, the activity of arthropods was measured with pitfall traps, so higher catches of web builders may reflect higher rates of web destruction rather than higher abundance. Since no data on the abundance of spider webs were collected, it was not possible to estimate the actual abundance of Linyphiidae. Dung deposition Dung serves as an essential substrate for a number of obligate coprophilous arthropods, of which some can be expected to serve as prey for the studied predator groups.
Some groups of small arthropods temporally followed the distribution of dung, i.e. were more common in continuous grazing in the early summer and equally common in the two treatments after onset of late grazing. It is possible that dung is important for small predators, but its effect cannot be separated from the effect of low vegetation and sun exposure, as discussed above. No groups of larger predators followed the abundance of dung, and it is likely that the effect of dung on the food supply for large predators was of little importance compared to the effects of vegetation height and growth, as discussed above. Mechanical disturbance by the grazers The grazers cause mechanical damage and disturbance to the grassland ecosystem, mainly by grazing and trampling. This presumably affects the studied groups of arthropods both directly and indirectly. The main indirect effect of trampling and grazing is reduction of prey populations, for example by trampling of ground fauna and grazing of sessile life stages of phytophages and seed predators ( Zahn et al. 2007 ). Direct effects of disturbance are trampling mortality of ground-dwelling specimens, damage to spider webs, and damage to ant mounds. For example, Duffey ( 1975 ) showed that the abundance and diversity of spiders were reduced by trampling by cattle. In this study, larger running spiders were less common in continuous grazing, but, as discussed above, this may primarily be an effect of the larger food supply in late grazing. It has been shown that some mound-building ant species are sensitive to grazing, probably due to trampling by the grazers ( Beever and Herrick 2006 ). In the current study, L. flavus was present in large numbers in late grazing in Harpsund, while it was almost absent in continuous grazing. In Pustnäs, cattle were observed destroying ant mounds of L. niger , but there were no such observations from Harpsund.
Synergies and tradeoffs between habitat variables Some results of this study indicated that two or more of the discussed habitat variables may have synergistic effects on predator abundance, whereas other variables may have opposing effects. Of the variables discussed here, some can be assumed to affect the habitat choice of predatory arthropods by creating attractive conditions in the grazing treatment in question; other variables have a repelling effect, thus decreasing abundance in the treatment ( Figure 7 ). One example of a possible synergy is that both favourable temperature conditions (in continuous grazing) and unfavourable hunting conditions (in late grazing) can be expected to increase the abundance of small predators in continuous grazing. Another example is that both litter and tall vegetation in late grazing may increase prey abundance, thus attracting predators to the treatment area. For carabids, the effect of litter on hibernation conditions may act in synergy with the two mentioned variables. One example of a possible trade-off between variables is that higher prey abundance attracts, while a colder microclimate repels, small predators in late grazing. In this case, the effect of microclimate seemed to be more important. Another example of opposing effects is that destruction of ant mounds may repel and higher temperature may attract small ants ( L. flavus ) in continuous grazing, while larger food resources may attract the ants in late grazing. In this case, destruction of mounds and food resources seemed to be more important than a warmer microclimate. Methodological comments Ideally, experiments with grazing regimes should be replicated across a number of sites. Marriott et al. ( 2004 ) reviewed many site-specific effects and suggested that replication would allow the extraction of general principles from the data.
However, replication of grazing treatments is very costly if the treatment areas and cattle groups are as large as in this experiment, allowing the cattle to express natural grazing behaviour. This setup thus mimics practically applicable grazing regimes, but requires the use of two large treatment areas per site instead of a number of small, interspersed areas. Small treatment areas would create artificial conditions because when small late-grazed exclosures within continuously grazed areas are opened to the grazers, the vegetation is grazed much faster than in a large late-grazed area (unpublished data). Edge effects can also be expected when grazing exclosures are small: fast-running carabids and wolf spiders might either run straight through these exclosures or accumulate in them for shelter. Trap records will, therefore, not provide data on the community density of these animals. An obvious disadvantage of the chosen experimental design is that it raises a pseudoreplication problem in several types of statistical tests ( Hurlbert 1984 ; Oksanen 2001 ; Hurlbert 2004 ; Oksanen 2004 ). The data collected at one site were, in fact, two random samples from two different areas, each treated in a different way. Due to this, and due to the differences in data sampling design between the two grasslands, all analyses were performed for each grassland separately. Furthermore, the results must be interpreted acknowledging the possibility that the observed differences were area effects rather than treatment effects.
Arthropod communities were investigated in two Swedish semi-natural grasslands, each subject to two types of grazing regime: conventional grazing from May to September (continuous grazing) and traditional late management from mid-July (late grazing). Pitfall traps were used to investigate the abundance of carabids, spiders, and ants over the grazing season. Ant abundance was also measured by mapping nest density during three successive years. Small spiders, carabids, and ants ( Myrmica spp.) were more abundant in continuous grazing than in late grazing, while larger spiders, carabids, and ants ( Formica spp.) were more abundant in late grazing. The overall abundance of carabids was higher in continuous grazing in the early summer but higher in late grazing in the late summer. The switch of preference from continuous to late grazing coincided with the time at which larva-hibernating species replaced adult-hibernating species. We discuss possible explanations for the observed responses in terms of effects of grazing season on a number of habitat variables, for example temperature, food resources, structure of vegetation, litter layer, competition, and disturbance.
Acknowledgements We thank Maria Johansson for help in the field and in the lab, Håkan Ljungberg for determination of carabids, and Birgitta Vegerfors for advice on the statistics. The study was funded by the Swedish Research Council for Environment, Agricultural Sciences, and Spatial Planning (Award 215-2002-308 to L. Lenoir and 34.0297/98 to T. Lennartsson).
CC BY
J Insect Sci. 2010 Jun 10; 10:60
PMC3014799
20572784
Introduction The western tarnished plant bug, Lygus hesperus Knight (Heteroptera: Miridae), is a pest of numerous fiber, fruit, seed, and vegetable crops ( Jackson et al. 1995 ). Despite the species' economic importance, relatively little is known about the specific environmental factors influencing its development and fecundity. For many higher organisms, a key influential factor is the density of the population in which they live. Depending on environmental conditions and host plant quality, field population densities of L. hesperus can fluctuate widely (e.g. Bancroft 2005 ; Carrière et al. 2006 ; Demirel and Cranshaw 2006 ), but even under ideal conditions they seldom achieve the densities found in experimental laboratory colonies. Insects reared at such artificially high densities often show evidence of abnormal or retarded development, reduced fertility, and exacerbated mortality rates (reviewed in Peters and Barbosa 1977 ; Clarke and McKenzie 1992 ; Hoffmann and Woods 2001 ). These negative effects are often a consequence of increased intraspecific competition for limited nutritive resources, but even an increased contact rate from living in close proximity can influence development and behavior in insects, as observed in Schistocerca gregaria (reviewed in Simpson et al. 1999 ). Identifying the developmental and behavioral responses to such potential stressors can facilitate the design of rearing environments that avoid potential confounding effects in tests using laboratory populations. The responses to such environmental stressors can vary not only with the species and the severity of the stimuli, but can also be influenced by the developmental status of an insect. The effects of stress tend to be greater and more persistent in immature stages than in adults (reviewed in Peters and Barbosa 1977 ), although there is evidence that, for some species, negative consequences do not always translate into adulthood (see Campero et al. 2008 ).
Fully mature adults exposed to poor environmental conditions tend to have short-term and reversible responses that change as their environment fluctuates. Adjustments in gamete production, mating behavior, and dispersive tendencies are common ( Peters and Barbosa 1977 ). Adults of several mirid species have been observed exhibiting such adaptive responses. For instance, Lygus lineolaris appears to adjust oviposition and migratory habits to match host density under field conditions ( Rhainds and English-Loeb 2003 ). Similarly, increasing population density reduces female fertility in Dicyphus tamaninii ( Agustí 1998 ) and reduces male mating behavior in Macrolophus caliginosus ( Castañé et al. 2007 ). This study was designed to determine if the high population densities used in laboratory rearing conditions negatively affect the rate and extent of nymph maturation and the oviposition rates of adults. Mixed-sex groups of L. hesperus were reared together at three different population densities for each developmental stage. The effect of nutrient access was also tested in nymphs. Nymph development and reproductive maturation were assessed by a series of morphological measures, including a composite measure of fluctuating asymmetry. Mortality rates were tracked for both adults and nymphs.
Materials and Methods Insects The L. hesperus used in this study were obtained from a laboratory-reared stock colony maintained at the US Arid Land Agricultural Research Center (Maricopa, AZ, USA). The individuals in this colony are periodically outbred with locally-caught conspecifics. The stock insects were given unrestricted access to a supply of green beans and an artificial diet mix ( Debolt 1982 ) packaged in Parafilm ( Patana 1982 ). These nutritive sources were replenished as needed. Similar Parafilm-coated packets of an agar solution (15 g/liter) were provided to the females as a site for ovipositing, and are hereafter referred to as “egg packets.” Insects were reared at 25°C, 20% relative humidity, under a 14:10 L:D photoperiod. Nymph Population Density To synchronize the age of nymphs used in experiments, groups of mixed-sex L. hesperus were allowed to oviposit for 4 h, and the fresh egg packets were collected. Nymphs were collected after 2–3 d. Post-oviposition age was noted, and the nymphs were transferred into a 355 ml rearing cup (Huhtamaki, www.huhtamaki.com ). Because, under natural conditions, the density of each developmental stage could affect development ( Peters and Barbosa 1977 ), and because both the density and age composition of a localized population can change over time ( Schotzko and O'Keefe 1989b ), single-age cohorts were used to mitigate possible confounding influences. Each rearing cup contained two 10 × 10 cm wax paper sheets crumpled to provide additional walking surface, approximately 12 g of fresh green beans, a 12 g artificial diet paraffin packet, and a wire mesh screen to support the food. Diet was replaced every 48 h to ensure freshness. The cups were made of waxed chipboard and were covered with a nylon mesh to ensure adequate air circulation and light exposure. To test for density effects, cups contained both sexes of nymphs in one of three different population sizes (20, 100, or 500).
For each density, 14, 10, and 12 cups were prepared, respectively. Sixteen additional cups with 500 nymphs and twice the normal diet (designated 500DD hereafter) were prepared to determine if any negative impact associated with the increased population density resulted from increased competition for limited nutrients. The number of living nymphs was determined every 24 h to track both mortality rates and the number that had molted into adults. After 50% of the survivors reached adulthood, all adults were preserved in 50% ethanol for a minimum of one week. Ten adult males and 10 females were randomly selected from each cup and dissected to assess their developmental state. In cups with starting populations of 20, data were collected from all individuals still living at the end of the sample period. Several physiological parameters were measured. First, individual wet body mass was measured (Sartorius TE153S) after excess ethanol solution was dried from the external surface. Several length measurements were taken using a dissecting scope (Zeiss Stemi SV6) equipped with an ocular micrometer and calibrated to a stage micrometer. Length and width were measured for both forewings. Wing length was measured from the point of attachment to the thorax to the posterior tip, and width was the perpendicular distance from the angle of the wing where it meets the posterior tip of the scutellum to the opposite side of the wing. The lengths of the second segment from the proximal end of both antennae were also measured. For many of the individuals sampled, the antennae were missing one or more distal segments, but the second segment was almost always present and provided an indicator of overall antenna length. Lastly, the width of the pronotum was determined at its widest point, as an indicator of overall body size.
Because environmental stress can also contribute to malformations resulting in non-symmetrical development ( Allendorf and Leary 1986 ; Palmer and Strobeck 1986 ; Clarke and McKenzie 1992 ; Leung and Forbes 1996 ), asymmetry scores can potentially reveal more than simple length measures. Individual fluctuating asymmetry values were calculated by measuring the deviation from perfect bilateral symmetry for each of the paired trait measures (wing length and width, antennal segment length). Within each trait and across all groups, the absolute values of these deviations were ranked to ensure standardization. These ranked fluctuating asymmetry values were summed for each individual to create a statistical composite score. The composite fluctuating asymmetry scores were then used for intergroup comparisons to determine if increasing population size can influence symmetry ( Leung et al. 2000 ). The final trait assessed was the degree of gonadal development and activity. Lygus oocyte development goes through three distinct phases (pre-vitellogenic, vitellogenic, choriogenic; Ma and Ramaswamy 1987 ). Based on this pattern, a four-stage scale was used to rate ovarian activity: 0 = previtellogenic oocytes only; 1 = one or more slightly vitellogenic oocytes; 2 = one or more highly vitellogenic oocytes; 3 = vitellogenic oocytes with a pigmented operculum. For males, the length, width, and height of each testis was measured and used to calculate volume. One testis per male was then homogenized in 20 μl of distilled water. A 10 μl aliquot of the homogenate was placed on a Spencer Brightline hemocytometer and spermatozoa were counted under a stereomicroscope to calculate sperm concentration per testis. Because the testes of L. hesperus do not store sperm ( Strong et al. 1970 ), these measures provided a rough estimate of the relative rates of spermatogenesis among males.
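The rank-then-sum procedure behind the composite fluctuating asymmetry score can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the trait names and values are hypothetical, and the tie-handling (mean ranks) is one common convention.

```python
def rank(values):
    """Rank values from 1 (smallest) upward; tied values get the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def composite_fa(individuals, traits):
    """individuals: list of dicts mapping trait name -> (left, right) measure.

    For each trait, the absolute left-right deviations are ranked across all
    individuals; each individual's ranks are then summed into one composite
    fluctuating-asymmetry score.
    """
    scores = [0.0] * len(individuals)
    for trait in traits:
        devs = [abs(ind[trait][0] - ind[trait][1]) for ind in individuals]
        for i, r in enumerate(rank(devs)):
            scores[i] += r
    return scores
```

Ranking within each trait before summing keeps traits measured on different scales (wing length vs. antennal segment length) from dominating the composite score.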
Adult Population Density The effect of population density on the body mass and oviposition rates of adults was separately assessed by maintaining groups of adults of known age at increasing densities (20, 60, and 100 individuals). For each density, equal numbers of newly molted female and male adults were placed in fifteen 355 ml rearing cups. These individuals spent their nymph stage in 1890 ml cups in groups of 100, which is roughly equivalent to the population density of 20 nymphs in 355 ml cups (0.053 vs. 0.056 nymphs/ml, respectively). This was done to ensure that any observable effects on adult mass and egg production were due solely to the conditions experienced as adults and not to residual effects of the nymph rearing environment. Adults were reared under the same conditions as outlined above for nymphs, with the inclusion of an egg packet. The number of live adults and oviposited eggs was censused every 24 h after cup initiation. After 10 d, a sufficient amount of time for most adults to have fully mature and active gonads, 10 individuals of each sex were randomly selected from each cup and preserved in 50% ethanol. Individual body masses were determined and female ovarian activity was assessed as described for the nymphal studies. Statistical Analysis Initial comparisons between multiple groups were conducted using ANOVA, correcting for multiple paired comparisons using the Holm-Sidak method. In cases where the data were non-normally distributed, a Mann-Whitney rank sum test was used for paired comparisons, and for multiple comparisons a Kruskal-Wallis ANOVA was used, corrected by Dunn's method. A Spearman rank order correlation was used to determine the association between testis volume and sperm number. All analyses were conducted using Sigmaplot 11.0 (Systat Software).
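The Kruskal-Wallis H statistic reported throughout the Results (computed in the study with SigmaPlot) can be reproduced in a few lines. This sketch omits the standard correction factor for ties, and the days-to-molt data are invented for illustration.

```python
def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H for k independent samples (tie correction omitted)."""
    pooled = sorted(v for g in groups for v in g)
    positions = {}
    for i, v in enumerate(pooled, start=1):  # 1-based pooled ranks
        positions.setdefault(v, []).append(i)
    mean_rank = {v: sum(p) / len(p) for v, p in positions.items()}  # ties -> mean rank
    N = len(pooled)
    # H = 12 / (N(N+1)) * sum_i(R_i^2 / n_i) - 3(N+1), with R_i the rank sum of group i
    term = sum(sum(mean_rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (N * (N + 1)) * term - 3 * (N + 1)

# Hypothetical days-to-adult-molt samples for three rearing densities:
H = kruskal_wallis_H([18, 19, 18], [20, 21, 22], [24, 23, 25])  # H = 7.2 here
```

Under the null hypothesis H is approximately chi-square distributed with k - 1 degrees of freedom, which is how the p-values quoted in the Results would be obtained.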
Results Nymph Population Density Increasing the population density had a negative effect on every measured developmental trait. Some of the most pronounced effects were observed in the groups of 500 that had also received twice the normal amount of food (500DD). Although the increased number of green beans and the second artificial diet packet supplied additional nutrients, the added food effectively decreased the available volume within the cups. These data suggest that this produced a further increase in the effective population density. Raising the population density in the rearing cups significantly increased the median amount of time it took for nymphs to initiate the adult molt (Kruskal-Wallis ANOVA, H = 33.5, df = 3, p < 0.001), from 18 d in groups of 20 to a high of 24 d in the 500DD groups ( Figure 1A ). The mean mortality rates within the cups also increased significantly ( Figure 1B ; ANOVA, F = 21.6, df = 3, p < 0.001), more than doubling between those two groups, from 28.9% to 61.3%. Increasing density also caused incremental size reductions in all of the external body measures. Between the density extremes (20 vs 500DD), adult body mass ( Figure 2 ) declined 19.4% in females (t = 8.7, p < 0.001; Holm-Sidak) and 19.6% in males (Q = 9.4, p < 0.05; Dunn's Method). Across all of the rearing conditions, adult female body mass was significantly greater than that of the males (Mann-Whitney rank sum test; T (438,467) = 258,584.5; p < 0.001). Comparing the 20 and 500DD groups, wing length ( Figure 3 ) declined by 5.2% in females (Q = 9.8, p < 0.05; Dunn's Method) and 6.1% in males (Q = 11.2, p < 0.05) with increasing density. The wings of females were consistently longer than those of males (Mann-Whitney rank sum test; T (438,467) = 219,163.0; p < 0.001). The results for wing width were nearly identical to those for wing length, with a decline of 3.2% in females and 3.4% in males between the two density extremes.
The length of the second antennal segment ( Figure 4 ) also declined with increasing density, by 9.4% in females (Q = 10.55, p < 0.05; Dunn's Method) and 8.9% in males (t = 13.6, p < 0.001; Holm-Sidak). Unlike body mass and wing length, the females had smaller antennal segments than the males across all groups (Mann-Whitney rank sum test; T (437,467) = 165,750.5; p < 0.001). Finally, pronotum width ( Figure 5 ) decreased incrementally across the groups, with a difference of 4.7% in females (Q = 6.3, p < 0.05; Dunn's Method) and 5.5% in males (Q = 8.7, p < 0.05; Dunn's Method). Female pronota were significantly larger than those of males for all population conditions (Mann-Whitney rank sum test; T (438,467) = 245,358.0; p < 0.001). Despite the negative developmental effects of increasing density, very little asymmetry was observed in the formation of the wings and antennae. The mean trait variation across all population sizes and sexes was low (1.22% ± 0.03%, n = 3293). There were no differences among individuals from different rearing environments ( Figure 6 ) for either females (Kruskal-Wallis ANOVA, H = 7.7, df = 3, p = 0.054) or males (Kruskal-Wallis ANOVA, H = 3.3, df = 3, p = 0.35). There were also no differences between the sexes (Mann-Whitney rank sum test; T (433,463) = 193,297.0; p = 0.8). In addition to affecting external morphology, the increasing population sizes also influenced the gonadal development of both female and male nymphs, although the changes induced were neither consistently negative nor incremental. Females reared in groups of 500 achieved a significantly higher ovarian stage than those from the other groups ( Figure 7 ; H = 73.9, df = 3, p < 0.001; Kruskal-Wallis ANOVA). Although those in the 500DD group also achieved a slight increase in oocyte production, the difference from the females in groups of 20 and 100 was only suggestive (Kruskal-Wallis ANOVA, H = 5.8, df = 2, p = 0.054).
For males, testicular development increased significantly when reared in groups of 100 or 500, compared to those in groups of 20 or 500DD. There were significant changes in both testis volume ( Figure 8A ; ANOVA, F = 7.3, df = 3, p < 0.001) and sperm number ( Figure 8B ; Kruskal-Wallis ANOVA, H = 75.5, df = 3, p < 0.001). However, there was only a weak correlation between testis size and sperm amount (Spearman Rank Order, r = 0.25, p < 0.001, n = 465). Adult Population Density As was observed with nymphs, increasing population size among adults from 20 to 100 individuals caused a nonsignificant increase in the mortality rate recorded after 10 days of being grouped together ( Figure 9A ; ANOVA, F = 1.844, p = 0.172). In contrast to the nymphs, adult body mass after 10 days was unaffected by increasing population for either females (Kruskal-Wallis ANOVA, H = 1.2, df = 2, p = 0.54) or males (Kruskal-Wallis ANOVA, H = 0.8, df = 2, p = 0.67), but the females (overall median = 0.012 g) significantly outweighed the males (overall median = 0.008 g; Mann-Whitney rank sum test; T (158,161) = 120.0, p < 0.001). The relative ovarian activity in 10 d old adults did not differ between groups (median = 4 in all; Kruskal-Wallis ANOVA, H = 1.0, df = 2, p = 0.602), nor did the rate at which females oviposited eggs ( Figure 9B ; Kruskal-Wallis ANOVA, H = 0.0976, df = 2, p = 0.95). The pace of egg deposition followed the same pattern for all population densities, with the first eggs appearing after 4 d and the oviposition rates plateauing after 8 d.
Discussion The results provide evidence that high population density can have a considerable negative impact on the development and mortality of L. hesperus. In nymphs, the effects of overcrowding were apparent for almost every physiological measure, reducing body mass and size and increasing mortality. In contrast, the only substantial effect on adults was on the mortality rate. This suggests that sensitivity to this stressor may decline as Lygus matures or that different coping mechanisms are utilized at each stage. The strong response of the nymphs to overcrowding has been observed in immature stages of other insect species and, in many cases, has been attributed to competition for limited nutrients (e.g. Averill and Prokopy 1987 ; Agnew et al. 2000 ; Reiskind et al. 2004 ). A restricted diet can directly impact mass gain and may also influence the timing and pattern of gene expression, resulting in delayed maturation and reduced body size. However, each trait may be affected independently by overcrowding rather than by a linked response. For example, differences in body mass arising from competition for nutrients have been found to have little direct consequence on other developmental traits in some insect species ( Peters and Barbosa 1977 ). A restricted diet is not the only possible stress factor to which the Lygus nymphs were responding. Given that food supplies were replaced every other day and that doubling the available food actually exacerbated the negative effects of crowding, the results do not appear to be driven by nutritional deficits. The retarded growth rates and body sizes could instead have been induced by a build-up of toxic contaminants from fecal waste or by the stress of an increased frequency of interaction with conspecifics ( Peters and Barbosa 1977 ). Such stimuli could change an individual's developmental trajectory via effects on the neuroendocrine system ( Hartfelder and Emlen 2005 ).
Further supporting the possible role of nutrient-independent influences is the consistently stronger response of males for most of the traits examined. If a restricted diet were solely responsible, the larger-bodied females should have been the sex with more pronounced developmental deficits, given their need to provide more endogenous resources than the males. Despite the pronounced changes observed in the traits measured, the stress of overcrowding the nymphs did not produce any effect on fluctuating asymmetry scores. A constant and low fluctuating asymmetry in the face of increasing environmental stress has been observed in other insects ( David et al. 1998 ; Bjorksten et al. 2000 ; Mpho et al. 2000 ; Hoffmann et al. 2002 ) and may be indicative of a homeostatic mechanism that ensures the stable and coordinated development of paired traits, even if at the expense of overall size ( Hoffmann and Woods 2001 ). It is possible that heightened asymmetries did occur as the nymphs were exposed to increasing stress, but these might have gone undetected by the use of simple linear trait measures. Hoffmann et al. ( 2002 ) found that while environmental stressors did not produce fluctuating asymmetry changes in wing size, they did have a significant effect on the asymmetry of wing shape. Because different stressors can evoke different developmental responses, fluctuating asymmetry may still be a useful measure for Lygus exposed to challenges other than high density, as was found for Culex pipiens ( Agnew et al. 2000 ; Mpho et al. 2000 , 2002 ). While most of the traits tracked in nymphs responded negatively to increased population density, their gonadal development did not. Both females and males exhibited increased gamete production as newly molted adults when exposed to overcrowding, although at the highest density these gains were negated.
The increased density may trigger a competitive response among conspecifics to ensure that they can produce progeny earlier, or in greater numbers, to better compete for limited resources. The allocation of additional resources to reproductive development may in turn have contributed to their stunted somatic development. The decrease in gamete production in the transition from the 500 to the 500DD density is likely the result of passing a threshold stress level after which the development of all traits becomes inhibited. The Lygus exposed to different population densities as adults were able to produce their first clutch of eggs at equivalent rates, probably because they were able to rely on the endogenous reserves built up as nymphs reared under relatively hospitable conditions. Longer exposure to overcrowding might eventually produce a negative effect on fertility as the adults become increasingly reliant on newly acquired resources to produce eggs. It is also possible that individuals exposed to high densities at either stage of development may exhibit greatly reduced lifetime fitness (see Hooper et al. 2003 ), which would have gone undetected in this study. Nymphs in particular may suffer longer-lasting consequences given the pronounced effects on other aspects of their development. However, the stunted growth of these individuals is not necessarily indicative of a reduction in their fecundity, because traits can be expressed independently ( Karlsson and Wiklund 1984 ; Ohgushi 1996 ). Some adult insects are also able to compensate for developmental deficits incurred in earlier stages ( Campero et al. 2008 ). Just as the short-term reproductive responses to overcrowding may not be indicative of effects on lifetime fecundity, the heightened mortality rates observed in overcrowded nymphs and young adults may not reflect the longevity that the survivors can achieve.
The weakest individuals are likely to be the first to die, leaving a more robust population (reviewed in Van Dongen 2006 ). Had more of the susceptible individuals survived, they would probably have exhibited greater developmental deficiencies, including higher fluctuating asymmetry scores, than the population that remained ( Campero et al. 2008 ). Another possibility is that many of the dead nymphs were simply the victims of intraspecific predation ( Beards and Leigh 1960 ; Khattat and Stewart 1977 ). Those that did survive to adulthood may exhibit equivalent longevities regardless of their experiences as nymphs. The higher mortality rates of adults relative to nymphs, despite a lack of other detrimental effects, may be due to differences in the way the two stages respond behaviorally to overcrowding. Lygus adults tend to be less aggregated than nymphs ( Sevacherian and Stern 1972 ; Schotzko and O'Keefe 1989 ) and will flee or defend themselves against unwanted contact with conspecifics. Being forced to endure a high contact rate may cause the adults to become agitated and aggressive to a greater extent than the nymphs. The added energy expenditure and potential for injury associated with increasingly agonistic interactions could compromise immune resistance and increase overall pathogen susceptibility ( Peters and Barbosa 1977 ; Adamo 2006 ). A high population density might also enhance the rate of cannibalism in this omnivorous species, a common response in stressed or poorly sheltered populations of mirids ( Wheeler 2001 ). In addition to directly impacting mortality rates, cannibalism would also amplify the stressful nature of the rearing environment. In conclusion, these results indicate that L. hesperus is quite sensitive to population density during its development.
Although the long-term consequences on adult survivability and lifetime fecundity are unknown, there is sufficient evidence to caution against the experimental use of Lygus raised in crowded laboratory conditions. This study also provides a suite of responsive traits that can be used to effectively monitor other environmental stressors. These results can provide a guide by which future laboratory studies of this important pest species can be better designed and interpreted.
The western tarnished plant bug Lygus hesperus Knight (Heteroptera: Miridae), a major pest of cotton and other key economic crops, was tested for its sensitivity to population density during the nymph and adult stages. Nymphs reared to adulthood under increasing densities in laboratory conditions exhibited incremental delays in maturation, heightened mortality rates, and reductions in body mass and various size parameters. In contrast, gonadal activity in both males and females rose with initial density increases. Supplemental nutrients provided to the nymphs failed to offset the negative effects of high density, suggesting that contact frequency, rather than resource partitioning, may be the primary stressor. Unlike nymphs, newly eclosed adults exposed to increasing population densities did not suffer negative physiological effects; body mass, mortality rates, and patterns of ovipositional activity were unchanged. Collectively, these results indicate that population density can dramatically influence Lygus development, but the specific effects are stage-dependent.
Acknowledgments I gratefully acknowledge Dan Langhorst for his expert technical assistance and Jackie Blackmer for sharing her knowledge of Lygus biology. I also thank John Byers, Jeff Fabrick, Steve Naranjo, and Brenda Singleton for their comments on earlier versions of this manuscript. Abbreviations: DD, double diet
CC BY
J Insect Sci. 2010 May 17; 10:49
PMC3014800
21209807
1. Introduction Randall disease (RD) is characterized by tissue deposition of monoclonal immunoglobulin light chains without tinctorial properties [ 1 ]. We report a case of RD associated with plasma cell dyscrasia, left VIth nerve palsy, peripheral neuropathy, kidney disease, and submandibular salivary gland hypertrophy.
3. Discussion Randall disease is a monoclonal immunoglobulin deposition disease [ 2 ]. Monoclonal immunoglobulin deposition disease (MIDD) is a systemic disorder with immunoglobulin chain deposition in a variety of organs, leading to various clinical features [ 3 ]. Visceral immunoglobulin chain deposits may be totally asymptomatic and found only at autopsy [ 4 ]. Submandibular salivary glands can be affected by MIDD. However, peripheral neuropathy and cranial nerve palsies in general, and extraocular motor nerve (VI) palsy associated with diplopia in particular, are rarely reported in the literature in the context of RD. In 1998, Grassi et al. reported the first precise morphologic and clinical description of neuropathy related to RD [ 5 ]. The diagnosis of MIDD should be suspected in the presence of nephrotic syndrome, rapidly progressive tubulointerstitial nephritis, or echocardiographic findings indicating diastolic dysfunction, combined with the discovery of a monoclonal immunoglobulin component in the serum and/or the urine [ 4 ]. The definitive diagnosis is obtained by immunohistologic analysis of a biopsy of an affected organ, mainly the kidney, using a panel of immunoglobulin chain-specific antibodies, including anti- κ and anti- λ light chain antibodies to stain the non-Congophilic deposits [ 4 ]. In our case, the diagnosis was made by immunohistologic analysis of the salivary glands. There is no standard treatment for RD [ 6 , 7 ]. Recent publications have emphasized the success of high-dose melphalan followed by autologous stem cell transplantation (HDM/auto-SCT) [ 6 ], which now appears to be the most reliable and effective treatment of neurological complications of MIDD in young patients. Indeed, the literature reports the successful treatment of AL amyloid polyneuropathy with this therapy [ 8 ]. Novel therapies used in myeloma (thalidomide, bortezomib, and lenalidomide) have not been sufficiently studied in RD [ 9 ].
The future prospects for therapy are based on the pathophysiology of RD and include the blocking of light chain binding to mesangial receptors, the use of transforming growth factor beta (TGF- β ) antagonists, and inhibitors of light chain-induced signalling pathways [ 4 ]. This case is educational in that it demonstrates the value of considering RD in a clinical picture of a cranial nerve disorder. Further analyses can then confirm the diagnosis, and appropriate therapy can improve the clinical abnormalities and prevent potentially serious functional complications. Finally, because of the rarity of this pathology and the improvement in symptoms obtained with high-dose melphalan with auto-SCT and nerve decompression surgery in this young patient, this report may contribute to the medical literature, which is currently scarce for this disease.
Academic Editor: Shaji Kumar Randall disease is an unusual cause of extraocular motor nerve (VI) palsy. A 35-year-old woman was hospitalized for sicca syndrome. The physical examination showed general weakness, weight loss, diplopia related to a left VIth nerve palsy, hypertrophy of the submandibular salivary glands, and peripheral neuropathy. The biological screening revealed renal insufficiency, serum monoclonal kappa light chain immunoglobulin, urinary monoclonal kappa light chain immunoglobulin, albuminuria, and Bence-Jones proteinuria. Bone marrow biopsy revealed medullar plasma cell infiltration. Immunofixation associated with electron microscopy analysis of the salivary glands showed deposits of kappa light chains. Randall disease was diagnosed. The patient received high-dose melphalan followed by autologous stem cell transplantation, which led to rapid remission. Indeed, at the 2-month follow-up assessment, the submandibular salivary gland hypertrophy and renal insufficiency had disappeared, and the peripheral neuropathy, proteinuria, and serum monoclonal light chain had decreased significantly. The persistent diplopia was treated with nerve decompression surgery of the left extraocular motor nerve. Cranial nerve complications of Randall disease deserve to be recognized.
2. Case Report A 35-year-old woman was hospitalized for sicca syndrome lasting for 6 months. In addition to general weakness and a 6 kg weight loss, the physical examination showed diplopia related to left VIth nerve palsy as confirmed by the ophthalmological examination, submandibular salivary gland enlargement, and peripheral neuropathy confirmed by the electromyogram. Biological screening revealed moderate renal insufficiency with creatinine clearance at 47 mL/min/1.73 m², serum monoclonal kappa light chain immunoglobulin at a level of 175 mg/L with a kappa/lambda ratio of 49, urinary monoclonal kappa light chain immunoglobulin, and proteinuria at 2 g/24 hours with positive Bence-Jones proteinuria. Bone marrow biopsy revealed medullar plasma cell infiltration representing up to 20% of medullar cells. However, there were no other criteria for multiple myeloma. Immunofixation associated with electron microscopy analysis of the salivary glands showed deposits of kappa light chains without the characteristics of amyloid proteins ( Figure 1 ). In light of these abnormalities, RD associated with plasma cell dyscrasia, left VIth nerve palsy, peripheral neuropathy, kidney disease, and submandibular salivary gland hypertrophy was diagnosed. The patient received high-dose melphalan (HDM) (200 mg/m²) followed by autologous stem cell transplantation (auto-SCT) (CD34 × 10⁶/kg), which resulted in rapid, subtotal, and persistent remission. Indeed, two months after the treatment, the submandibular salivary gland hypertrophy had disappeared, the general state of health and peripheral neuropathy had improved, renal function had returned to normal with an increase in creatinine clearance to 91 mL/min/1.73 m² and a decrease in proteinuria (<1 g/24 hours), the serum monoclonal light chain level stood at 9.66 mg/L, and the kappa/lambda ratio was 1.97. However, there was still dysaesthesia of the left hand and left VIth nerve palsy.
The latter was treated with nerve decompression surgery, and the diplopia had disappeared one year later. At the 3-year follow-up assessment, there was no recurrence, only persistent slight paresthesia of the left hand.
Acknowledgment The authors are grateful to Mr. Philip Bastable.
Case Rep Med. 2010 Dec 16; 2010:542925
1. Introduction Spasm of the renal artery is a complication that can be encountered during endovascular procedures [ 1 ]. Arterial spasm can also occur after blunt abdominal trauma and is considered to be secondary to contusion [ 2 ]. This condition should not be confused with traumatic occlusion of the renal artery due to thrombosis or intimal flap formation, which can cause devascularization of the kidney. Additional attention should be paid to distinguish between these two conditions because of the different therapeutic options available [ 3 , 4 ]. In this contribution, we describe a case of severe spasm in the renal artery simulating end-organ infarction.
3. Discussion Renal injuries are classified into five grades according to the increasing severity of trauma. This classification system recognizes the progressive nature of parenchymal and vascular damage with increasing levels of trauma [ 5 ]. Devascularization of the entire kidney due to vascular laceration or thrombosis of the renal artery is considered the most severe form of renal injury (grade 5). Such injuries may occur with or without parenchymal lacerations. If the kidney is devascularized as a consequence of an isolated intimal injury to the renal artery that results in thrombosis, extensive retroperitoneal hemorrhage and hematuria may be absent [ 6 ]. The management of renal trauma ranges from observation without intervention for minor lacerations to emergency laparotomy for hemodynamic compromise. Minimally invasive techniques are being adopted to manage significant renal trauma when hemodynamic compromise is absent. As with all solid-organ injuries in children, there has been a shift to conservative management (if possible) [ 3 , 4 ]. The classic findings of traumatic renal infarction on CT include nonenhancement of the kidney on the affected side, retrograde opacification of the renal vein from the inferior vena cava, and abrupt truncation of the renal arterial lumen at the point of occlusion. The cortical rim nephrogram sign of a devascularized kidney may be absent in the acute setting [ 6 ]. Nonopacification of one or both kidneys on CT may be due to spasm of the renal artery secondary to contusion [ 2 ]. This condition should not be confused with traumatic occlusion of the renal artery. In our case, the patient's right kidney did not show enhancement. Given that severe renal artery spasm and traumatic occlusion of the renal artery may occur simultaneously, correct interpretation was difficult. Renal artery spasm is a known complication of renal angioplasty during endovascular procedures. Ogita et al. 
reported multiple spasms of renal arteries after percutaneous transluminal renal angioplasty in children [ 7 ]. Spasm of the renal artery can also be seen after abdominal surgery in response to external stimuli. Yamagiwa et al. reported atrophy of the kidney following removal of a neuroblastoma. This was presumably due to renal artery spasm and endothelial damage, which probably led to stagnation of renal blood flow and finally to thrombosis of the renal artery [ 8 ]. Koehler and Friedenberg described three cases of renal artery spasm during angiography simulating end-organ infarction [ 9 ].
4. Conclusion Traumatic injuries to the renal artery and severe spasm of the renal artery secondary to contusion may produce identical findings on CT. Correlating clinical findings with laboratory results plays an important role in such situations. Repeated radiological assessment is therefore recommended to avoid misinterpretation. To reduce the risk of inappropriate treatment, radiologists must be aware of the CT findings of blunt trauma to the kidney.
Academic Editor: John Kortbeek Traumatic occlusion of the renal artery is a serious injury. Management differs according to the grade of injury. In most circumstances, emergency surgical revascularization or endovascular intervention is required. We describe the case of a child with multiorgan injuries and spasm of the main renal artery after blunt trauma simulating arterial occlusion or end-organ infarction.
2. Case Report A 4-year-old child was brought to the emergency room after falling from the fourth floor of a building. CT of the chest and abdomen was undertaken within the first hours after the accident. Right pulmonary contusion with minimal effusion and multiple liver lacerations with mild perihepatic fluid were detected on CT. Enhancement was not observed in the right kidney. The proximal and middle portions of the right renal artery had narrowed lumens (Figures 1(a) and 1(b) ). These findings were compatible with occlusion of the distal segment of the renal artery and end-organ infarction. The patient was hemodynamically stable, and laboratory findings were unremarkable, so endovascular management was planned. Control color Doppler ultrasonography 30 min before angiography detected normal vascularization of the right renal parenchyma and a patent renal artery. CT of the abdomen was obtained after 5 hours, and normal enhancement of the right kidney was confirmed ( Figure 2 ). At the 6-month follow-up, color Doppler ultrasonography was carried out, and vascularization of the kidney and renal artery was found to be within normal limits. Thus, complete recovery without complications was observed, and laboratory findings were normal. Nonopacification of the right kidney was considered to be due to spastic occlusion of the renal artery (presumably in response to trauma).
Case Rep Med. 2010 Dec 16; 2010:207152
Introduction A species distribution is determined by physical (e.g. temperature, rainfall patterns), biological (e.g. food availability, competition, predation) and historical factors. Establishing the importance of each factor is essential to generate accurate predictive models of distribution and to estimate expected changes in distribution and conservation status, given particular pressures (e.g. present climate change). In the case of organisms with complicated life cycles, each stage may respond to different factors or may show a response with a different intensity or threshold. Accordingly, the requirements of each stage must be considered. Bioclimatic belts are the result of all physical variables affecting the landscape, since they are defined by thermal indexes, rainfall patterns and plant communities ( Rivas-Martínez 1987 ). Moreover, these bioclimatic units correspond to an altitudinal zonation. Therefore, bioclimatic belts might predict suitable habitats for lotic species, since they involve major factors defining a river, such as altitude, slope, temperature and rainfall pattern, which will ultimately define current velocity, amount of water, substrate and level of dissolved oxygen. Calopteryx Leach (Odonata: Calopterygidae) damselflies are excellent models to study the potential use of bioclimatic belts predicting distributions. They have a widespread distribution, their ecological requirements are well-known and, due to their conspicuousness, extensive data are available. Moreover, they display a high variability, showing a large number of subspecies or local forms ( Askew 2004 ) and their phylogeny has been the object of several genetic studies (e.g. Dumont et al. 2005 ). Thus, their biogeography and variability may be discussed using present distributional ranges. Furthermore, sexual selection processes have received a great deal of attention in the family Calopterygidae. 
Specific discrimination is based on the recognition of secondary sexual traits during a complex courtship ritual. In general, secondary traits give information about the bearer's physical condition ( Andersson 1994 ). In this family, some of the secondary sexual traits seem to be clearly dependent on male condition ( Grether 1996b ) and are related to greater sexual fitness in different aspects ( Grether 1996a , b ; Rantala et al. 2000 ; Siva-Jothy 2000 ; Córdoba-Aguilar 2002 ; Contreras-Garduño et al. 2006 , 2007 ). In fact, sexual selection on secondary sexual traits plays a major role in specific divergence ( Svensson et al. 2006 ). Intra- and interspecific sexual interactions are especially important in this family: male territorial contests among conspecifics ( Grether 1996b ; Córdoba-Aguilar 2002 ) or heterospecifics ( Tynkkynen et al. 2004 , 2005 , 2006 ) and female mate choice during courtship ( Siva-Jothy 1999 ). During these processes, secondary sexual characters are shown to the opponent or to the potential mate. Moreover, other selective forces are known, such as conspicuousness to predators ( Grether 1997 ; Svensson and Friberg 2007 ) and prey ( Grether and Grey 1996 ) and a trade-off with immune response ( Siva-Jothy 2000 ). Interspecific interactions (male-male competition and female mate choice) are especially relevant for our study, since they are modulated by the relative abundances of each species in sympatry ( Tynkkynen et al. 2004 , 2005 , 2006 ). These sexual selection processes may lead to local displacement of secondary sexual traits ( Waage 1979 ; Tynkkynen et al. 2004 , 2005 , 2006 ) (see ‘evolutionary implications’). The geography of the Iberian Peninsula creates a great diversity of lotic habitats, with differences between the Eurosiberian and Mediterranean regions, and, in the latter, between areas of mountain, plateau or coast. 
As Iberian Calopteryx species have different ecological requirements (see ‘study species’), an associated distribution with respect to the bioclimatic belts may be expected, which will ultimately be reflected in the spatial frequency of each species. Together with species distribution, a higher relative frequency would involve a higher relative abundance of the species, which deeply influences interspecific interactions. Some forms and subspecies have been described on the Iberian Peninsula (see ‘study species’), which might be explained by interspecific sexual interactions. The purpose of this study was to: 1) test the use of bioclimatic units in the preliminary prediction of distributions of Iberian Calopteryx species; 2) investigate whether differences in relative abundances exist on the Iberian Peninsula, in order to explain the variability of Iberian species from a sexual selection perspective; and 3) discuss paleobiogeography and future distributions within a global climate change context.
Materials and Methods Study species Three species of Calopteryx inhabit the Iberian-Balearic region: Calopteryx virgo meridionalis Sélys, 1873, Calopteryx xanthostoma (Charpentier, 1825) and Calopteryx haemorrhoidalis (Vander Linden, 1825). The distribution and habitat range of these species coincide partially and they frequently co-occur. Habitat selection seems to be mainly determined by larval requirements, defined principally by water temperature ( Schütte and Schrimpf 2002 ), which strongly influences the global distribution of odonates ( Corbet 1999 ). C. haemorrhoidalis appears in well-oxygenated, clean and rather fast-flowing streams and rivers ( Grand and Boudot 2006 ); C. virgo meridionalis occurs in cold, fast-flowing streams and rivers, with abundant waterside vegetation ( Dijkstra and Lewington 2006 ; Grand and Boudot 2006 ); C. xanthostoma inhabits sunny, rather slow-flowing rivers, with finer sediment and floating hydrophytes, even those with pronounced drought periods ( Goodyear 2000 ; Grand and Boudot 2006 ). C. xanthostoma larvae tolerate higher water temperatures and lower oxygen concentrations than C. virgo larvae ( Carchini and Rota 1985 ; Ferreras-Romero 1988 ). Secondary sexual traits are conspicuous and plastic. In the Iberian species, males show a pigmented wing spot ( Figure 1 ) and have the last three abdominal sternites specifically pigmented (reddish in C. virgo meridionalis , yellow in C. xanthostoma and carmine in C. haemorrhoidalis ). Although all these traits are shown to the potential mate during courtship, apparently only wing pigmentation plays a role in specific discrimination ( Svensson et al. 2007 ). Females are specifically distinguished by the relative position of the pseudopterostigma on the wing and by wing pigmentation ( Figure 1 ). Wing pigmentation is used for specific recognition and discrimination by males ( Beukema 2004 ; Svensson et al. 2007 ). However, other species recognition cues might also be used by males, at least by C. 
virgo males ( Svensson et al. 2007 ). The pseudopterostigma might also play a role in species discrimination (Outomuro D and Ocharan FJ, unpublished observations). Some latitudinal differences in secondary sexual traits have been recorded in Spain for the three species. From the northern slopes of the Cantabrian range to the Sistema Central range, males of C. virgo meridionalis and C. xanthostoma have significantly, proportionally more pigmented wings southwards. C. virgo meridionalis females show greater wing pigmentation southwards, while C. xanthostoma females have a shorter pseudopterostigma northwards ( Ocharan Larrondo 1987 ; Outomuro D and Ocharan FJ, in prep.). Moreover, C. haemorrhoidalis has two subspecies on the Iberian Peninsula: C. h. asturica from the Cantabrian Eurosiberian region and C. h. haemorrhoidalis from the rest of the Peninsula ( Ocharan 1983 ). Distribution maps A bibliographic review of Calopteryx Iberian-Balearic records was carried out (see Appendix 1). Only records to which reliable 10 × 10 km UTM coordinates could be assigned were used. References to Calopteryx splendens (Harris, 1782) and Calopteryx virgo (Linnaeus, 1758) (non-Iberian taxa) were considered as C. xanthostoma and C. virgo meridionalis records; C. haemorrhoidalis subspecies were not taken into account. Furthermore, our own unpublished records from several Spanish regions were included (principally from the Segura river basin; see Appendix 2). The distribution maps obtained must be understood as the known (not the real) species distribution, since many Iberian areas have scarce or no recorded data. Grid density in an area is not directly related to species frequency, but to sampling effort; since this effort is equivalent for the three species, the relative values remain valid. Data were introduced into a geo-referenced matrix as 10 × 10 km UTM coordinates, and species presence maps were obtained using ArcGis 9.1 (ESRI, Redlands, U.S.A.). 
A χ² test was used to test whether the distribution of each species was random with respect to bioclimatic belts. According to Rivas-Martínez ( 1987 ), two biogeographical regions may be recognised on the Iberian Peninsula: the Mediterranean and Eurosiberian regions, the boundary between which is located along the southern slopes of the Cantabrian and Pyrenean ranges and Galicia/northwest Portugal. The same author recognises different bioclimatic belts ( Figure 2 ) defined by thermal indexes. The Mediterranean region shows five belts on the Iberian Peninsula (from lower to higher altitude): thermo-, meso-, supra-, oro- and cryoromediterranean. The Eurosiberian region shows four belts: coline, montane, subalpine and alpine. Each belt is divided into horizons (the thermocoline horizon may also be considered a bioclimatic belt). Although bioclimatic belts are altitudinally zoned, altitude is not a variable used in their definition as bioclimatic units; therefore the altitude ranges are not similar or equivalent, especially between the two regions. Distribution maps for Spain were superimposed on the map of bioclimatic belts, thus obtaining the UTM presence grid for each belt. Since more than one belt may occur inside each grid, there are more presence data for each belt than the total number of grids. Presence-corrected frequencies for each belt were calculated as the quotient between the total number of species presence grids in the belt and the total number of grids occupied by any of the three species. Finally, potential distribution maps are presented for each species on the Iberian Peninsula, including data from Portugal. These maps might indicate theoretical population fragmentation for each species. Due to the territorial behaviour of Calopteryx males (and the homing behaviour of females), dispersal in these species is very low, although a small part of the population may disperse over relatively longer distances (e.g. 
more than 1 km, Stettmer 1996 ; Schütte et al. 1997 ). It may therefore be assumed that greater population fragmentation also involves greater population isolation. However, these maps must be understood as potential maps, in which other biological, physical or chemical factors, such as predation, microclimatic effects, water quality or human impacts, should not be forgotten.
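The two calculations described in the Methods (presence-corrected frequencies per belt and a χ² test of non-random distribution across belts) can be sketched as follows. This is a minimal illustration: the belt names and grid counts below are hypothetical placeholders, not the study's data, and the study's actual test used d.f. = 8 over nine belts.

```python
# Sketch of the Methods' calculations; counts here are illustrative only.

def corrected_frequencies(presence):
    """Presence-corrected frequency per belt: the quotient between the
    number of species-presence grids in that belt and the total number
    of occupied grids."""
    total = sum(presence.values())
    return {belt: n / total for belt, n in presence.items()}

def chi2_statistic(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical 10 x 10 km UTM presence-grid counts per bioclimatic belt.
presence = {
    "coline": 120,
    "montane": 95,
    "supramediterranean": 60,
    "mesomediterranean": 20,
    "thermomediterranean": 5,
}
freqs = corrected_frequencies(presence)

# Null hypothesis: presences are spread uniformly across the belts.
observed = list(presence.values())
expected = [sum(observed) / len(observed)] * len(observed)
stat = chi2_statistic(observed, expected)
# With d.f. = 4 the chi-squared critical value at P = 0.001 is about 18.47;
# a statistic far above it rejects a random distribution across belts.
```

With these placeholder counts the statistic is 157.5, far beyond the 0.001 critical value, mirroring the qualitative result reported below that none of the species is distributed randomly with respect to bioclimatic belts.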
Results and Discussion Species distribution None of the species showed a random distribution with respect to bioclimatic belts (χ² test for each species; d.f. = 8; P < 0.001). C. virgo meridionalis ( Figure 2A and Table 1 ) is frequent in the Eurosiberian region (63.7% of Spanish records, although this region accounts for only 15% of Spanish territory), both in the coline (52.1% of Eurosiberian records) and montane belts (41.2%). It is very scarce in the subalpine and alpine belts, since these only occur on the high summits of the Cantabrian and Pyrenean ranges; its presence there might be associated with mountain valleys, where a low slope allows suitable waters for larval development. In the Mediterranean region, it is associated with major mountain ranges, generally appearing in the top horizon of the supramediterranean belt (50.5%), close to the oromediterranean belt. In the coline, montane and supramediterranean belts, suitable conditions for larval development occur, i.e., cold, rapidly-flowing rivers with abundant vegetation and rocky beds ( Goodyear 2000 ; Dijkstra and Lewington 2006 ; Grand and Boudot 2006 ). In the Mediterranean region, only the presence of mountains permits these conditions. The southernmost Iberian populations occur in Los Alcornocales Natural Park (Cadiz), with meso- and thermomediterranean belts not associated with mountain ranges. This area has a Mediterranean climate with an Atlantic influence (high rainfall) ( Carpintero et al. 2000 ) that allows suitable running waters for this species. The potential distribution map ( Figure 2A ) shows a typical oceanic species with a low frequency in the Mediterranean region. Three major groups of Iberian populations may be distinguished: northern, central (populations in the central mountain ranges) and southern groups. The northern group is continuously distributed over the Eurosiberian region, where the species shows a high relative frequency of presence (179 UTM grids, versus 83 for C. 
xanthostoma and 72 for C. haemorrhoidalis ). Since the larval habitat is widespread, it may be assumed that a higher relative frequency involves a higher relative abundance of the species. Unfortunately, data are not available on population size to support this assumption. In the Mediterranean region, the relative frequency and abundance are lower (138 UTM grids, versus 247 for C. xanthostoma and 302 for C. haemorrhoidalis ). In the central group, a certain degree of isolation may be observed among these populations and with respect to the Eurosiberian group; both species frequency and abundance are intermediate. The southern group is widely fragmented and C. virgo meridionalis only appears in mountain streams supporting small populations; for that reason, both species frequency and relative abundance are lower compared to the central and northern groups. C. xanthostoma ( Figure 2B and Table 1 ) is principally distributed over the Mediterranean region (84.6% of Spanish records), where it is more frequent in the northern half; it is clearly associated with the supra- (47.9% of Mediterranean records) and mesomediterranean (41.4%) belts. C. xanthostoma prefers less fast-flowing rivers (lower slope) than C. virgo meridionalis. Moreover, this species withstands lower oxygen levels and higher temperatures ( Carchini and Rota 1985 ; Ferreras-Romero 1988 ). The supra- and mesomediterranean belts provide suitable conditions for these rivers. When it co-occurs with C. virgo meridionalis , its relative frequency in the mesomediterranean belt is higher. At higher altitudes (oromediterranean belt), the slope is too pronounced, while temperature may be too high and oxygen insufficient at lower altitudes (thermomediterranean belt). In the Cantabrian Eurosiberian region, C. xanthostoma is distributed at lower altitudes than C. virgo meridionalis ; it is associated more with the coline belt. 
As the ecological requirements of these two species partially coincide ( Carchini and Rota 1985 ; Ferreras-Romero 1988 ), C. xanthostoma frequently co-occurs with C. virgo meridionalis in this region. Population isolation ( Figure 2B ) is much lower than in C. virgo meridionalis. In the northern half of the Iberian Peninsula (Duero and Ebro basins), a nearly continuous species distribution is observed, whereas populations appear to be more fragmented in the southern half. Isolation is not as clear as in C. virgo meridionalis since C. xanthostoma is less associated with mountain rivers. C. xanthostoma has a greater relative frequency and abundance than C. virgo meridionalis in the northern half of the Mediterranean region; C. virgo meridionalis is locally more frequent and abundant in mountain rivers. However, its relative frequency and abundance are much lower in the Eurosiberian region, especially in the Cantabrian area. C. haemorrhoidalis is widely distributed over the east and south of the Iberian Peninsula, as well as the Ebro basin, the Sistema Central range and the coastal strip of the Eurosiberian region ( Figure 2C ), presenting the typical distribution of a Mediterranean species. In the Mediterranean region, C. haemorrhoidalis ( Table 1 ) is principally associated with the meso- (55.2%) and supramediterranean (27.5%) belts, and to a lesser extent with the thermomediterranean belt (15.2%). This is probably because this species requires well-oxygenated streams ( Grand and Boudot 2006 ). In most cases, therefore, oxygen will not be sufficient where temperature is high and slope is low (thermomediterranean belt). In the Eurosiberian region, it only appears in the montane (44.9%) and coline (53.8%) belts. However, presence in the montane belt only occurs in the Pyrenees, whereas on the Cantabrian Coast (and in the northwest of the Peninsula) it only inhabits the coline belt. 
In the Cantabrian coline belt, it is associated with coastal thermal enclaves, which may be called the thermocoline belt, characterised by warm winters and marked oceanity, which imply little thermal amplitude between winter and summer ( Rivas-Martínez 1987 ). C. haemorrhoidalis is totally absent from the Northern Sub-plateau (Duero basin). It is the only Calopteryx species that inhabits the Balearic Islands (Majorca and Minorca), where it is associated with the mesomediterranean belt. There is a certain degree of population isolation ( Figure 2C ), distinguishing two major groups: 1) a Cantabrian group, confined to the thermocoline belt; and 2) the rest of the Iberian Peninsula (with more or less isolated populations). These are poorly connected by a narrow strip in the northwest of Spain. Relative frequency and abundance are much lower in the Eurosiberian region (restricted to the thermocoline belt) than for the other two species. The opposite situation occurs in the Mediterranean region. Corrected relative frequencies In the Eurosiberian region ( Figure 3A ), C. virgo meridionalis was the species with the highest relative frequency in all four belts. In contrast, C. haemorrhoidalis was the least frequent species (subalpine and montane belt data only refer to the Pyrenees). Species presence patterns related to bioclimatic belts and altitude may be observed. C. xanthostoma showed a maximum frequency in the montane belt, corresponding to medium river courses. C. haemorrhoidalis showed a maximum presence in the coline belt (in this case, the thermocoline belt), corresponding to low river courses. Results in the Mediterranean region ( Figure 3B ) were equivalent, though more marked, with a clear frequency gradation being obtained. C. virgo meridionalis showed a decrease from a maximum in the cryoromediterranean belt to a minimum in the thermomediterranean belt. C. haemorrhoidalis presented the opposite pattern. These results are due to the fact that C. 
virgo meridionalis inhabits headwater stretches, while C. haemorrhoidalis appears in lower and/or more thermal stretches. Since C. xanthostoma inhabits medium river courses, it showed a maximum presence in the intermediate belt (supramediterranean), decreasing in higher and lower belts. In the Eurosiberian region ( Figure 4A ), coexistence of the three species occurred in the coline belt (on the Cantabrian Coast) and the montane belt (in the Pyrenees). The most frequent species association was C. virgo meridionalis with C. xanthostoma. This is easily explained by the fact that C. haemorrhoidalis generally inhabits thermal coastal rivers (Cantabrian coastal strip) or lower altitudes (Pyrenees) in this region. In the Mediterranean region ( Figure 4B ), the highest coexistence values for the three species occurred in the meso-, supra- and oromediterranean belts. The most frequent association was C. haemorrhoidalis with C. xanthostoma , especially in the aforementioned belts. This is a logical finding, since these two species present higher relative frequencies than C. virgo meridionalis in the Mediterranean region. The association between C. virgo meridionalis and C. xanthostoma occurred in higher belts. Evolutionary implications The differences in relative abundances reported above may have a strong influence on interactions between species. As mentioned above, sexual selection processes may involve the displacement of secondary sexual characters in isolated taxa in sympatry (e.g. Waage 1979 ), in such a way that these traits are modified divergently. This would be produced by mistakes in specific recognition when secondary characters are poorly differentiated ( De Marchi 1990 ). The displacement supposes an energy saving in reproductive effort (mating, sexual harassment and interspecific aggression) ( Waage 1979 ; Mullen and Andrés 2007 ). 
Differences in relative abundance may create differential pressures on interspecific interactions that may in turn produce more or less noticeable secondary sexual character displacement depending on the abundance of the species that displaces the other ( Tynkkynen et al. 2004 , 2005 , 2006 ). In Central Europe, C. virgo virgo males were more aggressive against the C. splendens males with larger wing spots, causing a displacement of this trait in the latter. Moreover, the degree of displacement depended on C. virgo virgo relative abundance. Wherever C. virgo virgo was more abundant, C. splendens had a smaller wing spot ( Tynkkynen et al. 2004 , 2005 , 2006 ). This may be applied to females, though in terms of sexual harassment and interspecific matings. In fact, heterospecific matings are common, although reciprocal hybridization occurs at a low frequency ( Tynkkynen et al. 2008 ). Assuming the phylogenetic equivalence between Central European and Iberian species ( Weekers et al. 2001 ), C. virgo meridionalis males might displace C. xanthostoma male secondary traits depending on their relative abundances. Female phenotypes would be ‘reinforced’ where species abundance is lower. In fact, where each species is less abundant, a new different form or subspecies with modified secondary traits appears. Morphological differences found by Ocharan Larrondo ( 1987 ) in Iberian Calopteryx populations may be due to a character displacement phenomenon. This would be produced in species populations with low relative abundance. The aforementioned author described C. virgo meridionalis females with a dark wing phenotype in the central Iberian Peninsula (Mediterranean region), where this species has a low relative abundance. Moreover, he described C. xanthostoma females with reduced or no pseudopterostigma on the northern slopes of the Cantabrian range (Eurosiberian region), where this species also presents a low relative abundance. Finally, Ocharan ( 1983 ) described C. 
haemorrhoidalis asturica , a subspecies consigned to Cantabrian Eurosiberian populations (thermocoline belt), where its relative abundance is also low. A recent study focussing on C. virgo meridionalis and C. xanthostoma showed these differences once again in Iberian populations, from the northern slopes of the Cantabrian range to the central Iberian Peninsula (D. Outomuro D and Ocharan FJ, in prep.). In fact, coloration differences were found in secondary traits not only in females, but also in wing spot extension in males, showing an increase of pigmentation southwards. Differences show a clinal variation supported by clinal relative abundance. Furthermore, other sexual character differences were found in areas where the three Iberian Calopteryx species coexist, suggesting a possible role of C. haemorrhoidalis in character displacement on C. virgo meridionalis (Outomuro D and Ocharan FJ, unpublished observations). These described variations may not be clearly explained by environmental factors (e.g. altitude) or other hypotheses for melanism such as thermoregulation, cryptic coloration, protection from ultraviolet radiations, disease resistance, etc. (Outomuro D and Ocharan FJ, unpublished observations). However, further studies are necessary to explain these forms or subspecies inhabiting the Iberian Peninsula, especially genetic studies between Iberian populations, since recent works are insufficient and too general (e. g. Weekers et al. 2001 ). Biogeography and implications in a climate change context During the last major glaciation (Würm glaciation, Pleistocene), the western Mediterranean would have been one of the refugia for the genus Calopteryx. After this period, Calopteryx taxa would have reinvaded western Europe from the western Mediterranean refugium and centralwestern Asia refugium/refugia ( Weekers et al. 2001 ). C. virgo meridionalis described distribution and other facts support the hypothesis that C. 
virgo meridionalis also stayed in the western Mediterranean refugium during the Pleistocene (likewise C. xanthostoma and C. haemorrhoidalis : Dumont et al. 2005 ): 1) existence of relict southern populations (also in Morocco, see below), corresponding to the Iberia refugium, and 2) excluding distributions of C. virgo meridionalis and C. virgo virgo and presence of intermediate forms in sympatry ( Maibach 1986 ). The separation of these two subspecies from an ancient one might have been due to isolation during the last major glaciation. C. virgo meridionalis shows typically relict populations. In the southernmost regions of the Iberian Peninsula, it only persists in microclimatic refugia; for instance, spots in the south and at high altitudes in the southern mountain ranges. Moreover, in Africa, only two relict locations are known in Morocco (Riff mountains over 1000 m: Jacquemin and Boudot 1999 ); two old records (Sélys 1871; Martin 1910 ) in northern Algeria have not been reconfirmed ( Samraoui and Menaï 1999 ). Recent dispersion from southern Europe to northern Africa is unlikely, since southern Iberian populations sustain a low number of individuals (Ferreras-Romero M, University of Pablo Olavide, Seville, Spain, personal communication). Mediterranean Peninsulas might have acted as glacial refugia during Würm glaciation. Later dispersion might have involved a clash with congeneric species in Central Europe. That is the case of Calopteryx Iberian taxa (though not with C. haemorrhoidalis ). Many species distributions are subdivided by narrow hybrid zones which would have been produced by the clash between two divergent genomes, both expanding their distributional range from glacial refugia. One such hybrid zone is located in central-southern France ( Hewitt 2000 ). An introgression zone between C. xanthostoma and C. splendens based on morphological characters has been described in this hybrid zone ( Dumont et al. 1993 ), and another may possibly exist between C. 
virgo meridionalis and C. virgo virgo , since their distribution is continuous from Central Europe to the Iberian Peninsula. Maibach ( 1986 ) described intermediate forms between these two subspecies in Central France, where the contact zone is supposedly located. Unfortunately, to our knowledge, no more information on introgression zones between C. virgo subspecies has been reported in France. At least another C. virgo subspecies has been described, named as Calopteryx virgo festiva (Brullé, 1832), which inhabits the southern Balkans and Turkey ( Dijkstra and Lewington 2006 ). The Balkans also acted as a glacial refugium and a post-glacial source of species for eastern and western areas ( Hewitt 2000 ). C. virgo meridionalis and C. virgo festiva might have been dispersed from their refugia (Iberia and the Balkans) to Europe after the Würm glaciation and would have clashed against the nominal subspecies C virgo virgo originating from Asian réfugia. Several C. splendens subspecies have likewise been reported, most of which come from southern Mediterranean peninsulas, forming introgression strips with C. splendens splendens ( Grand and Boudot 2006 ). It is believed that many species will change their distributional range to higher altitudes and/or latitudes as a response of climate warming. Headwater streams are also sensitive to climate change and some scarce macroinvertebrate taxa might run the risk of local extinction due to an increase in winter temperatures ( Durance and Ormerod 2007 ). Expansion northwards of the distributional range of 34 non-migratory Odonata species was documented in Great Britain between 1960 and 1995, apparently as a result of climate change ( Hickling et al. 2005 ). Faunistic references are increasingly more frequent nowadays, supporting northern expansion of some Odonata species, as well as an increase in migratory flows to the Britain Isles. 
However, a possible increase in sampling efforts should be taken into account; so new data for a species do not necessarily mean that it did not previously exist in those areas ( Askew 2004 ). Distributional range expansions to higher latitudes or/and latitudes in the northern hemisphere have not only been documented in dragonflies, but also in butterflies, birds, lichens, alpine flora, forests and even in a lagomorph species (for a review, see Parmesan 2006 ). A general increase in temperature and decrease in rainfall level is predicted for the next 100 years in Spain, less pronounced in coastal zones and islands ( Castro et al. 2005 ). In keeping with Iberian Calopteryx ecological requirements and the distributions reported in this paper, the current climate change may severely affect their populations. Effects may be especially serious in the least thermal species, C. virgo meridionalis. C. virgo seems to be adapted to relatively cold waters, since it grows faster at low temperatures and has a higher standard metabolism than C. splendens ( Schütte and Schrimpf 2002 ). Therefore, Calopteryx virgo meridionalis populations might be displaced to higher latitudes and/or higher altitudes. For instance, distributional range modification was clearly observed in Great Britain between 1960 and 1995: northwards expansion was higher in C. virgo ( Hickling et al. 2005 ), since C. splendens prefers higher temperatures. Southern peninsular populations of C. virgo meridionalis , which are severely fragmented, are especially threatened by climate warming. A decrease in distributional range and possible local extinctions may be expected. These new vacant habitats (free of competitors and with new optimal conditions) might be occupied by C. xanthostoma or C. haemorrhoidalis (more thermal species). C. xanthostoma occurs in medium river courses, so its expansion is not as clear as that of C. haemorrhoidalis. A total reorganization of species distributions is likely. 
Intra- and interspecific interactions are especially marked in this family, so shifts in species distribution may involve profound changes in these interactions, affecting also interspecific dynamics. However, genetic studies need to be conducted to clarify the level of hybridization and genetic diversity in isolated populations, whose likelihood of survival might be compromised. The use of bioclimatic belts to predict species distributions may be applied to other lotic species, especially endangered species. Although data for these species are usually scarce and disperse (except for some countries with traditional monitoring programs), this method may be applied to obtain preliminary results of species distributions. Specific variables should be considered to create accurate predictive models. However, not only physical variables may predict a species distribution, but also the association with other species, for which more data might be available. This association may therefore be used as a first step to assess the appropriate conservation status for little-known species. In addition, the obtained distributions and the association with bioclimatic belts may be used to study temporal series, consider past distributions and predict future changes in species distribution (especially outstanding within a global climate change context). Finally, a species distribution and its relation with other related species distributions must be considered in terms of evolutionary biology, considering its role as a cause of interpopulation variability and ultimately in speciation.
Results and Discussion

Species distribution

None of the species showed a random distribution with respect to bioclimatic belts (χ² test for each species; d.f. = 8; P < 0.001). C. virgo meridionalis (Figure 2A and Table 1) is frequent in the Eurosiberian region (63.7% of Spanish records, despite the fact that this region comprises only 15% of Spanish territory), both in the coline (52.1% of Eurosiberian records) and montane belts (41.2%). It is very scarce in the subalpine and alpine belts, since these occur only on the high summits of the Cantabrian and Pyrenean ranges; its presence there might be associated with mountain valleys, where a gentle slope provides waters suitable for larval development. In the Mediterranean region, it is associated with the major mountain ranges, generally appearing in the upper horizon of the supramediterranean belt (50.5%), close to the oromediterranean belt. The coline, montane and supramediterranean belts offer suitable conditions for larval development, i.e., cold, rapidly flowing rivers with abundant vegetation and rocky beds (Goodyear 2000; Dijkstra and Lewington 2006; Grand and Boudot 2006). In the Mediterranean region, only the presence of mountains permits these conditions. The southernmost Iberian populations occur in Los Alcornocales Natural Park (Cadiz), in meso- and thermomediterranean belts not associated with mountain ranges. This area has a Mediterranean climate with an Atlantic influence (high rainfall) (Carpintero et al. 2000) that sustains running waters suitable for this species. The potential distribution map (Figure 2A) shows a typical oceanic species with a low frequency in the Mediterranean region. Three major groups of Iberian populations may be distinguished: northern, central (populations in the central mountain ranges) and southern groups. The northern group is continuously distributed over the Eurosiberian region, where the species shows a high relative frequency of presence (179 UTM grids, versus 83 for C.
xanthostoma and 72 for C. haemorrhoidalis). Since the larval habitat is widespread, a higher relative frequency may be assumed to involve a higher relative abundance of the species; unfortunately, no population-size data are available to support this assumption. In the Mediterranean region, the relative frequency and abundance are lower (138 UTM grids, versus 247 for C. xanthostoma and 302 for C. haemorrhoidalis). In the central group, a certain degree of isolation may be observed among these populations and with respect to the Eurosiberian group; both species frequency and abundance are intermediate. The southern group is widely fragmented, and C. virgo meridionalis appears only in mountain streams supporting small populations; for that reason, both frequency and relative abundance are lower compared to the central and northern groups. C. xanthostoma (Figure 2B and Table 1) is principally distributed over the Mediterranean region (84.6% of Spanish records), where it is more frequent in the northern half; it is clearly associated with the supra- (47.9% of Mediterranean records) and mesomediterranean (41.4%) belts. C. xanthostoma prefers less fast-flowing rivers (lower slope) than C. virgo meridionalis. Moreover, this species withstands lower oxygen levels and higher temperatures (Carchini and Rota 1985; Ferreras-Romero 1988). The supra- and mesomediterranean belts provide suitable conditions for such rivers. Where it co-occurs with C. virgo meridionalis, its relative frequency in the mesomediterranean belt is higher. At higher altitudes (oromediterranean belt) the slope is too pronounced, while at lower altitudes (thermomediterranean belt) the temperature may be too high and oxygen insufficient. In the Cantabrian Eurosiberian region, C. xanthostoma occurs at lower altitudes than C. virgo meridionalis; it is associated more with the coline belt.
As the ecological requirements of these two species partially coincide (Carchini and Rota 1985; Ferreras-Romero 1988), C. xanthostoma frequently co-occurs with C. virgo meridionalis in this region. Population isolation (Figure 2B) is much lower than in C. virgo meridionalis. In the northern half of the Iberian Peninsula (Duero and Ebro basins), a nearly continuous species distribution is observed, whereas populations appear more fragmented in the southern half. Isolation is not as marked as in C. virgo meridionalis, since C. xanthostoma is less tied to mountain rivers. C. xanthostoma has a greater relative frequency and abundance than C. virgo meridionalis in the northern half of the Mediterranean region, although C. virgo meridionalis is locally more frequent and abundant in mountain rivers. However, its relative frequency and abundance are much lower in the Eurosiberian region, especially in the Cantabrian area. C. haemorrhoidalis is widely distributed over the east and south of the Iberian Peninsula, as well as the Ebro basin, the Sistema Central range and the coastal strip of the Eurosiberian region (Figure 2C), presenting the typical distribution of a Mediterranean species. In the Mediterranean region, C. haemorrhoidalis (Table 1) is principally associated with the meso- (55.2%) and supramediterranean (27.5%) belts, and to a lesser extent with the thermomediterranean belt (15.2%). This is probably because this species requires well-oxygenated streams (Grand and Boudot 2006); in most cases, oxygen will be insufficient where temperature is high and slope is low (thermomediterranean belt). In the Eurosiberian region, it appears only in the montane (44.9%) and coline (53.8%) belts. However, presence in the montane belt occurs only in the Pyrenees, whereas on the Cantabrian Coast (and in the northwest of the Peninsula) it inhabits only the coline belt.
In the Cantabrian coline belt, it is associated with thermal coastal enclaves, which may be termed the thermocoline belt, characterised by warm winters and a marked oceanicity that imply a narrow thermal amplitude between winter and summer (Rivas-Martínez 1987). C. haemorrhoidalis is totally absent from the Northern Sub-plateau (Duero basin). This is the only Calopteryx species that inhabits the Balearic Islands (Majorca and Minorca), where it is associated with the mesomediterranean belt. There is a certain degree of population isolation (Figure 2C), with two major groups distinguishable: 1) a Cantabrian group, restricted to the thermocoline belt, and 2) the rest of the Iberian Peninsula (with more or less isolated populations). These are poorly connected by a narrow strip in the northwest of Spain. Relative frequency and abundance are much lower in the Eurosiberian region (restricted to the thermocoline belt) than for the other two species; the opposite situation occurs in the Mediterranean region.

Corrected relative frequencies

In the Eurosiberian region (Figure 3A), C. virgo meridionalis was the species with the highest relative frequency in all four belts. In contrast, C. haemorrhoidalis was the least frequent species (subalpine and montane belt data refer only to the Pyrenees). Species presence patterns related to bioclimatic belts and altitude may be observed. C. xanthostoma showed a maximum frequency in the montane belt, corresponding to medium river courses. C. haemorrhoidalis showed a maximum presence in the coline belt (in this case, the thermocoline belt), corresponding to low river courses. Results in the Mediterranean region (Figure 3B) were equivalent, though more marked, with a clear frequency gradation obtained. C. virgo meridionalis showed a decrease from a maximum in the cryoromediterranean belt to a minimum in the thermomediterranean belt; C. haemorrhoidalis presented the opposite pattern. These results are due to the fact that C.
virgo meridionalis inhabits headwater stretches, while C. haemorrhoidalis appears in lower and/or more thermal stretches. Since C. xanthostoma inhabits medium river courses, it showed a maximum presence in the intermediate (supramediterranean) belt, decreasing in higher and lower belts. In the Eurosiberian region (Figure 4A), coexistence of the three species occurred in the coline belt (on the Cantabrian Coast) and the montane belt (in the Pyrenees). The most frequent species association was C. virgo meridionalis with C. xanthostoma. This is easily explained by the fact that, in this region, C. haemorrhoidalis generally inhabits thermal coastal rivers (Cantabrian coastal strip) or lower altitudes (Pyrenees). In the Mediterranean region (Figure 4B), the highest coexistence values for the three species occurred in the meso-, supra- and oromediterranean belts. The most frequent association was C. haemorrhoidalis with C. xanthostoma, especially in the aforementioned belts. This is a logical finding, as these two species present higher relative frequencies than C. virgo meridionalis in the Mediterranean region. The association between C. virgo meridionalis and C. xanthostoma occurred in higher belts.

Evolutionary implications

The differences in relative abundance reported above may have a strong influence on interactions between species. As mentioned above, sexual selection may drive the displacement of secondary sexual characters when formerly isolated taxa come into sympatry (e.g. Waage 1979), modifying these traits divergently. This would result from species-recognition errors where secondary characters are poorly differentiated (De Marchi 1990). The displacement represents an energy saving in reproductive effort (mating, sexual harassment and interspecific aggression) (Waage 1979; Mullen and Andrés 2007).
Differences in relative abundance may create differential pressures on interspecific interactions, producing more or less noticeable secondary sexual character displacement depending on the abundance of the species that displaces the other (Tynkkynen et al. 2004, 2005, 2006). In Central Europe, C. virgo virgo males were more aggressive toward the C. splendens males with larger wing spots, causing a displacement of this trait in the latter. Moreover, the degree of displacement depended on C. virgo virgo relative abundance: wherever C. virgo virgo was more abundant, C. splendens had a smaller wing spot (Tynkkynen et al. 2004, 2005, 2006). A similar process may apply to females, though in terms of sexual harassment and interspecific matings. In fact, heterospecific matings are common, although reciprocal hybridization occurs at a low frequency (Tynkkynen et al. 2008). Assuming phylogenetic equivalence between Central European and Iberian species (Weekers et al. 2001), C. virgo meridionalis males might displace C. xanthostoma male secondary traits depending on their relative abundances. Female phenotypes would be ‘reinforced’ where species abundance is lower. In fact, where each species is less abundant, a distinct form or subspecies with modified secondary traits appears. Morphological differences found by Ocharan Larrondo (1987) in Iberian Calopteryx populations may be due to a character displacement phenomenon, produced in species populations with low relative abundance. That author described C. virgo meridionalis females with a dark wing phenotype in the central Iberian Peninsula (Mediterranean region), where this species has a low relative abundance. Moreover, he described C. xanthostoma females with a reduced or absent pseudopterostigma on the northern slopes of the Cantabrian range (Eurosiberian region), where this species likewise presents a low relative abundance. Finally, Ocharan (1983) described C.
haemorrhoidalis asturica, a subspecies restricted to Cantabrian Eurosiberian populations (thermocoline belt), where its relative abundance is also low. A recent study focusing on C. virgo meridionalis and C. xanthostoma showed these differences once again in Iberian populations, from the northern slopes of the Cantabrian range to the central Iberian Peninsula (Outomuro D and Ocharan FJ, in prep.). In fact, coloration differences were found in secondary traits not only in females but also in male wing spot extension, showing an increase in pigmentation southwards. These differences show a clinal variation, paralleled by a clinal change in relative abundance. Furthermore, other sexual character differences were found in areas where the three Iberian Calopteryx species coexist, suggesting a possible role of C. haemorrhoidalis in character displacement on C. virgo meridionalis (Outomuro D and Ocharan FJ, unpublished observations). These variations may not be clearly explained by environmental factors (e.g. altitude) or by other hypotheses for melanism such as thermoregulation, cryptic coloration, protection from ultraviolet radiation, or disease resistance (Outomuro D and Ocharan FJ, unpublished observations). However, further studies are necessary to explain these forms or subspecies inhabiting the Iberian Peninsula, especially genetic studies of Iberian populations, since recent works are insufficient and too general (e.g. Weekers et al. 2001).

Biogeography and implications in a climate change context

During the last major glaciation (Würm glaciation, Pleistocene), the western Mediterranean would have been one of the refugia for the genus Calopteryx. After this period, Calopteryx taxa would have reinvaded western Europe from the western Mediterranean refugium and central-western Asian refugium/refugia (Weekers et al. 2001). The described distribution of C. virgo meridionalis and other facts support the hypothesis that C.
virgo meridionalis also stayed in the western Mediterranean refugium during the Pleistocene (as did C. xanthostoma and C. haemorrhoidalis: Dumont et al. 2005): 1) the existence of relict southern populations (also in Morocco, see below), corresponding to the Iberian refugium, and 2) the mutually exclusive distributions of C. virgo meridionalis and C. virgo virgo and the presence of intermediate forms in sympatry (Maibach 1986). The separation of these two subspecies from an ancestral one might have been due to isolation during the last major glaciation. C. virgo meridionalis shows typically relict populations. In the southernmost regions of the Iberian Peninsula, it persists only in microclimatic refugia, for instance isolated spots in the south and at high altitudes in the southern mountain ranges. Moreover, in Africa, only two relict locations are known in Morocco (Rif Mountains above 1000 m: Jacquemin and Boudot 1999); two old records (Sélys 1871; Martin 1910) in northern Algeria have not been reconfirmed (Samraoui and Menaï 1999). Recent dispersion from southern Europe to northern Africa is unlikely, since southern Iberian populations sustain low numbers of individuals (Ferreras-Romero M, Pablo de Olavide University, Seville, Spain, personal communication). The Mediterranean peninsulas might have acted as glacial refugia during the Würm glaciation, and later dispersion might have involved a clash with congeneric species in Central Europe. That is the case for the Iberian Calopteryx taxa (though not for C. haemorrhoidalis). Many species distributions are subdivided by narrow hybrid zones produced by the clash between two divergent genomes, both expanding their distributional ranges from glacial refugia. One such hybrid zone is located in central-southern France (Hewitt 2000). An introgression zone between C. xanthostoma and C. splendens, based on morphological characters, has been described in this hybrid zone (Dumont et al. 1993), and another may possibly exist between C.
virgo meridionalis and C. virgo virgo, since their distribution is continuous from Central Europe to the Iberian Peninsula. Maibach (1986) described intermediate forms between these two subspecies in central France, where the contact zone is supposedly located. Unfortunately, to our knowledge, no further information on introgression zones between C. virgo subspecies has been reported in France. At least one other C. virgo subspecies has been described, Calopteryx virgo festiva (Brullé, 1832), which inhabits the southern Balkans and Turkey (Dijkstra and Lewington 2006). The Balkans also acted as a glacial refugium and a post-glacial source of species for eastern and western areas (Hewitt 2000). C. virgo meridionalis and C. virgo festiva might have dispersed from their refugia (Iberia and the Balkans) into Europe after the Würm glaciation and would have clashed with the nominate subspecies C. virgo virgo originating from Asian refugia. Several C. splendens subspecies have likewise been reported, most of which come from the southern Mediterranean peninsulas, forming introgression strips with C. splendens splendens (Grand and Boudot 2006). It is believed that many species will shift their distributional ranges to higher altitudes and/or latitudes in response to climate warming. Headwater streams are also sensitive to climate change, and some scarce macroinvertebrate taxa might run the risk of local extinction due to an increase in winter temperatures (Durance and Ormerod 2007). Northward expansion of the distributional ranges of 34 non-migratory Odonata species was documented in Great Britain between 1960 and 1995, apparently as a result of climate change (Hickling et al. 2005). Faunistic reports supporting the northward expansion of some Odonata species, as well as an increase in migratory flows to the British Isles, are increasingly frequent nowadays.
However, a possible increase in sampling effort should be taken into account: new data for a species do not necessarily mean that it did not previously exist in those areas (Askew 2004). Distributional range expansions to higher latitudes and/or altitudes in the northern hemisphere have been documented not only in dragonflies, but also in butterflies, birds, lichens, alpine flora, forests and even a lagomorph species (for a review, see Parmesan 2006). A general increase in temperature and decrease in rainfall, less pronounced in coastal zones and islands, is predicted for the next 100 years in Spain (Castro et al. 2005). Given the ecological requirements of the Iberian Calopteryx species and the distributions reported in this paper, the current climate change may severely affect their populations. Effects may be especially serious in the least thermophilic species, C. virgo meridionalis. C. virgo seems to be adapted to relatively cold waters, since it grows faster at low temperatures and has a higher standard metabolism than C. splendens (Schütte and Schrimpf 2002). Therefore, Calopteryx virgo meridionalis populations might be displaced to higher latitudes and/or higher altitudes. For instance, distributional range modification was clearly observed in Great Britain between 1960 and 1995: northward expansion was greater in C. virgo (Hickling et al. 2005), since C. splendens prefers higher temperatures. Southern peninsular populations of C. virgo meridionalis, which are severely fragmented, are especially threatened by climate warming. A decrease in distributional range and possible local extinctions may be expected. The newly vacant habitats (free of competitors and with new optimal conditions) might be occupied by C. xanthostoma or C. haemorrhoidalis (more thermophilic species). C. xanthostoma occurs in medium river courses, so its expansion is not as clear as that of C. haemorrhoidalis. A total reorganization of species distributions is likely.
Intra- and interspecific interactions are especially marked in this family, so shifts in species distributions may involve profound changes in these interactions, also affecting interspecific dynamics. However, genetic studies need to be conducted to clarify the level of hybridization and the genetic diversity of isolated populations, whose likelihood of survival might be compromised. The use of bioclimatic belts to predict species distributions may be applied to other lotic species, especially endangered ones. Although data for such species are usually scarce and dispersed (except in some countries with traditional monitoring programs), this method may be applied to obtain preliminary estimates of species distributions. Specific variables should be considered to create accurate predictive models. Not only physical variables but also associations with other species, for which more data might be available, may predict a species' distribution. Such associations may therefore be used as a first step in assessing the appropriate conservation status of little-known species. In addition, the obtained distributions and their association with bioclimatic belts may be used to study temporal series, reconstruct past distributions and predict future changes in species distributions (especially relevant within a global climate change context). Finally, a species' distribution and its relation to the distributions of related species must be considered in terms of evolutionary biology, given their role as a cause of interpopulation variability and, ultimately, of speciation.
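The belt-association analysis reported above (a χ² goodness-of-fit per species, d.f. = 8) can be sketched as follows. The record counts and belt area shares below are invented for illustration only; they are not the study's data.

```python
# Illustrative sketch of the belt-association test: a chi-square
# goodness-of-fit of species records against the share of territory in
# each bioclimatic belt. Counts and area shares are hypothetical.

belts = ["coline", "montane", "subalpine", "alpine",
         "thermo", "meso", "supra", "oro", "cryoro"]   # 9 belts -> d.f. = 8

# Hypothetical records per belt for one species (10 x 10 km UTM grids)
observed = [120, 95, 5, 2, 10, 60, 140, 25, 3]

# Hypothetical fraction of the territory covered by each belt
area_share = [0.10, 0.08, 0.02, 0.01, 0.12, 0.35, 0.25, 0.05, 0.02]

n = sum(observed)
expected = [n * p for p in area_share]  # counts expected if records fell
                                        # at random across belts

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(belts) - 1

# Critical value of the chi-square distribution for d.f. = 8, alpha = 0.001
CRITICAL_0_001 = 26.12

print(f"chi2 = {chi2:.1f} (d.f. = {df})")  # prints: chi2 = 324.6 (d.f. = 8)
if chi2 > CRITICAL_0_001:
    print("distribution across belts is non-random (P < 0.001)")
```

With these invented numbers the statistic far exceeds the critical value, mirroring the qualitative result reported for all three Iberian Calopteryx species: records concentrate in some belts well beyond their territorial share.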
Associate Editor: James Miller was editor of this paper.

Abstract

Using bioclimatic belts as habitat and distribution predictors, the present study examines the implications of the potential distributions of the three Iberian Calopteryx Leach damselflies (Odonata: Calopterygidae), with the aim of investigating the possible consequences for interactions among the species from a sexual selection perspective and of discussing biogeographical patterns. To obtain the known distributions, the literature on this genus was reviewed and the resulting distributions were related to bioclimatic belts. Specific patterns related to bioclimatic belts were clearly observed in the Mediterranean region. The potential distribution maps and relative frequencies might involve latitudinal differences in relative abundances, C. virgo meridionalis Sélys being the most abundant species in the Eurosiberian region, C. xanthostoma (Charpentier) in the northern half of the Mediterranean region and C. haemorrhoidalis (Vander Linden) in the rest of this region. These differences might explain some previously described latitudinal differences in secondary sexual traits in the three species. Changes in relative abundances may modulate interactions among these species in terms of sexual selection and may produce sexual character displacement in this genus. The distribution and ecological requirements of C. virgo meridionalis explain its paleobiogeography as a species that took refuge in Iberia during the Würm glaciation. Finally, possible consequences for species distributions and interactions are discussed within a global climate change context.
Acknowledgements We want to thank all the people who provided us with bibliography for the distribution review, especially S. Ferreira, who sent us many papers with data from Portugal. DO holds a research fellowship from Fundación para el Fomento en Asturias de la Investigación Científica Aplicada y la Tecnología (FICYT).
CC BY
no
2022-01-12 16:13:47
J Insect Sci. 2010 Jun 11; 10:61
oa_package/6c/8b/PMC3014802.tar.gz
PMC3014803
20572783
Introduction The achievement of a complete inventory of the earth's biota remains an urgent priority for biodiversity conservation. One of the main challenges is exploring the wilder regions of the world where intact habitats of high conservation value remain unknown. Arid areas are a major terrestrial habitat among these environments ( Polis 1991 ). In South America, deserts are the largest macro-habitat, covering more than 57.3% of the surface area ( Mares 1992 ). The dry neotropics support considerable biological diversity, though they have received little attention in comparison with the wet, tropical forests ( Bestelmeyer and Wiens 1996 ). Patagonia is a large xeric biome located in the southern tip of South America, remarkably understudied despite the fact that some of the original components and functions of this arid ecosystem are still preserved. One of the largest conservation units of arid ecosystems in Argentina is the Natural Protected Area Península Valdés, located in the northeastern zone of this biome. Since 1999, this area has been included in the UNESCO World Heritage List. Invertebrates represent an essential part of ecosystems ( Seymour and Dean 1999 ) having great abundances and species richness in almost all habitats ( James et al. 1999 ; Andersen et al. 2004 ; Corley et al. 2006 ), occurring at all levels of the food web ( Samways 1994 ; Seymour and Dean 1999 ; Andersen et al. 2004 ), and playing vital roles in the structure and fertility of soils, the pollination of flowering plants, nutrient cycling, and in the decomposition of organic material and predation ( Greenslade 1992 ; Ayal et al. 2007 ). Furthermore, arthropods can be used for monitoring environmental changes because of their high species abundances, richness, and habitat fidelity ( Andersen and Majer 2004 ). 
Terrestrial arthropods are even better monitors than vegetation because of their rapid response to habitat changes and the capability of generating a finer environmental classification than vascular plants or vertebrates ( Samways 1994 ; Seymour and Dean 1999 ; Andersen et al. 2004 ). In arid regions, invertebrates are the most abundant animals ( Crawford 1986 ; Ayal et al. 2007 ). In these habitats, arthropods play key roles (principally in and above the soil) as decomposers, herbivores, granivores, and predators, controlling nutrient and energy flow through trophic levels in the food chain ( Crawford 1986 ; Polis 1991 ; Greenslade 1992 ; Ayal et al. 2007 ). Arthropods fill these important functional roles in deserts because they are less constrained by low water availability and extreme thermal environments than other animals ( Whitford 2000 ; Andersen et al. 2004 ). Arthropod biomass and species diversity are much greater than those of all other desert animals combined ( Polis 1991 ). The aim of this work was to give a preliminary description of the composition and structure of the arthropod community of Península Valdés, using species abundance models, diversity analysis, and a trophic guild approach, based on a planned and intensive sampling effort. The purpose is to contribute to the currently limited knowledge of the ground-dwelling arthropod fauna of Patagonia ( Cuezzo 1998 ; Flores 1998 ; Ceballos and Rosso de Ferradás 2008 ; Crespo and del Valverde 2008 ; Ocampo and Ruiz Manzanos 2008 ).
Materials and Methods Ground-dwelling arthropods were sampled using pitfall traps during the summers of 2005, 2006 and 2007. A total of 648 traps, 12 cm in diameter at the opening and 12 cm deep, were placed (216 traps/year). According to previous optimization studies of pitfall sampling in the area (Cheli, unpublished observations), each trap was filled with 300 ml of a 30% solution of ethylene glycol used as a preservative, and each trap was left open on-site for two weeks in the middle of February. Traps were located at least 20 m apart from each other, covering the main environmental units of Península Valdés ( Figure 1 ). The two main vegetation units of Península Valdés are: (1) shrub steppe, with 67% of total vegetal cover, dominated by Chuquiraga avellanedae Lorentz (Asterales: Asteraceae), Condalia microphylla Cav. (Rosales: Rhamnaceae), Paronychia chilensis DC (Caryophyllales: Caryophyllaceae), Hoffmanseggia trifoliata Cav. (Fabales: Fabaceae), Nassella tenuis (Phil.) Barkworth (Poales: Poaceae), Achnatherum speciosa (Trin. & Rupr.) Barkworth (Poaceae), and Poa ligularis Nees & Steud. (Poaceae); and (2) shrub-grass steppe, with 75% of total vegetal cover, dominated by C. avellanedae , Hyalis argentea D. Don ex Hook & Arn (Asteraceae), H. trifoliata , P. chilensis , N. tenuis , Sporobolus rigens (Trin.) E. Desv. (Poaceae), Piptochaetium napostaense (Speg.) Hack. (Poaceae), and Plantago patagonica Jacq. (Lamiales: Plantaginaceae) ( Bertiller et al. 1981 ). All specimens were identified to the order and family levels. Additionally, in order to obtain a good estimation of community structure at the species level, three representative groups with different abundances were chosen: Formicidae (Hymenoptera) (the most abundant taxon), Coleoptera (a medium to high abundance taxon), and Heteroptera (Hemiptera) (a low abundance taxon).
In those cases where it was not possible to determine individuals to the species level, the individuals were designated as morphospecies for further analysis. Voucher specimens were deposited in the entomological collection of Centro Nacional Patagónico (CENPAT-CONICET), Museo de La Plata and IADIZA (CRICYT-CONICET). Araneae were only analyzed to the order level due to the large numbers of juvenile specimens and of individuals whose small size impeded proper determination. The same level of analysis was used for Psocoptera because of the lack of accurate literature and keys. Finally, flying Hymenoptera, Lepidoptera, and the suborder Auchenorrhyncha (Hemiptera) were excluded from analysis because the sampling protocol used for this study was not suited for these groups. Statistical analysis Abundance analysis: Abundance distribution models were used to describe the structure of the community. To choose the model that best described the community, a Bayesian selection was performed over four models. These models increase in evenness as follows: (a) Dominance pre-emption model, (b) Logarithmic Series, (c) Logarithmic Normal Distribution, and (d) MacArthur's Broken Stick model ( Tokeshi 1990 , 1993 ; Magurran 2004 ). The decision criterion for choosing a model was the lowest value of the Akaike Information Criterion (AIC) ( Gelman et al. 2003 ). The estimation of parameters was calculated by means of Markov Chain Monte Carlo ( Gelman et al. 2003 ) using the PyMC library for Bayesian estimation in the Python programming language ( Fonnesbeck 2009 ). Diversity analysis: Diversity was estimated through the Shannon-Wiener index, the Shannon evenness measure, and the richness of families and species ( Moreno 2001 ; Magurran 2004 ). The Shannon-Wiener diversity index was calculated using the natural log, and differences between groups were tested by the Hutcheson method (a modification of the t-test, see Magurran 1988 ) using Bio-DAP software.
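As an illustration of the diversity calculations described above, the sketch below computes the Shannon-Wiener index (natural log), Shannon evenness, and the Hutcheson t statistic following the formulas in Magurran (1988). The abundance vectors and function names are hypothetical, not the study's data or code.

```python
import math

def shannon(counts):
    """Shannon-Wiener index H' (natural log) and evenness J' = H'/ln(S)."""
    n = sum(counts)
    ps = [c / n for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in ps)
    return h, h / math.log(len(ps))

def hutcheson_t(counts1, counts2):
    """Hutcheson's modified t statistic for comparing two Shannon indices."""
    def h_and_var(counts):
        n = sum(counts)
        ps = [c / n for c in counts if c > 0]
        h = -sum(p * math.log(p) for p in ps)
        # Var(H') = [sum p*ln(p)^2 - H'^2]/N + (S-1)/(2N^2)  (Magurran 1988)
        var = (sum(p * math.log(p) ** 2 for p in ps) - h ** 2) / n \
              + (len(ps) - 1) / (2 * n ** 2)
        return h, var
    h1, v1 = h_and_var(counts1)
    h2, v2 = h_and_var(counts2)
    t = (h1 - h2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / sum(counts1) + v2 ** 2 / sum(counts2))
    return t, df

# Hypothetical family abundances: first vector includes an ant-like dominant
with_ants = [23000, 1500, 1200, 600, 400, 250, 100]
without_ants = [1500, 1200, 600, 400, 250, 100]
print(shannon(with_ants))
print(shannon(without_ants))
print(hutcheson_t(with_ants, without_ants))
```

With the dominant count removed, both the index and the evenness rise, mirroring the pattern the Results report when Formicidae are excluded.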
Guild analysis: To characterize the trophic structure of the arthropod community, species were classified into feeding guilds as herbivores, predators, and scavengers (following Borror et al. 1989 ; Morrone and Coscarón 1998 ; Claps et al. 2008 ). Differences in abundance and richness among feeding guilds were analyzed using the χ² test. All α-values for multiple tests were adjusted by Bonferroni's correction (α' = α/3 = 0.0167) ( Zar 1999 ).
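The guild comparisons above (one overall χ² test across three guilds, then pairwise tests at the Bonferroni-corrected α' = 0.0167) can be sketched in standard-library Python. The counts below are hypothetical; the closed-form tail probabilities used are exact for 1 and 2 degrees of freedom, the only cases these tests need.

```python
import math

def chisq_gof(obs):
    """Pearson goodness-of-fit test against equal expected counts.
    Exact chi-square tail probabilities: erfc(sqrt(x/2)) for df=1, exp(-x/2) for df=2."""
    exp = sum(obs) / len(obs)
    stat = sum((o - exp) ** 2 / exp for o in obs)
    df = len(obs) - 1
    if df == 1:
        p = math.erfc(math.sqrt(stat / 2))
    elif df == 2:
        p = math.exp(-stat / 2)
    else:
        raise ValueError("only df = 1 or 2 handled in this sketch")
    return stat, p

# Hypothetical guild abundances (illustrative, not the paper's counts)
guilds = {"predators": 1400, "scavengers": 900, "herbivores": 500}

stat, p = chisq_gof(list(guilds.values()))  # overall 3-guild test, df = 2
alpha = 0.05 / 3                            # Bonferroni: alpha' = 0.0167
print(f"overall: X2 = {stat:.2f}, p = {p:.3g}")

for a, b in [("predators", "herbivores"),
             ("predators", "scavengers"),
             ("scavengers", "herbivores")]:
    stat, p = chisq_gof([guilds[a], guilds[b]])  # pairwise test, df = 1
    print(f"{a} vs {b}: X2 = {stat:.2f}, "
          f"{'significant' if p < alpha else 'ns'} at alpha' = {alpha:.4f}")
```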
Results A total of 28,111 arthropods belonging to 18 orders, 52 families and 160 species/morphospecies were collected. At the order level, Hymenoptera (Formicidae and Mutillidae) represented 83.2% of the total catch, so the relative abundances of the other orders were very low. Among the Hymenoptera, 99.3% were ants (Formicidae). As a consequence of their colonial behavior, ants fall into the traps in large numbers; therefore, the percentages of capture were calculated excluding Formicidae to better describe the dominance relationships among the captured groups. This revealed a shared sub-dominance between Araneae and Coleoptera, followed in magnitude by Orthoptera, Collembola, and Solifugae ( Table 1 , Figure 2 ). At the family level, the analysis showed a sub-dominance of six families (Sminthuridae, Tenebrionidae, Acrididae, Phloeothripidae, Carabidae, and Mummuciidae), which together represented more than 60% of the total catch. A complete description of the community at the order and family levels is given in Table 1 . Among the Formicidae caught, 75.1% belonged to the subfamily Myrmicinae, with Pheidole bergi Mayr and Solenopsis patagonica Emery being the most abundant species, representing more than 50% of the total captures ( Figure 3 ). A complete description of the ant assemblage is given in Table 2 . The most abundant families of beetles were Tenebrionidae and Carabidae, representing more than 75% of the total captures of this group, while the most numerous species were Blapstinus punctulatus Solier, Trirammatus (Plagioplatys) vagans (Dejean) and Metius malachiticus Dejean ( Figure 4 , Table 3 ). With respect to the true bug assemblage, the most numerous families were Oxycarenidae and Blissidae, with more than 54% of the total captures of this group. The most abundant species was Anomaloptera patagonica Dellapé & Cheli ( Figure 5 ); also found was Valdesiana curiosa Carpintero, Dellapé & Cheli (Miridae).
Both taxa were very recently described as new based on specimens collected in this study. A complete description of the true bug community can be found in Table 4 . Abundance analysis: The abundance distribution model that best described the abundance data, both at the family and species levels, was the logarithmic series model (AIC fam: 202.231; AIC sp: 134.32). This model also best described the species abundances of ants (AIC: 138.551) and beetles (AIC: 134.318). The true bug species were described about equally well by the log series (AIC: 41.318) and the log normal distribution (AIC: 39.72) ( Table 5 ). In addition, excluding ants from the analysis increased the capacity of the logarithmic series model to describe the species abundance distribution of the community (AIC excluding ants: 513.668; AIC including ants: 652.527). Diversity analysis: There was a significant increase of diversity (Shannon-Wiener index) at both the family and species levels when ants were excluded from the analysis (Hutcheson test: for the family level, t' = 101.494, p < 0.0001; for the species level, t' = 39.928, p < 0.0001), as well as an increase in evenness at both taxonomic levels. At the species level, beetles were more diverse than ants (Hutcheson test; t' = 11.995, p < 0.0001). True bugs were as diverse as beetles (Hutcheson test, t' = 2.249, p = 0.026) and ants (Hutcheson test, t' = 1.645, p = 0.103). The Shannon species evenness measure was high and similar among the three groups of species ( Table 6 ). Guild analysis: There was a significant difference among the abundances of trophic guilds (χ² = 459.75; df = 2; p < 0.001). The abundance of predators was greater than that of herbivores (χ² = 458.34; df = 1; p < 0.001) and scavengers (χ² = 97.81; df = 1; p < 0.001), while the abundance of scavengers was greater than that of herbivores (χ² = 139.64; df = 1; p < 0.001).
Family richness did not differ significantly among trophic guilds (χ² = 5.81; df = 2; p = 0.0548) ( Figure 6 ).
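The AIC values reported above come from Bayesian MCMC fits of the candidate models. As a simplified sketch of how an AIC for Fisher's logarithmic series can be obtained, the following fits the model's single parameter x by a crude maximum-likelihood grid search; the abundance vector and helper names are illustrative, not the authors' data or code.

```python
import math

def logseries_negll(x, counts):
    """Negative log-likelihood of Fisher's log-series, P(n) = -x**n / (n * ln(1 - x))."""
    c = -math.log(1 - x)
    return -sum(n * math.log(x) - math.log(n) - math.log(c) for n in counts)

def fit_logseries_aic(counts, grid=10000):
    """Crude grid-search MLE for x in (0, 1) and the resulting AIC (k = 1 parameter)."""
    best_x, best_nll = None, float("inf")
    for i in range(1, grid):
        x = i / grid
        nll = logseries_negll(x, counts)
        if nll < best_nll:
            best_x, best_nll = x, nll
    return best_x, 2 * 1 + 2 * best_nll  # AIC = 2k - 2*lnL

# Hypothetical species-abundance vector: a few dominants, many singletons
abund = [120, 60, 30, 14, 8, 5, 3, 2, 2, 1, 1, 1, 1, 1]
x_hat, aic = fit_logseries_aic(abund)
print(f"x = {x_hat:.3f}, AIC = {aic:.1f}")
```

Competing models (e.g., a discretized log normal) would be fit the same way, and the lowest AIC chosen, as in the comparisons reported above.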
Discussion This is the first community study based on a planned and intensive sampling effort that describes the composition and structure of the ground-dwelling arthropod community of Península Valdés. The most important orders in terms of abundance were Hymenoptera, Coleoptera, and Araneae. The same community pattern was found in other arid areas of Argentina ( Gardner et al. 1995 ; Molina et al. 1999 ; Lagos 2004 ), as well as in other regions of the world ( Bromham et al. 1999 ; Seymour and Dean 1999 ). The three aforementioned orders are the most diverse and abundant in the world, and several authors consider them “hyper-diverse” taxa ( Gibson et al. 1992 ; Martín-Piera and Lobo 2000 ; Lagos 2004 ). The community was dominated by a few abundant taxa at both the family and species levels. There were also some groups with intermediate abundances and a large proportion of “rare” taxa for which very few individuals were caught. Accordingly, the distributions of both species and family abundances were best described by the logarithmic series model. This model depicts a system where some species could have arrived at an unsaturated habitat at randomly spaced intervals of time to occupy the remaining fractions of the niche hyperspace, thus having intermediate levels of niche preference. Similarly, this model describes systems in which one or a few factors dominate the ecological relationships of the community and in which the intensity of migration between communities is important ( Magurran 2004 ). It is worth noting that, at the species level, taxa with remarkably different abundances, such as ants, beetles, and true bugs, were equally well described by the log series.
Still, the case of true bugs, which were adequately described both by the log series and the log normal distribution, represents a special case of the log normal distribution called “canonical.” Such a pattern is a consequence of random niche separation every time a new species is incorporated into the assemblage ( Magurran 2004 ). In this sense, these findings increase knowledge of niche segregation in general and of the invertebrate community structure of northeast Patagonia. Ants are a central component of arthropod abundance in the study area, representing more than 80% of total captures. The contribution of P. bergi and S. patagonica , both well-known recruiting species, may explain such outstanding numbers. Still, excluding ants from analyses of the assemblages of northeast Patagonia leads to similar findings in terms of abundance patterns. Such consistency likely reflects the robustness of the model and its explanatory factors for Patagonian arthropods. In arid Patagonia, as in most deserts, the factors dominating the insect community structure are probably related to plants. Vegetation cover has been shown to be correlated with diversity, dominance, and species abundance of ground-dwelling arthropods in other deserts ( Crawford 1988 ; Seymour and Dean 1999 ). Vegetation structure usually provides the habitat template for the assembly of ground-dwelling arthropods in multitrophic communities by offering shelter, food resources, oviposition micro-sites, or refuge against predators ( Dennis et al. 1998 ; Seymour and Dean 1999 ; Mazía et al. 2006 ). In turn, in northwest Patagonia, where there is a habitat similar to the one examined in this study, plant spatial structure has been shown to influence the activity of ground-dwelling ants and beetles ( Farji-Brener et al. 2002 ; Folgarait and Sala 2002 ; Mazía et al. 2006 ). In addition, it should be considered that in Península Valdés sheep grazing has occurred since the late 19th century.
Sheep grazing appears to have modified the vegetation and accelerated soil degradation processes ( Beeskow et al. 1995 ). These changes generally involve changes in vegetation structure, diminishing cover and exposing bare soil to erosive effects, which eventually leads to the fragmentation of the preexisting patches into smaller remnant patches ( Bisigato and Bertiller 1997 ). Grazing, through its impact on vegetation, could thus be influencing the observed arthropod communities. From a trophic level approach, studies comparing protected areas versus grazed habitats in other arid areas of Argentina have found that arthropod communities were dominated by scavengers in protected sites and by predators in disturbed areas ( Gardner 1995 ; Molina et al. 1999 ; Lagos 2004 ). In Península Valdés, the ground-dwelling arthropod community was dominated by predators, which suggests that sheep grazing could be one of the main variables shaping the arthropod assemblage structure. Predation probably acts as an important factor driving the distribution and abundances of surface-dwelling arthropods in this habitat (i.e., a top-down effect) and as such could be used as a key element in understanding the above-ground desert community structure. This study found that the arthropod community of northern Patagonia had diversity values similar to those recorded in other arid areas of Argentina, such as the Chaco ( Gardner et al. 1995 ; Molina et al. 1999 ) and the central Monte Desert ( Lagos 2004 ). However, lower arthropod family richness and coleopteran species richness were found, as was lower evenness at the family and species levels. The reduced richness could be explained by the lower temperatures of Patagonia, which could constrain the number of species living there. In turn, a less even assemblage such as that found in this study suggests that the dominance of some species over others is greater than it is in other arid zones in northern Argentina.
Species autoecological features coupled with a restrictive climate could explain why the community is dominated by a few species. For example, the most abundant beetle, B. punctulatus (Tenebrionidae), has a small body size that could allow it to hide in soil fissures during extreme environmental periods. These features can also be observed in the true bug assemblage. For instance, A. patagonica is also small and has hardened, elytra-like wings that enable it to tolerate extreme environmental conditions. The adequate description by the same abundance distribution model at both the family and the species level suggests that the former can be a reasonable predictor of the underlying abundance model in this community. This reduces costs in terms of time dedicated to taxonomic determination and is in accordance with previous work (e.g. Cagnolo et al. 2002 ). Using a taxonomic category higher than the species level in community analysis has several advantages (see Gaston 2000 ), but it can be biased if the community has a fauna rich in endemisms ( Samways et al. 1996 ). The results obtained in this study could be extended to all of arid Patagonia, due to similar environmental conditions across the area. This work not only improves the knowledge of the composition, taxonomy, and trophic structure of ground-dwelling arthropod communities in arid Patagonian habitats, but also increases the taxonomic knowledge of Hemiptera through the discovery of new genera and two new species recently described from material recovered in this survey (see Dellapé and Cheli 2007 ; Carpintero et al. 2008 ). Additionally, it is necessary to place the results of this study within a conservation context because the richness and composition of a community of ground-dwelling arthropods can be taken as a reflection of the biotic and structural diversity of whole terrestrial ecosystems ( Iannacone and Alvariño 2006 ).
Because of their abundance, diverse behaviors, and ecological interactions, the development of new lines of research to elucidate the variables controlling the main ecological aspects of ground-dwelling arthropods will contribute significantly to the knowledge of the functioning of arid Patagonian ecosystems. It may also help to create and assess management and conservation tools for this arid terrestrial ecosystem.
Associate Editor: Megha Parajulee was editor of this paper. This is the first study based on a planned and intensive sampling effort that describes the community composition and structure of the ground-dwelling arthropod assemblage of Península Valdés (Patagonia). It was carried out using pitfall traps, opened for two weeks during the summers of 2005, 2006 and 2007. A total of 28,111 individuals were caught. Ants (Hymenoptera: Formicidae) dominated this community, followed by beetles (Coleoptera) and spiders (Araneae). The most abundant species were Pheidole bergi Mayr (Hymenoptera: Formicidae) and Blapstinus punctulatus Solier (Coleoptera: Tenebrionidae). Two new species were recently described based on specimens collected during this study: Valdesiana curiosa Carpintero, Dellapé & Cheli (Hemiptera: Miridae) and Anomaloptera patagonica Dellapé & Cheli (Hemiptera: Oxycarenidae). The order Coleoptera was the most diverse taxon. The distribution of abundance data was best described by the logarithmic series model at both the family and species levels, suggesting that ecological relationships in this community could be controlled by a few factors. From a trophic perspective, the community was dominated by predators. This suggests that predation acts as an important factor driving the distribution and abundances of surface-dwelling arthropods in this habitat and as such serves as a key element in understanding desert above-ground community structure. These findings may also be useful for management and conservation purposes in arid Patagonia.
Acknowledgments The authors are grateful to the professional taxonomists who generously dedicated their time to species determination: G. Flores, S. Roig-Juñent, S. Claver, P. Dellapé, D. Carpintero, F. Ocampo, A. Lanteri, N. Cabrera, and M. Kun. We would also like to thank F. Grandi, F. Brusa, G. Pazos, V. Rodriguez, D. Galvan, L. Venerus, A. Bisigato and U. Pardiñas for their invaluable collaboration. We deeply thank Centro Nacional Patagónico and its staff for providing facilities and logistic support, and also Mrs. Amos Chess, Vicente Hueche, Jorge Mendioroz, Victor Huentelaf and Pedro “Perico” Ibarra, who allowed access to the study areas. Finally, thanks to L. Cella and R. Loizaga de Castro for language assistance, and to two anonymous reviewers and Dr. Henry Hagedorn for their valuable comments, which improved the manuscript. G. Cheli was supported by a PhD fellowship awarded by CONICET. This work was declared of interest by the Administration of the Natural Protected Area Península Valdés.
CC BY
no
2022-01-12 16:13:46
J Insect Sci. 2010 May 17; 10:50
oa_package/37/b2/PMC3014803.tar.gz
PMC3014804
20672977
Introduction The hemlock woolly adelgid, Adelges tsugae Annand (Hemiptera: Adelgidae), is native to China and is a pest of eastern hemlock, Tsuga canadensis (L.) Carrière (Pinales: Pinaceae), and Carolina hemlock, Tsuga caroliniana Engelm., in the eastern USA ( Knauer et al. 2002 ) that causes tree mortality ( Orwig and Foster 1998 ). A. tsugae has a hemimetabolous life cycle, spending most of its life on hemlock. When feeding, it inserts its mouthparts directly into the tree, and the insect remains in this position throughout its life. Many studies regarding the biology, physiology, and ecology of this insect have been published ( McClure 1987 , 1990 , 1991 ; Young et al. 1995 ; Parker et al. 1997 , 1999 ; Gouli et al. 2000 ; Skinner et al. 2003 ), and various management tactics have been investigated ( McClure 1987 , 1992 ; Cheah and McClure 1996 ; Montgomery 1996 ; Sasaji and McClure 1997 ; Wallace and Hain 2000 ; Blumenthal 2002 ; Casagrande et al. 2002 ). These tactics, however, are limited by the cost of treating large areas and by the prevalence of hemlock in watershed regions where use of broad spectrum chemical insecticides is generally forbidden. Currently, no effective management strategy has been found that is amenable to large-scale application in these watershed regions. The outcome of releases of the predacious lady beetle, Sasajiscymnus tsugae , requires 4–7 years for assessment, and results vary based on the initial quality of the test site ( Cheah 2004 ). As tree mortality can occur in as little as three years ( McClure 1987 ) and performance of S. tsugae is superior when released in healthier hemlock forests ( Cheah 2004 ), management tactics that provide immediate protection of hemlocks are needed. These tactics must be relatively inexpensive and easy to adopt on a large scale. Insect pathogens represent an environmentally sound approach to pest management that meets these requirements, but due to the feeding behavior of A.
tsugae , only those that cause mortality via direct contact, such as fungi, are suitable. Naturally-occurring fungal pathogens of A. tsugae have been identified and recovered, several of which were found to induce 64–82% mortality among the adult sistens when applied at a rate of 1 × 10⁸ conidia per ml ( Gouli et al. 1997 ). Moreover, research has shown that several fungal isolates are pathogenic to A. tsugae , but do not cause significant mortality to S. tsugae ( Parker et al. 2004 ). The fungi previously obtained by Gouli et al. ( 1997 ) were acquired from a single location in the eastern USA. The objectives of this study were to expand on that work by isolating additional fungal entomopathogens from A. tsugae in the eastern USA and China and to characterize them for suitability as mycoinsecticides.
Materials and Methods Sampling of A. tsugae Collections of A. tsugae were conducted in the eastern USA during the spring and fall of 1997 ( Table 1 ). Sample sites consisted of multi-aged hemlock trees with new growth and moderate infestations of A. tsugae. Within each site, 10 infested trees ranging in height from 3 to 13 m were selected, and, from each tree, ten 10-cm branchlets were collected and singly placed in plastic bags for a total of 1000 linear cm of infested hemlock branchlets per site. Samples were held in the laboratory at 15–25° C and processed within 72 h of collection. In China, A. tsugae is not a serious pest of hemlock and was difficult to find. Therefore, forest stands with hemlock trees were exhaustively searched in a radial pattern, sampling every tree with signs of A. tsugae. All available samples on a given tree were collected, up to 25 branchlets per tree. Specimen processing, fungal identification and storage Branchlets containing A. tsugae were examined at 40× magnification, and individuals with signs of fungal infection (e.g., off-color, misshapen, bloated, or mummified) were removed from the twig with fine-point forceps and transferred to sterile paper towels moistened with sterile distilled water containing 30 IU of penicillin G and 70 IU of streptomycin sulfate. Forceps were disinfected between cadavers with 75% ethanol to prevent cross contamination. Cadavers were held at 22 ± 2° C for 1–2 wks until fungal outgrowth was observed. Fungal outgrowth was collected using a sterile probe and transferred to potato dextrose agar medium containing 150 IU/ml penicillin G and 350 IU/ml streptomycin sulfate. Cultures were incubated at 22 ± 2° C for 7 d, identified at 400× magnification using the methodology of Gouli et al. ( 2005 ), and transferred to -80° C storage in the Entomology Research Laboratory fungal collection at the University of Vermont, Burlington, VT.
All isolates were also submitted to the USDA Agricultural Research Service collection of Entomopathogenic Fungi (ARSEF), Ithaca, NY for verification of identification and for preservation. Test fungi and preparation for bioassays Fungi used in this study were obtained as multispore isolates from the A. tsugae cadavers collected in the eastern USA and China. In addition, isolates previously obtained from A. tsugae in the eastern USA were included ( Gouli et al. 1997 ). As reference standards, the study included two additional isolates: GHA, a Beauveria bassiana (Balsamo) (Hyphomycetes) strain (BotaniGard) from Laverlam International Corporation ( www.laverlamintl.com ), and ARSEF 1080, a Metarhizium anisopliae (Metschnikoff) (Hypocreales: Clavicipitaceae) strain originally isolated from Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae) in Florida. For the remainder of this article, all isolates, with the exception of GHA, are referred to by their ARSEF accession number. Stock plates of fungi were prepared as spread plates on 1⁄4 strength Sabouraud's dextrose agar supplemented with 1% yeast extract (SDAY/4) and maintained at 4° C until needed. Fungal material used for testing was obtained by subculturing from the stock plates onto SDAY/4. Cultures were prepared as spread plates and incubated for 10 d at 22° C. Conidial suspensions were prepared by transferring the fungal colonies from two Petri dishes into 20 ml of sterile distilled water containing 0.5 g of glass balls followed by vigorous shaking. Suspensions were filtered through eight layers of cheesecloth and calibrated to a stock concentration of 1 × 10⁸ conidia per ml using a Neubauer haemocytometer. Additional test concentrations were prepared through serial dilutions to 1 × 10⁷, 5 × 10⁶, 1 × 10⁶, 1 × 10⁵ and 1 × 10⁴ conidia per ml.
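The serial dilutions described above follow the usual C₁V₁ = C₂V₂ relation. A small planner for the transfer volumes might look like this; the 10 ml working volume per dilution step is an assumption for illustration, not a value stated in the protocol.

```python
# Dilution planner: volume of the source suspension to transfer into a final
# volume so that C1*V1 = C2*V2. Concentrations are in conidia per ml.
def transfer_volume_ml(c_source, c_target, final_volume_ml):
    if c_target > c_source:
        raise ValueError("cannot dilute upward")
    return final_volume_ml * c_target / c_source

stock = 1e8                          # haemocytometer-calibrated stock, conidia/ml
targets = [1e7, 5e6, 1e6, 1e5, 1e4]  # test concentrations from the protocol
final_ml = 10.0                      # assumed working volume per dilution (illustrative)

source = stock
for target in targets:
    v = transfer_volume_ml(source, target, final_ml)
    print(f"{target:.0e}/ml: transfer {v:.3f} ml of {source:.0e}/ml "
          f"into {final_ml - v:.3f} ml sterile water")
    source = target                  # serial: next step starts from this tube
```

For example, the first step transfers 1 ml of the 1 × 10⁸ stock into 9 ml of water to reach 1 × 10⁷ conidia per ml.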
For all assays, the viability of the testing material was assessed post-application by spraying a 9 cm diameter Petri dish containing 20 ml SDAY/4 for 2–3 s with the suspension containing 5 × 10⁶ conidia per ml. These plates were incubated at 25° C for 20 h (16 h for GHA), after which three drops of lactophenol cotton blue stain (VWR International, www.vwr.com ) were applied to the medium surface to kill the fungi. Glass coverslips were placed over the medium, and conidia were inspected for germination at each of the three spots at 400× magnification. A conidium was considered germinated if the germ tube was longer than the width of the conidium ( Hywel-Jones and Gillespie 1990 ). All fungal material used in this study had germination rates > 95%. Myzus persicae bioassays A stock laboratory colony of apterous Myzus persicae (Sulzer) (Homoptera: Aphididae) was maintained on mustard, Brassica juncea (L.) Czernajev and Cosson (Brassicales: Brassicaceae), cv. Florida broad leaf. Five adult females were placed on a freshly excised fourth or fifth true leaf for 24 h, during which time they produced first instars. The number of first instars was adjusted to 12 per leaf, and, throughout the experiment, the leaf petioles were held in a covered, 30 ml plastic creamer cup filled with a 60 ppm solution of 20-10-20 all-purpose fertilizer (Peter's Professional Fertilizer, Scotts-Sierra Horticultural Products). Cups were then incubated in a plastic tray fitted with a thrips-proof mesh lid, to allow for air exchange. Aphids were incubated at 22 ± 2° C for 7 d, at which point all insects were 1-d-old adult females. Five leaves for each fungal replicate were sprayed on both sides with 0.6 ml of a 5 × 10⁶ conidia per ml suspension using an airbrush at 12 psi (Badger Airbrush Co., www.badgerairbrush.com ). Leaves were individually sprayed to the point of run-off. These assays were replicated a single time and the results were used to roughly screen which isolates were pathogenic.
Equal numbers of controls were treated with sterile distilled water. All treatments were held in an incubator at 22 ± 2° C and 16:8 L:D for 6 d. Mortality was assessed daily, and newly born juveniles were removed to maintain a constant number of aphids on each leaf. Adelges tsugae bioassays Populations of A. tsugae were field-collected from T. canadensis in Lovingston, VA. Infested branchlets were collected from 10 codominant trees, 6–8 m tall, that had never been treated with insecticides. These trees had new growth, which supports populations of A. tsugae with high vigor ( McClure 1991 ). Branchlets with ≥ 2.5 cm of new growth and at least 24 first instar aestivating adelgids were selected as experimental units. This stage of A. tsugae was chosen for testing because the insect remains in this form for several months without molting and is not covered with the waxy exudate, making it an ideal target for fungi. Five branchlets were treated for each replicate at each concentration. Branchlets were individually sprayed on both sides with 0.6 ml of test conidial suspension using an airbrush at 12 psi. Spraying was conducted so that an even mist of suspension was applied, but so that the suspension did not run off the branchlet. Controls were likewise sprayed with sterile distilled water. Treated branchlets were held singly in Pyrex test tubes (20 × 250 mm) containing 20 g of white sand that had been previously heated for 5 h at 85° C. Four ml of sterile distilled water were added to the sand in each tube to maintain branchlet viability ( Gouli et al. 1997 ; Parker et al. 1999 ). The tops of the tubes were covered with one layer of muslin (140 thread count) held in place with an elastic band. Tubes were held at 22 ± 2° C, and mortality was assessed 6 d post-application. A. tsugae were considered dead if they did not maintain their body turgor after gentle probing with a blunt needle or if they were solid with fungal mycelium ( Parker et al. 1999 ).
Experiments were conducted as completely randomized designs and replicated three times. The first 12 dead A. tsugae observed from each treatment were transferred to Petri dishes lined with sterile paper towels moistened with sterile distilled water containing 30 IU of penicillin G and 70 IU of streptomycin sulfate. Dishes were incubated at 22 ± 2° C and examined for fungal outgrowth after 5 d.

Characterization of fungi for growth and sporulation

The 12 isolates tested in the A. tsugae bioassays were assessed for rate of growth, conidial production, and germination at 15, 20, 25, and 30° C. For each, 10 μl of a suspension of 1 × 10⁶ conidia per ml was inoculated onto a 6 mm diameter sterile filter paper placed in the center of a standard 9 cm diameter Petri dish containing 20 ml of SDAY/4. Dishes were inverted and incubated in the dark for 20 d. Two orthogonal measurements of the colony diameter were recorded on days 5, 10, 15, and 20, and averaged for each time point. At 20 d, four sample cores were taken from these colonies using a 5 mm diameter cork borer. These were pooled into 10 ml of 0.1% Tween 80 containing 0.6% Greenshield (Whitmire Micro-Gen Research Labs, www.wmmg.com/home.asp ) and sonicated for 10 min to separate conidia. Conidia concentrations were estimated using two counts on a Neubauer improved haemocytometer at 400× magnification and adjusted to conidia per cm² of colony. For the assessment of conidial germination rate, 50 μl of a suspension of 1 × 10⁶ conidia per ml for each isolate was streaked onto 9 cm diameter Petri dishes containing one-tenth strength Sabouraud's dextrose agar supplemented with 0.10% yeast extract (SDAY/10). All germination was conducted in the dark and was assessed at 10, 13, 15, and 17 h after streaking. Germination was assessed as previously described.

Experimental designs and statistics

For all bioassays, data were corrected for control mortality using Abbott's correction factor ( Abbott 1925 ).
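Two of the calculations above reduce to short formulas: conidial yield per cm² of colony (four 5 mm cores pooled into 10 ml of suspension) and Abbott's (1925) control-mortality correction. A minimal sketch with our own function names and an illustrative count:

```python
import math

# Sketch of two calculations implied by the methods; names are ours.

def conidia_per_cm2(count_per_ml, suspension_ml=10.0, n_cores=4,
                    core_diameter_cm=0.5):
    """Conidial yield per cm^2 of colony from pooled cork-borer cores."""
    core_area = math.pi * (core_diameter_cm / 2) ** 2  # ~0.196 cm^2 per core
    return count_per_ml * suspension_ml / (n_cores * core_area)

def abbott_corrected(treated_pct, control_pct):
    """Abbott's correction for control mortality, in percent."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# e.g. 2e7 conidia/ml counted in the pooled 10 ml suspension:
print(f"{conidia_per_cm2(2e7):.3g} conidia per cm^2")  # ~2.55e+08
print(abbott_corrected(60.0, 10.0))  # 55.55...
```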
A Kolmogorov-Smirnov test for non-normality was applied using PROC FREQ in SAS ( SAS Institute 1996 ) to test the distribution of mortality within fungal genera. All isolates were independently assessed three times using five experimental units per replicate. The M. persicae assays were conducted as an incomplete completely randomized design and analyzed within fungal genus using PROC GLM in SAS, followed by a Bonferroni means separation procedure. The A. tsugae assays were conducted as a completely randomized block design, and the lethal concentrations were determined using SAS PROC LOGIT with the logit switch in the model statement ( SAS Institute 1996 ). For all fungal characterization studies, isolates were independently assessed three times at each temperature using four experimental units per replicate. The rates of growth, conidial production, and germination were analyzed within temperature as fixed-effect ANOVA models using PROC GLM, followed by a Student-Newman-Keuls means separation in SAS ( SAS Institute 1996 ).
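Assuming the fitted logit model has the form logit(p) = b₀ + b₁·log₁₀(dose) — the exact SAS parameterization is not stated in the text — the lethal concentrations follow directly from the coefficients. A sketch with illustrative coefficient values (b₁ = 1.0 is invented; b₀ = −7.09 is chosen so the example reproduces the log LC₅₀ of 7.09 reported later for isolate 5170):

```python
import math

def lc_from_logit(b0, b1, p=0.5):
    """Dose at which logit(p) = b0 + b1*log10(dose) predicts mortality p.

    p = 0.5 gives the LC50 (logit(0.5) = 0, so log10(LC50) = -b0/b1);
    p = 0.25 gives the LC25, and so on.
    """
    log10_dose = (math.log(p / (1 - p)) - b0) / b1
    return 10 ** log10_dose

print(f"{lc_from_logit(-7.09, 1.0):.3g}")  # ~1.23e+07 conidia per ml
```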
Results

Entomopathogenic isolates recovered

Sixty-two isolates of entomopathogenic fungi were recovered from cadavers in the eastern USA, and 18 were recovered from southern China ( Table 1 ). The fungal species recovered from the USA and China were similar, and the most prevalent fungi collected were Lecanicillium lecanii and Isaria farinosa. Both have known entomopathogenic associations with homopteran species ( Milner and Lutton 1986 ; Kish et al. 1994 ). These fungi are commonly dispersed by wind and rain-splash, which may explain why they were most frequently observed. The remaining isolates were identified as Beauveria bassiana or Fusarium spp. B. bassiana is a cosmopolitan fungus with a broad host range and is commonly found in the soil ( Goettel and Inglis 1997 ). Fusarium is also a soil fungus, and many species are phytopathogenic, occasionally occurring as weak entomopathogens ( Humber 1997 ). In some cases, the observed fungi were not culturable. For example, in the spring 2007 sampling, 8% of A. tsugae were associated with Beauveria , but only nine isolates were culturable. The fungal outgrowth on these cadavers was often yellow, a sign of colony aging based on the production of the secondary metabolite tenellin ( Khachatourians and Qazi 2008 ), and senescence may explain the inability to culture these isolates.

Entomogenous isolates recovered

The fungal genera recovered from A. tsugae cadavers are presented in Table 2 . The most common were Alternaria and Cladosporium. These fungi are commonly associated with aphid-infested plants ( Agrios 1998 ), though some studies have identified entomopathogenic strains within both of these genera ( Hatzipapas et al. 2002 ; Abdel-Baky and Abdel-Salam 2003 ). In addition, these fungi quickly colonize insect cadavers, including those dead from senescence and those killed by fungal pathogens ( Hatting et al. 1999 ); thus their presence may obscure the effect of other pathogens.
In general, the proportions of fungi recovered from southern China were similar to those from the eastern USA, with the exception of Acremonium spp., which were associated with 13.8% of A. tsugae cadavers from China. Some species of Acremonium have been identified as entomopathogenic, including Acremonium larvarum and an Acremonium sp. ( Sanchez-Peña 1990 ; Steenberg and Humber 1999 ). While it was not possible to isolate and perform bioassays of all putative pathogens from this fungal group, future studies investigating their impact on A. tsugae and their interaction with the entomopathogenic fungi isolated could explain their high abundance on A. tsugae cadavers.

Myzus persicae bioassays

When the entomopathogenic isolates were screened against M. persicae , mortality ranged from 0–86% ( Figure 1 ). When grouped by fungal species, mortalities were normally distributed for B. bassiana , L. lecanii , and I. farinosa (p = 0.16, 0.25, and 0.11, respectively). These results reflect a broad range of efficacy within species. Mortalities among the Fusarium sp. isolates were not normally distributed (p = 0.02). Overall, 70% of these resulted in < 10% aphid mortality, while one isolate, 5821, demonstrated the highest efficacy (86%) ( Figure 1A ). This indicated that there was great diversity in the entomopathogenic capacity within the Fusarium isolates recovered. Similar rates of mortality were observed for all B. bassiana isolates, ranging from 10–35% ( Figure 1A ). Isolate 5817, collected in central Massachusetts, and 5818, from southern Connecticut, had the highest mortality, each resulting in 40% mortality, one-third higher than the commercial strain, GHA, which was equal in mortality to the others. Among the 19 I. farinosa , mortality ranged from 10–43% ( Figure 1B ). Isolates were statistically similar (p > 0.05) with respect to mortality. The virulence of the I.
farinosa isolates, with the exception of 5775, was statistically identical; thus 5826 and 5827 were selected for further study because they were from different geographic origins. Mortality rates of 6–54% were obtained among the L. lecanii isolates ( Figure 1C ). Statistically significant differences were found among the isolates (p < 0.0001), two of which produced > 50% mortality. Based on the M. persicae bioassays, 10 isolates obtained from A. tsugae and two reference strains were selected for further efficacy studies against A. tsugae : four L. lecanii , three B. bassiana , two I. farinosa , and one Fusarium sp., plus M. anisopliae 1080 and GHA.

Adelges tsugae bioassays

The 12 isolates tested were pathogenic to A. tsugae ( Table 3 ), with mortality varying from 32–84% at the highest concentration used. The highest mortality was observed with 1080, which killed 84% of A. tsugae at an application rate of 1 × 10⁸ conidia per ml. The GHA strain and both I. farinosa caused the lowest mortality, while the mortality rates of the remaining isolates could be separated into three groups. The highest mortality rate was obtained from 5798, an L. lecanii from Massachusetts, followed by 5170 and 5824, a B. bassiana from Massachusetts and an L. lecanii from Virginia, respectively. There were no significant differences in the LC₅₀ values for the remaining five isolates (p > 0.05). Fungal outgrowth of the test fungi was obtained, and confirmed to species, from > 80% of all cadavers examined 11 d post-application. The A. tsugae assays broadly separated the fungal isolates tested with respect to efficacy. The bioassay system used, however, was a complex one using field-collected branchlets. The types and abundance of microorganisms associated with the test branchlets could not be standardized. Therefore, A. tsugae mortality was quantified on new plant growth only.
This ensured that mortality assessment was conducted on the summer aestivating generation only, and not on individuals from previous generations that had died but remained on the tree. Among the B. bassiana , the most virulent against A. tsugae was 5170, which had a log LC₅₀ of 7.09 ( Table 3 ). Statistically, 5170 was more virulent than the three other B. bassiana tested and the third most virulent overall. 5818 and 5796 were not statistically different (p > 0.05), and the least virulent was GHA, which was significantly less virulent (p < 0.001) than the others. The most virulent L. lecanii was 5798, which was the second most virulent overall, followed by 5165, which was the fourth most virulent overall and was statistically non-resolvable from the other L. lecanii. Neither I. farinosa isolate demonstrated strong virulence. The typical field application rate was comparable to the calculated LC₂₅ for both. Extrapolation of the logit model estimated their LC₅₀ values to be roughly 10 times higher than a typical field application rate of 1 × 10⁷ conidia per ml. Despite these results, I. farinosa was the second most frequently recovered fungus from the A. tsugae cadavers in the eastern USA collections and the most frequently recovered fungus from China. The Fusarium sp. tested, 5821, was not as virulent against A. tsugae as it was against M. persicae , and it was not statistically different from the two least virulent L. lecanii or the least virulent B. bassiana recovered from A. tsugae. The LC₅₀ value for 1080 was the lowest for A. tsugae (7.9 × 10⁵ conidia per ml), 200 times lower than the typical field application rate of the industrial product BotaniGard. This isolate was also the most efficacious against M. persicae. Based on the A. tsugae efficacy trials, four isolates were superior: M. anisopliae 1080, B. bassiana 5170 and 5796, and L. lecanii 5793.
Characterization of fungi for growth and sporulation

In general, isolates had the fastest growth rates at 20 and 25° C ( Table 4 ). The rate of growth for all isolates at 25° C was consistent for the first 10 d, but decreased significantly (p < 0.0001) by day 15. This is likely the result of growth restriction of the fungus caused by the limited size of the Petri dish. For this reason, the rate of growth calculation only includes data up to day 10. Isolates of B. bassiana and M. anisopliae survived and grew at 30° C; however, I. farinosa 5826 and 5827 and L. lecanii 5165, 5793, and 5824 were unable to form colonies at this temperature and did not show further growth when plates were removed at 20 d and placed at 22° C. When growth rates were compared at 25° C, the industrial strain GHA and I. farinosa 5827 grew significantly slower (p < 0.0001) than the other isolates. The rate of growth, however, does not take into account the productivity of the isolate. For this reason, the production of conidia was also measured. B. bassiana GHA, 5818, and 5170 produced the most conidia at all temperatures ( Figure 2 ). In most cases, the production of conidia was highest at 25° C. Exceptions to this were I. farinosa 5827, which was most productive at 15° C, and Fusarium sp. 5821, which was most productive at 30° C. Excluding these isolates, spore production consistently increased from 15 to 25° C and then decreased at 30° C. In addition to growing and sporulating at field temperatures, these fungi must be able to germinate rapidly to colonize A. tsugae. The germination rates of 5165, 5824, 5826, and GHA are presented in Figure 3 ; germination rates of all test isolates at the four temperatures are shown in Figure 4 . Overall, GHA spores germinated fastest at all temperatures tested, reaching nearly complete germination within 13 h at 25 and 30° C ( Figure 3 ). Four isolates, L. lecanii 5165 and 5824 and I.
farinosa 5826 and 5827, showed a general inability to tolerate 30° C. The L. lecanii were capable of germination at 30° C, but none of the four isolates were able to form colonies at this temperature.
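The growth-rate calculation described above (mean of two orthogonal diameter measurements, restricted to days 5–10) reduces to a radial extension rate. A sketch with invented diameter values; the function name is ours:

```python
# Radial growth rate from colony-diameter data, restricted to days 5-10
# as in the text (growth slowed once colonies neared the dish edge).
# Measurement values below are made up for illustration.

def radial_growth_rate(day_a, diam_a_mm, day_b, diam_b_mm):
    """Mean radial extension rate (mm/day) between two time points.

    Diameters are the average of the two orthogonal measurements; dividing
    the diameter change by 2 converts it to radial growth.
    """
    return (diam_b_mm - diam_a_mm) / 2.0 / (day_b - day_a)

# mean diameters of 22 mm at day 5 and 47 mm at day 10:
print(radial_growth_rate(5, 22.0, 10, 47.0))  # 2.5 mm/day
```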
Discussion

A total of 79 culturable entomopathogenic fungi were recovered from 8,515 A. tsugae. These fungi, L. lecanii , I. farinosa , B. bassiana , and Fusarium spp., have a global distribution and are known pathogens of insects ( Booth 1972 ; Humber 1997 ). Gouli et al. ( 1997 ) identified high levels of Cladosporium and Alternaria among populations of A. tsugae. These fungi are sooty molds associated with homopteran honeydew ( Agrios 1988 ), and some species within these genera are known plant pathogens ( Bustan et al. 2008 ), while others have been documented as insect pathogens ( Hatzipapas et al. 2002 ; Abdel-Baky and Abdel-Salam 2003 ). Their presence is relevant for the future development of a mycoinsecticide because the candidate isolate must be aggressive enough to overcome these rapidly growing fungi. This may explain the efficacy of 1080: although it was not initially isolated from A. tsugae , it is a soil fungus exhibiting rapid growth characteristics. Likely the most successful candidate strains will be those capable of overcoming the influence of antagonistic fungi. The efficacy of the selected entomopathogenic isolates against A. tsugae varied, as did their growth, germination, and sporulation characteristics. GHA was found to have superior germination and sporulation characteristics; however, it was the least efficacious against A. tsugae. The microclimates of the new and old growth of hemlock branchlets are distinct from each other. When trees no longer have new growth, A. tsugae populations rapidly decline ( McClure 1991 ). For this reason, the A. tsugae bioassay testing was performed on new growth collected from healthy infested hemlock trees. While this allowed for a clear assessment of mortality and the resolution of relative efficacy among isolates, the assessment of the performance of these fungi on a large scale is still needed. The base-line characteristics for a fungal-based biopesticide to manage A.
tsugae require that the selected isolate have rapid germination, to infect the host quickly, and cause mortality at field temperatures. It should also possess characteristics that make it suitable for mass production, including a rapid growth rate and good sporulation. In general, for the purposes of commercial fungal spore production, complex substrates such as molasses, corn steep liquor, and various grains are used to obtain higher levels of sporulation than can be achieved on artificial medium ( Jackson et al. 1997 ; Wraight et al. 2001 ). In this study, however, an artificial medium was selected to compare the relative production capacities of the fungal isolates. Based on this comparison, a relative cost of scale is proposed, based on the surface area of agar-based medium required to produce one liter of fungal material at the LC₅₀ rate ( Table 5 ). Overall, M. anisopliae 1080, B. bassiana 5170 and 5818, and L. lecanii 5798 required the least medium to produce enough material for one liter at the LC₅₀ rate. Although the commercial production of a mycoinsecticide would not be based on an agar production system, a comparison based on these data was conducted. When the surface area required to produce enough material for one liter at the LC₅₀ rate was estimated, and conidial production was weighed against virulence relative to the most virulent isolate, 1080, four isolates emerged as the most suitable for further development as biological control agents: M. anisopliae 1080, B. bassiana 5170 and 5818, and L. lecanii 5798. The use of different formulations for these isolates may also improve their overall efficacy under field conditions ( Daoust et al. 1983 ; Pan et al. 1988 ; Lomer et al. 1993 ; McClatchie et al. 1994 ; Alves et al. 1997 ; Evans 1999 ). This study provides baseline information on the fungi associated with A.
tsugae and their ability to cause mortality in low-density populations of aestivating sistens. This generation was selected because no protocol for rearing A. tsugae has been developed, and it is the only generation that can be reliably field-collected without contaminating individuals from previous generations. Both the progrediens and sistens remain throughout their life on hemlock at the site where they initially feed as crawlers; thus multiple generations of live and dead A. tsugae can occupy the same part of the tree. This complicates the estimation of mortality because individuals killed by a test treatment may be difficult to resolve from dead A. tsugae of previous generations. The spore production characterization studies showed that some isolates performed better at the cooler temperatures, indicating that these may be more active during cooler periods, when the other life stages of A. tsugae are present. For this reason, the use of isolates active at cooler temperatures for field applications in the spring or fall should be considered. Further, while this study demonstrates the ability of fungi to cause mortality in populations of A. tsugae , the field application of a mycoinsecticide may not yield the same results, and any effective application of fungi for A. tsugae management would require a novel approach.
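The cost-of-scale estimate proposed in the Discussion — agar surface area needed to produce one liter of suspension at an isolate's LC₅₀ rate — reduces to a one-line calculation. A sketch; the LC₅₀ of 7.9 × 10⁵ conidia per ml is the 1080 value reported in the Results, while the yield of 2 × 10⁸ conidia per cm² is an assumed figure for illustration:

```python
# Sketch of the proposed "cost of scale": agar surface area needed to make
# one liter of suspension at an isolate's LC50 rate. Yield value is assumed.

def medium_area_cm2(lc50_conidia_per_ml, yield_conidia_per_cm2, liters=1.0):
    conidia_needed = lc50_conidia_per_ml * 1000.0 * liters  # 1 l = 1000 ml
    return conidia_needed / yield_conidia_per_cm2

print(f"{medium_area_cm2(7.9e5, 2e8):.2f} cm^2 per liter")  # 3.95
```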
Associate Editor: Fernando Vega was editor of this paper.

Abstract

Fungi associated with the hemlock woolly adelgid, Adelges tsugae Annand (Hemiptera: Adelgidae), were collected throughout the eastern USA and southern China. Twenty fungal genera were identified, as were 79 entomopathogenic isolates, including Lecanicillium lecanii (Zimmermann) (Hypocreales: Incertae sedis), Isaria farinosa (Holm: Fries.) (Cordycipitaceae), Beauveria bassiana (Balsamo) (Hyphomycetes), and Fusarium spp. (Nectriaceae). The remaining fungal genera associated with insect cadavers were similar for both the USA and China collections, although the abundance of Acremonium (Hypocreaceae) was greater in China. The entomopathogenic isolates were assayed for efficacy against Myzus persicae (Sulzer) (Homoptera: Aphididae) and yielded mortality ranging from 3 to 92%. Ten isolates demonstrating the highest efficacy were further assessed against field-collected A. tsugae under laboratory conditions. Overall, two B. bassiana , one L. lecanii , and a strain of Metarhizium anisopliae (Metchnikoff) (Hypocreales: Clavicipitaceae) demonstrated significantly higher efficacy against A. tsugae than the others. Isolates were further evaluated for conidial production, germination rate, and colony growth at four temperatures representative of field conditions. All isolates were determined to be mesophiles with optimal temperatures between 25 and 30° C. In general, conidial production increased with temperature, though two I. farinosa produced significantly more conidia at cooler temperatures. When efficacy values were compared with conidial production and temperature tolerances, Agricultural Research Service Collection of Entomopathogenic Fungi (ARSEF) 1080, 5170, and 5798 had characteristics comparable to the industrial B. bassiana strain GHA.
Acknowledgements

We thank Timothy Tigner, Virginia Dept. of Forestry; George Keoch, New Jersey Dept. of Environmental Management; Charles Burnham and Kenneth Gooch, Massachusetts Dept. of Environmental Management; and David Orwig of Harvard Forest for identifying the collection sites in the eastern USA. We also thank Wenxia Zhao, Li Guang-Wu, and Zhang Yongan of the Chinese Academy of Forestry, Beijing, for identifying the collection sites in southern China. Scott Costa of the University of Vermont helped with the statistical analyses, and James Boone helped with the collection of A. tsugae cadavers in China. Funding for this work was provided, in part, by the US Dept. of Agriculture Forest Service, #FS 42-97-003, and through the support of Finch, Pruyn & Co., Glens Falls, NY. Funding for the research conducted in China was provided by the US Dept. of Agriculture, Foreign Agricultural Service, Research and Scientific Exchange Division, and by the Chinese Academy of Forestry, Institute of Forest Protection. This article was prepared in partial fulfillment for the degree of Master of Science for WRR at the Dept. of Plant and Soil Science, University of Vermont.

Abbreviations

ARSEF, Agricultural Research Service Collection of Entomopathogenic Fungi; SDAY, Sabouraud dextrose agar and yeast; GHA, Beauveria bassiana strain GHA, the active ingredient in BotaniGard.
CC BY
J Insect Sci. 2010 Jun 11; 10:62
Introduction

Species of the genus Rhagoletis are important pests of fruits such as apples, cherries, tomatoes, walnuts, and blueberries. They are equally important as the focus of the debate about the possibility of sympatric speciation via the formation of host races on new host plants ( Bush 1966 ; Berlocher and Feder 2002 ). In the case of the apple host race of Rhagoletis pomonella , two key adaptations arose approximately 150 years ago in the ancestral (and still extant) hawthorn race that allowed colonization of apple. One is alteration of the olfactory response so that both sexes are attracted to the odor of the new host apple ( Linn et al. 2003 , 2004 ; Dambroski et al. 2005 ), and the other is shifting life history phenology to match the fruit ripening time of apple ( Filchack et al. 2000 ). This study is the first attempt to catalog genes involved in olfaction in Rhagoletis by carrying out an expressed sequence tag (EST) project on the antennae and maxillary palps of Rhagoletis suavis (Loew) (Diptera: Tephritidae). This species was used because it can be obtained more easily in the large numbers required for an EST project on olfactory organs than R. pomonella can. Many features of the molecular biology of olfaction in Rhagoletis can be anticipated from what is known of olfaction in Drosophila melanogaster , which is a key model organism for studying olfaction ( Rützler and Zwiebel 2005 ; Hallem et al. 2006 ; Vosshall and Stocker 2007 ). Two major gene families involved are the odorant binding proteins (OBPs) ( Hekmat-Scafe et al. 2002 ) and the odorant receptors (ORs) ( Robertson et al. 2003 ). OBPs are usually highly expressed, which makes detection in antennal EST projects likely (e.g. Robertson et al. 1999 ); whereas ORs are generally expressed at such low levels that they are difficult to obtain with this method. Based on the D.
melanogaster genome, it was anticipated that the most important recoveries from this EST project would be key olfactory gene products such as ORs and OBPs. However, other classes of genes have been proposed as having a possible role in olfaction, such as chemosensory proteins ( Briand et al. 2002 ; Lartigue et al. 2002 ) and odorant degrading enzymes, as well as genes that are of general interest.
Materials and Methods

Flies and collection of antennae and palps

Collection of large numbers of Rhagoletis flies is most easily accomplished by rearing larvae from infested fruit ( Rhagoletis life history is described by Boller and Prokopy 1975 ). In the fall of 2000, approximately 50,000 R. suavis (Loew) larvae were reared from black walnut, Juglans nigra L. (Fagales: Juglandaceae), fruit from sites near White Heath, Illinois (Piatt County). Pupae were placed in a 4° C cold room to break diapause and then removed in batches throughout the spring of 2001. Emerging flies were placed in cages with food and water ( Prokopy and Bush 1973 ) until they could be processed. Processing was carried out as rapidly as possible after eclosion because young adults were assumed to have the highest expression of olfactory receptors. Heads from live flies were removed and accumulated at -80° C. The day before RNA extraction, the frozen heads were shaken on a soil sieve to harvest antennae. Maxillary palps and major head bristles that may also have chemoreceptors were harvested incidentally. Maxillary palps have sensilla used in odor recognition and express gustatory receptors, but mRNA would not have been obtained from major bristles because the cell bodies are not in the bristles. The shaking and sieving was not severe enough to break the heads, so there was no contamination from brain or eye tissues.

RNA extraction and cDNA library construction

Total RNA was isolated from antennae and maxillary palps using a guanidinium thiocyanate/phenol-chloroform extraction protocol (RNA Isolation Kit, Stratagene, www.stratagene.com ). mRNA was purified from total RNA using a Poly(A) Quik® mRNA Isolation Kit (Stratagene, www.stratagene.com ), which utilizes an oligo-dT cellulose column. A unidirectional plasmid cytomegalovirus-polymerase chain reaction cDNA library primed with oligo-dT was constructed by Stratagene using PCR amplification.
The plasmid library was transformed into Stratagene's host strain Epicurian Coli® XL-10 Gold™. For further details of molecular methods and results see Ramsdell ( 2004 ).

Clone sampling and DNA sequencing

Plasmid clones were sampled by plating the library onto LB-kanamycin agar and picking colonies. Colonies were individually transferred to 96-well plates. Each well contained 80 μl of a 30% (v/v) glycerol-LB mixture. Six plates were prepared and submitted to the W.M. Keck Center for Comparative and Functional Genomics (University of Illinois at Urbana-Champaign) for sequencing from the 5′ end using ABI automation. Clones of interest were cultured and purified, and the insert was sequenced from both directions when necessary.

Sequence analysis

Sequences were edited with Microsoft Excel® and BBEdit Lite (Bare Bones Software, Inc., www.barebones.com ). DNA Strider v1.1 ( Marck 1988 ) was used for protein translations, to find open reading frames, convert reverse complement sequence reads, and generate Kyte-Doolittle hydropathy plots ( Kyte and Doolittle 1982 ). BLAST ( Altschul et al. 1990 , 1997 ) was used with networked servers (National Center for Biotechnology Information; http://www.ncbi.nlm.nih.gov ) to find the most similar sequence matches to the R. suavis ESTs in the GenBank databases. Significantly similar matches had E values of 10⁻⁴ or lower. An initial screen using the tblastn option (translated DNA query searching translated DNA database) was followed with both nucleotide and protein BLAST searches (blastn, blastx, blastp) of the largest open reading frames. Searches were generally restricted to Diptera. Sequences of interest were aligned using Clustal X 2.0 ( Larkin et al. 2007 ) using default settings. EST sequences were deposited in the dbEST EST database at the National Center for Biotechnology Information (Accessions EX453814–EX454354). To show the relationships of the Drosophila melanogaster OR sequences that are most similar to the R.
suavis OR, a neighbor-joining tree of corrected distances was built using Clustal X ( Larkin et al. 2007 ). Bootstrapping was performed with Clustal X with 10,000 pseudoreplications.
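The pairwise amino acid identities reported in the Results were computed from alignments like these. A minimal sketch of one common convention (identity over aligned columns where neither sequence has a gap — the paper does not state which convention was used), with an invented toy alignment:

```python
# Pairwise amino-acid identity over a Clustal-style alignment, counted only
# over columns where neither sequence carries a gap. Example data invented.

def percent_identity(aligned_a, aligned_b):
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

print(percent_identity("MKV-LACD", "MKVQLGCD"))  # 6 of 7 gap-free columns
```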
Results

Recovery of ESTs

A total of 544 clones was sequenced, with an average length of 532.02 ± SE 9.88 bp (range 14 to 967 bp). A wide variety of gene transcripts was obtained. As expected from a normalized library, 418 (76.8%) of the sequences were unique. The largest number of duplicates was 18, for a sequence similar to DmCG13095 (a peptidase). Of the 544 total sequences, 186 had no obvious ORF and did not produce a significant BLASTx match with a known protein sequence in GenBank. Of the 358 sequences with an ORF, 86 produced either a weak match with a known gene or a low E-value match with a sequence of unknown function. Of the 313 sequences with a significant match to a sequence with a known function, 37 were mitochondrial and 276 were nuclear. As expected, protein BLAST searches yielded much smaller E values than did nucleotide searches. The exceptions involved nucleotide matches with sequences from other tephritid flies ( R. pomonella and the medfly Ceratitis capitata ), which usually resulted in the smallest E values, presumably because 3′ UTRs retained some sequence similarity. A representative set of the nuclear matches is shown in Table 1 . Given that mRNA was extracted from antennae and maxillary palps, it is not surprising that 48 (9%) of the sequences had a function or putative function relating to chemoreception. Also, 24 of the sequences had a known or possible role in development. This finding is not surprising because the source flies were young adults that were not fully mature. Also included in Table 1 are a few sequences that have been implicated in diapause and life history; such genes were not the target of this study, but they are noted because diapause is critical to host race formation in Rhagoletis . Although they do not appear to play a role in diapause initiation, heat shock loci can be up-regulated during diapause ( Rinehart et al. 2007 ).
Chemosensory proteins

Thirteen sequences were recovered that coded for two different chemosensory proteins (CSPs), RsCSP1 and RsCSP2. The R. suavis CSPs matched only chemosensory proteins in the public databases and were identified as belonging to the conserved domain of the CSP family. The D. melanogaster proteins Antennal Protein 10 (A10 or OS-D) and Ejaculatory Bulb Protein III (PEBme III) were the best matches for RsCSP1 and RsCSP2, respectively. A10 and RsCSP1 had a pairwise amino acid identity of 66%, and RsCSP2 and PEBme III were 82% identical. The R. suavis CSPs have an amino acid identity of 45.7%; the mature forms are 50.9% identical. RsCSP1 was 155 amino acids in length, including a signal peptide of 21 amino acids, and RsCSP2 had a length of 127 with its 18 amino acid signal.

Odorant binding proteins

Nine OBPs, RsObp1 to RsObp9, were recovered. All had top matches to dipteran OBPs in the public databases. The Kyte-Doolittle hydropathy plots of the nine proteins showed typical OBP profiles with hydrophobic signal peptides ( Peng and Leal 2001 ). Including their signal peptides, the OBPs ranged in length from 124 to 164 amino acids. Overall, the R. suavis OBPs were diverse and showed little conservation of amino acid residues. The mature OBPs had mean pairwise amino acid identities of 19.9%, with a range of 7.4 to 55.9%. Signal peptides were 15 to 26 amino acids in length ( Ramsdell 2004 ).

Odorant receptor protein

The R. suavis OR sequence (EX453813, 634 bp) was identified as an OR because a protein BLAST search of a 450 bp/150 amino acid ORF significantly (2E-04) matched DmOr49a . Resequencing of the clone from both ends revealed an unambiguous match with two D. melanogaster OR sequences: DmOr49a (4E-56, amino acid identity = 31%) and DmOr85f (1E-37, amino acid identity = 26%). The alignment of these three sequences is shown in Figure 1 . Based on the alignment, it is likely that a few amino acids are missing at the N-terminus of the R.
suavis sequence. To increase the likelihood that the nearest known homolog of the R. suavis receptor was found, the nine Drosophila OR sequences ranked in order of decreasing E value between DmOr49a and an Anopheles gambiae receptor (AGAP001912, 8E-28) were included in the neighbor-joining tree analysis. The resulting neighbor-joining tree ( Figure 2 , which shows only the relevant part of the tree, including Or85f from Drosophila pseudoobscura ) supports the conclusion that the D. melanogaster homolog of the R. suavis odorant receptor sequence, henceforth RsOr1 , was DmOr49a . The RsOr1 sequence clearly showed the characteristic hydropathy plot of a 7-transmembrane protein, with alternating hydrophobic and hydrophilic regions ( Figure 3 ).
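The hydropathy plots behind these observations are a sliding-window average of the Kyte-Doolittle (1982) residue scale; window means above roughly 1.6 over a 19-residue window are the conventional hint of a transmembrane helix. A sketch; the scale values are the published ones, while the example sequence is invented:

```python
# Kyte-Doolittle hydropathy profile with a sliding window, the computation
# behind plots like Figure 3. Example sequence is artificial.

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq, window=19):
    """Mean KD score for each window position along the sequence."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# A strongly hydrophobic stretch followed by charged residues:
peaks = hydropathy_profile("I" * 10 + "LVLVLVLVL" + "D" * 10, window=19)
print(max(peaks))  # ~4.25, well above the ~1.6 transmembrane threshold
```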
Discussion Chemosensory proteins The function of CSPs is not clear at this point. They are highly expressed in insect antennae, and some work supports a role as olfactory ligand transporters ( Briand et al. 2002 ; Lartigue et al. 2002 ). Recent work in Bombyx mori , however, indicates that they are commonly expressed in many parts of the body in addition to antennae ( Gong et al. 2007 ). The fact that two different CSPs were recovered in this small study of 544 ESTs indicates that, consistent with other work, CSPs are highly expressed in antennae, but their possible role in Rhagoletis olfaction remains uncertain. Odorant binding proteins Drosophila melanogaster has 51 OBPs ( Hallem et al. 2006 ; Hallem and Carlson 2006 ). Thus the recovery of nine different R. suavis OBP sequences, all with D. melanogaster orthologues ( Table 1 ), from only 544 ESTs suggests that most, if not all, of the R. suavis OBPs could be recovered by a modestly more extensive EST study. The exact role that OBPs could play in host specificity remains unknown; however, it is quite likely that they play a significant part. Recent work on Drosophila pheromone reception demonstrates both that OBPs are necessary for chemoreception and that some are highly specific for particular odorants ( Xu et al. 2005 ). The R. suavis OBPs have a mean pairwise amino acid identity of about 20%, which is typical for phylogenetically distant members of the OBP gene family ( Robertson et al. 1999 ). Their diversity, coupled with their apparent homology to D. melanogaster OBPs, makes them good candidates for use as genetic tools in studies of acalypteran and other dipteran lineages. The odorant receptor sequence Odorant receptors are believed to play a critical role in host-finding behavior in insects, yet they are difficult to obtain without completely sequenced genomes. 
Only a few odorant receptor sequences have been discovered in insect EST projects (and none of these are published), suggesting a low rate of expression. Indeed, Vosshall et al. ( 1999 ) noted that D. melanogaster ORs were present in fewer than 1 in 500,000 clones in an antennal library. This is far lower than the rate of 1 OR in 544 clones observed here, but it is likely that a Rhagoletis genome will be necessary to obtain a complete set of OR genes. RsOr1 is significant as the first reported putative ligand-binding receptor from a tephritid fly. It is not the first tephritid receptor; that distinction belongs to a receptor recovered from C. capitata by Larsson et al. ( 2004 ). However, the C. capitata OR was homologous to the atypical, “non-canonical” Or83b , which plays a role in localizing conventional or “canonical” receptors to the membrane and is highly conserved across insects ( Jones et al. 2005 ). But Or83b does not bind odorant ligands, and, thus, its homologs are unlikely to play a direct part in host plant adaptation. RsOr1 , on the other hand, was clearly homologous to the canonical DmOr49a . Unfortunately, it is not possible at this point to speculate on the volatile, or volatiles, that elicit a response from RsOr1 , as the ligands of DmOr49a have not yet been determined ( Hallem and Carlson 2006 ). However, it is probable that the OR sequences, or their expression patterns, or both, differ substantially between R. suavis and the apple maggot R. pomonella . The fruit volatiles of apples are characterized by high concentrations of esters ( Linn et al. 2003 ; Souleyre et al. 2005 ), while those of the ancestral host of R. pomonella , hawthorns, are characterized by ethyl acetate, long-chain alcohols, and various aldehydes ( Linn et al. 2003 ). But a completely different spectrum of volatiles, dominated by terpenes and terpenoids, occurs in walnut fruits ( Henneman et al. 2002 ). 
Moreover, many of the walnut terpenoids, such as β-pinene, limonene, β-caryophyllene, and α-humulene ( Henneman et al. 2002 ), did not elicit any responses from the (incomplete) set of ORs tested by Hallem and Carlson ( 2006 ). Thus R. suavis may provide insights into insect olfaction that are not possible with Drosophila . R. suavis may also be a good species with which to study the various roles of odorant degrading enzymes in olfaction. As pointed out by Rützler and Zwiebel ( 2005 ), odorant degrading enzymes are necessary both to remove the signaling molecule after a cell response has been initiated and because chemosensory systems must be open to the environment. Odorant degrading enzymes may have a secondary role of degrading toxic odorants before they can cause cellular damage. One of the major components of walnut fruit odor is limonene, which is used as an insecticide and also causes “spontaneous stimulation of sensory nerves” ( Weinzierl 1998 , p. 106; mechanism not known). Detoxification of limonene in the cutworm Spodoptera is reported to be similar to mammalian detoxification ( Miyazawa et al. 1998 ), where oxidative degradation by cytochrome P450s appears to be the most important pathway (e.g., Miyazawa et al. 2002 ). No cytochrome P450 sequences were recovered in this study, but they represent one of several pathways that should be studied in the olfaction of phytophagous insects. While tremendous strides have been made in understanding the molecular biology of chemosensation in recent years ( Rützler and Zwiebel 2005 ; Hallem et al. 2006 ; Vosshall and Stocker 2007 ), we are still very far from being able to understand the relative importance for host adaptation of peripheral vs. central processes, sequence vs. expression differences, or even the relative importance of the different classes of genes involved. Koop et al. 
( 2008 ) have recently demonstrated that expression differences for both ORs and OBPs have been involved in the adaptation of Drosophila sechellia to its food plant Morinda citrifolia . But more classes of molecules will need to be included in such future studies. ODEs and CSPs will certainly need to be added. But even genes that seemingly have little to do with olfaction may be important. For example, Hsp70 genes could affect receptor function in chemosensory cells because of their role in guiding the folding of proteins ( Bukau et al. 2006 ).
Associate Editor: Zhijian Tu was editor of this paper. Rhagoletis fruit flies are important both as major agricultural pests and as model organisms for the study of adaptation to new host plants and host race formation. Response to fruit odor plays a critical role in such adaptation. To better understand olfaction in Rhagoletis , an expressed sequence tag (EST) study was carried out on the antennae and maxillary palps of Rhagoletis suavis (Loew) (Diptera: Tephritidae), a common pest of walnuts in the eastern United States. After cDNA cloning and sequencing, 544 ESTs were annotated. Of these, 66% had an open reading frame and could be matched to a previously sequenced gene. Based on BLAST sequence homology, 9% (49 of 544 sequences) were nuclear genes potentially involved in olfaction. The most significant finding is a putative odorant receptor (OR), RsOr1 , that is homologous to Drosophila melanogaster Or49a and Or85f . This is the first tephritid OR discovered that might recognize a specific odorant. Other olfactory genes recovered included odorant binding proteins, chemosensory proteins, and putative odorant degrading enzymes. Keywords
Acknowledgments We thank Steve Ramsdell for the many hours he spent collecting and processing black walnuts. Without his assistance, obtaining the 50,000 flies necessary for this study would not have been possible. We thank an anonymous reviewer for the suggestion that Hsp70 genes could play a role in olfaction. NSF DEB-99-77011, NSF DEB 06-14528, and AG 200735604-17886 provided support. Abbreviations CSP, chemosensory protein; EST, expressed sequence tag; OBP, odorant binding protein; OR, odorant receptor
CC BY
no
2022-01-12 16:13:46
J Insect Sci. 2010 Jun 3; 10:51
oa_package/ba/73/PMC3014805.tar.gz
PMC3014806
20672979
Introduction The apple clearwing, Synanthedon myopaeformis (Borkhausen) (Lepidoptera: Sesiidae), is a xylophagous species that attacks pome and stone fruit trees ( Mustafa and Sharaf 1994 ; Canadian Food Inspection Agency 2006 ). The larval form of this insect lives under the bark of fruit trees, especially apple ( Malus ), but sometimes pear ( Pyrus ), almond ( Prunus amygdalus Batsch), and a few other closely related plant species ( Bartsch 2004 ). The larvae, located under the bark of the trunk and thick branches, bore deep sub-cortical galleries 20 to 25 mm long and cut into the phloem ( Dickler 1976 ; Iren and Bulut 1981 ). The control of this pest is difficult because the adults have a long emergence period and the larvae develop inside the trunk and thick branches. Failure to prevent injury can lead to reduced tree vigor and yield ( Iren et al. 1984 ; Kovancı 1986 ). Until the 1980's, S. myopaeformis had been regarded as a secondary pest of apple trees weakened by other factors, but in the last decade it has become a serious pest of apple trees in Antalya, in southwestern Turkey, and other parts of the country (Zeki et al. 1998). This can be attributed to changes in apple production technology and pest control strategies ( Ateyyat 2006 ). Chemical treatments that were locally applied only onto the trunk and thick branches, or generally applied onto the entire tree using broad spectrum insecticides, previously provided control of apple clearwing ( Ulu et al. 1983 ; Iren et al. 1984 ; Maçan et al. 1987 ; Kılıç et al. 1988 ; Ateyyat 2005 ). According to several previous studies ( Dickler 1976 ; Frankenhuyzen and Jansen 1978 ; Ateyyat and Al-Antary 2006 ), trunk treatment with various materials (carboxyl methyl cellulose, endosulfan, ethyl-parathion, mineral oil, polyvinyl acetate, etc.) against this pest reduced S. myopaeformis populations below the economic threshold, which is, according to the Technical Bulletin of the Turkish Ministry of Agriculture (Zeki et al. 
1998), 8–10 larvae per tree during March–October. Also, Maçan et al. ( 1987 ) reported that only trunk and thick branch spraying with several broad spectrum insecticides (including chlorpyriphos ethyl and azinphos methyl) effectively controlled larvae of S. myopaeformis . However, many of these broad spectrum insecticides are less frequently recommended in integrated pest management (IPM) programs due to their negative effects on natural enemies ( Trematerra 1993 ; Balázs et al. 1996 ). The need for new materials to reduce S. myopaeformis populations prompted field studies to determine the potential of several materials as control agents and to determine whether trunk treatment alone could be enough to reduce S. myopaeformis populations.
Materials and Methods Test materials The test materials used were cotton seed oil [Antbirlik Corp. Ltd., Antalya, Turkey; density = 0.94 g/ml, containing linoleic acid (49–58%), palmitic acid (22–26%), oleic acid (15–20%), and a 10% mixture of arachidic acid, behenic acid, and lignoceric acid], lime [Taspinar Ltd., Antalya; hydrated, 481 kg/m³; the primary active component is calcium carbonate, additional components are calcium oxide, magnesium oxide, and magnesium carbonate], and used motor oil [the Castrol-Turkish distributor, Antalya; weight (w) = 20W-50]. The choice of cotton seed oil and used motor oil was based upon comprehensive data on the use of oils as insecticides and acaricides ( Chapman 1967 ; Butler et al. 1988 ; Willett and Westigard 1988 ; Butler and Henneberry 1990 ; Erler 2004 ). Lime is traditionally used in Turkey to protect park and garden trees from insects and fungal pathogens by whitewashing their trunks. Experimental site and design Trials were carried out in a 0.37 ha orchard comprised of 98 ‘Starking’ and eight ‘Golden Delicious’ trees (the latter were excluded from the study) in Korkuteli, located at an altitude of ∼1000 m near Antalya, during the 2004 and 2005 growing seasons. The orchard had been abandoned and free from pesticide sprays since 1998. The trees were 19–20 years old and heavily infested by S. myopaeformis . The trees were grouped for treatments in rows, and treatments were applied in a completely randomized block design with three replications, with a water-treated control plot in each replicate; each plot consisted of eight trees. Applications Four applications were made during each growing season. The first was made at the start of adult emergence, when the first moths were detected in the bait traps during the first half of May. Detection of adult emergence and observation of the flight of adults were based on bait traps. Other applications were made at one month intervals following the first. 
All test materials were applied onto the surface of tree trunks and thick branches using a large hand brush. The entire trunk area and the first 50 cm of the primary thick branches, ≥16 cm in diameter, were treated with the test materials. The oils were applied directly (without emulsification). The lime was applied as an aqueous suspension (30% lime in water). Sampling and data collection In each year, treatments were evaluated by counts of trapped adults and exuviae. Adult trapping was carried out in both years by placing bait traps from the beginning of April to the end of September. Cylindrical plastic containers (18 × 18 cm; diameter × height) modified for this purpose were used as bait traps. Each trap contained 0.5 l of bait consisting of 80% water, 20% grape molasses, and 2–3 g yeast. Six traps per treatment (2 per plot) were positioned at a height of about 1.6 m in apple trees randomly selected from the center areas of each plot. All traps were checked and cleaned at weekly intervals until late September. The numbers of exuviae protruding from the bark of tree trunks and thick branches were also determined weekly. On each sample date, the entire circumference of the trunk and the first 50 cm of the treated main branches were inspected for exuviae on each of 6 trees per treatment (two per plot). The trees were selected at random in each plot at the beginning of the study and marked with colored plastic strips. To avoid recounting the same skins at subsequent samplings, the exuviae protruding from the bark were pulled out with fine forceps after being counted. Statistical analysis Data obtained from the weekly samplings were analyzed by ANOVA ( SAS Institute 2001 ) and converted to yearly mean numbers of adults caught per trap or of exuviae per tree for each treatment. Tukey's test was used to separate differences in means among the treatments. 
Yearly percent decreases in adult catches and exuviae for each treatment were also calculated by using a formula [decrease (%) = 100 × (A–B)/A, where A is yearly mean number of adult catches per trap or of exuviae per tree in the control; and B is yearly mean number of adult catches per trap or of exuviae per tree in the treatment, as defined by Rice and Coats ( 1994 )]. In addition, the weekly mean numbers of adult catches and exuviae are presented in graphic form.
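The percent-decrease calculation of Rice and Coats (1994) quoted above can be sketched in a few lines; the trap counts in the example are illustrative, not data from this study:

```python
def percent_decrease(control_mean, treatment_mean):
    """Rice & Coats (1994): decrease (%) = 100 * (A - B) / A,
    where A is the yearly mean in the control (per trap or per tree)
    and B is the corresponding yearly mean in the treatment."""
    if control_mean == 0:
        raise ValueError("control mean must be non-zero")
    return 100.0 * (control_mean - treatment_mean) / control_mean

# Illustrative numbers only: a treatment mean of 4.5 adults/trap
# against a control mean of 24.0 adults/trap.
print(percent_decrease(24.0, 4.5))  # 81.25 (% decrease)
```

Note that the formula is relative to the control, so a treatment mean equal to the control gives 0% and a treatment mean of zero gives 100%.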
Results and Discussion The weekly mean numbers of adults caught in bait traps and of exuviae protruding from the bark of tree trunks and thick branches in both growing seasons are presented in Figure 1 . Adult flight began at the beginning of May, reached a maximum at the beginning or in the middle of July, and ended at the end of September. The weekly total numbers in treated plots were less than those in water-treated control plots in both years. However, numbers varied considerably from year to year, and more substantial treatment effects were observed in the second year of the study ( Figure 1 ). Test materials varied in their impact on S. myopaeformis ( Table 1 ). Used motor oil, cotton seed oil, and lime treatments in the first year of the study did not differ significantly from each other or from the water-treated control in terms of the mean numbers of adults caught and exuviae. This suggests that the treatments had minimal impact on developing larvae in the first year. In contrast, there were significant differences between the test materials and the water-treated control in the second year ( P < 0.05). When compared on the basis of mean numbers for the same materials over the two years, the used motor oil and cotton seed oil treatments each differed significantly between years ( P < 0.05). There were no significant differences between the lime treatments in the two years ( P > 0.05). The used motor oil and cotton seed oil treatments caused 32.8% and 20.7% reductions in the mean number of adults caught in the first year, and 81.3% and 70.8% in the second year, respectively. Reductions in mean numbers of exuviae were at approximately the same rates ( Table 1 ). The lime treatment caused a significant reduction only in the mean number of adults caught in the second year of the study. However, this decrease was significantly less than those in the oil treatments and was not statistically different from the water-only control. 
There was no difference in the appearance of treated trees compared with water-treated controls that would indicate phytotoxicity during the study. However, it must be kept in mind that oil treatments can block stomatal openings and, over time, can cause damage by reducing plant transpiration. In addition, more attention must be paid to the use of used motor oil due to its different composition from oils used as insecticides. It is suggested that it should not be used before detailed investigations of its phytotoxicity are made and it receives approval for use from the Turkish Ministry of Agriculture. The results obtained from the study revealed S. myopaeformis flight activity from early May to late September, with adult flight reaching a maximum at the beginning or in the middle of July. This coincides with the work of Al-Antary and Ateyyat ( 2006 ), except for the flight peak of the insect. Al-Antary and Ateyyat ( 2006 ) reported that adults of S. myopaeformis had two flight peaks in the Ash-Shoubak apple-growing region of southern Jordan; the first was between 11th and 18th June and the second was in mid-July. The results also revealed that tree trunk treatment with oil substances significantly reduced the insect populations compared with the water-treated control. The lower numbers of trapped adults and of pupal cases protruding from the bark in the used motor oil and cotton seed oil treatments in the second year may be attributed to the low egg-laying activity of the females on oily surfaces in the first year of the study. Taking into consideration results from previous studies ( Butler and Henneberry 1991 ; Fenigstein et al. 2001 ; Erler 2004 ; Erler et al. 2007 ), it can be surmised that the oils likely act as settling and oviposition deterrents. 
Also, Dickler ( 1976 ) reported that apple clearwing populations were enormously reduced in two years by trunk treatments with a combination of ethyl parathion and mineral oil, with applications made at one month intervals from the beginning of April to the end of August. In a previous study by Ateyyat and Al-Antary ( 2006 ), various treatments, including (1) using a flexible wire to mechanically kill the larvae, (2) painting the trunks of trees with a mixture of water, copper sulfate, petroleum oil, and Dursban® (chlorpyrifos), (3) mounding soil to cover the graft union area, and (4) wrapping a cloth veil around the main tree trunk from its base up to a height of 80 cm, were evaluated for the control of S. myopaeformis , and the insecticidal paint treatment was found to cause the greatest population reduction. Irrespective of origin, oils are generally considered physical suffocants which interfere with respiration in insects and mites ( Chapman 1967 ; Butler et al. 1988 ; Willett and Westigard 1988 ; Butler and Henneberry 1990 , 1991 ). In the present study, decreases in the mean numbers of adults caught and exuviae in the used motor oil and cotton seed oil plots in the first year of the study were substantially less than those in the second year. Taking the economic threshold value into consideration, the first year results of the oil treatments were above the economic threshold whereas the second year results were below it (Zeki et al. 1998). This indicates that the larvae developing inside the tree trunks and thick branches were affected by the oil treatments only to a limited extent. In conclusion, trunk treatment with used motor oil and cotton seed oil caused significant decreases in the population density of S. myopaeformis , showing that it may be a useful method to control this pest. 
Moreover, the relatively low cost of these oils compared to most commercial insecticides, low or no mammalian toxicity and reduced potential for development of arthropod resistance make them attractive candidates for use in the control of this pest.
Associate Editor: Eileen Cullen was editor of this paper. The efficacy of trunk treatment with three materials, cotton seed oil, lime, and used motor oil, was evaluated for the control of the apple clearwing, Synanthedon myopaeformis (Borkhausen) (Lepidoptera: Sesiidae), in an apple orchard during two successive years (2004 and 2005). The weekly total numbers of adult catches and exuviae were recorded each year. No treatment caused significant reductions in the mean numbers of adults caught in bait traps or of exuviae protruding from the bark of tree trunks and thick branches in the first year of the study, whereas all of them differed significantly from each other or from the water-treated control in the second year ( P < 0.05). A comparison of the mean numbers of adult catches and exuviae in the two years revealed significant differences between the used motor oil and cotton seed oil treatments ( P < 0.05). The lime treatments in the two years differed significantly in terms of adult catches, but not exuviae ( P < 0.05). In the second year, compared with those in water-treated control plots, the mean numbers of adult catches and exuviae decreased by 81.3% and 88.3% in the used motor oil-treated plots, and by 70.8% and 83.3% in the cotton seed oil-treated plots, respectively. Although population reductions in the lime treatment were significant in the second year, the effect was at a much reduced level in comparison to the two oil treatments. The overall results suggest that used motor oil and cotton seed oil may have potential for the control of apple clearwing. Keywords
Acknowledgements This research was supported by the Scientific Projects Administration Unit of Akdeniz University. The author is thankful to the growers for their assistance in applications, and to an anonymous reviewer for his comments and suggestions for improvement of the manuscript.
CC BY
no
2022-01-12 16:13:47
J Insect Sci. 2010 Jun 14; 10:63
oa_package/e0/2f/PMC3014806.tar.gz
PMC3014807
20569129
Introduction The white-backed planthopper, Sogatella furcifera (Horvath) (Hemiptera: Delphacidae), is widely distributed throughout Asia and is considered a major pest of rice in the region. The nymphs and adults suck the plant sap, reducing plant vigor, delaying tillering, stunting growth, yellowing leaves, and shriveling grains ( Khan and Saxena 1984 ), and heavy infestation may cause hopperburn, the complete death of the rice plants ( Pathak 1968 ). S. furcifera has caused intermittent famines in eastern Asia since ancient times, and became conspicuous in southeast Asia after the so-called Green Revolution of the 1960s ( Noda et al. 2008 ). Because of its long-distance migration, S. furcifera can cause sudden devastation to rice. To date, studies on S. furcifera have focused mainly upon its biology ( Xiao and Tang 2007 ), occurrence ( Seino et al. 1987 ), varietal resistance ( Heinrichs and Rapusas 1985 ), integrated pest management ( Litsinger et al. 2005 ), and its interactions with Nilaparvata lugens ( Matsumura and Suzuki 2003 ). However, there is little knowledge of its genetic diversity and population genetic structure. As S. furcifera has been a very serious long-range migratory pest since the early 1970s, many scientists have studied its migration using meteorological data. From 1977 to 1980, the main insect populations of China were investigated using high-altitude aerial netting, ship-based catches, recapture of colour-labelled insects, dissection of female ovaries, radar monitoring, atmospheric current analysis, etc., and the results showed that in early spring S. furcifera arrived continuously from the Indochina Peninsula. S. furcifera from the Greater Mekong Subregion, including Thailand and Vietnam, may be carried over by the southwesterly atmospheric currents, and its migratory period in south China lasts from March to July. However, small numbers of S. 
furcifera can still overwinter on spring rice and ratoon rice in southwestern Yunnan Province and southern Guangxi Province ( National Coordinated Research Group 1981 ). Moreover, the rice area of the Indochina Peninsula, much of which lies within the Greater Mekong Subregion, is among the largest in Southeast Asia ( Bui 1991 ). Therefore, the pest in China came directly from the Red River Delta, and most of the initial sources were from the Mekong Delta ( Wu et al. 1997 ). This propensity for long-range flight, combined with the small body size of S. furcifera and the fact that flight activity is nocturnal, means that it is extremely difficult to observe insect migration while it is in progress ( Chapman et al. 2003 ). However, knowledge of the genetic diversity and structure of S. furcifera may help in inferring its migratory route and is also essential to the establishment of effective forecasting strategies for this long-range migratory insect. DNA-based molecular markers have been used in a wide range of taxa. Inter-simple sequence repeat (ISSR) markers, which are cost-effective, rapid, and sensitive, are extremely useful for assessing genetic variability in some species ( Gui et al. 2008 ; Zietkiewicz et al. 1994 ). In contrast to other dominant markers, such as Random Amplified Polymorphic DNA, the ISSR technique uses longer primers, thus allowing higher annealing temperatures and greater reproducibility of the DNA fragments ( Fang et al. 1997 ). The high degree of polymorphism, low cost, and good repeatability of ISSRs have allowed the successful detection of intra-specific polymorphisms and characterization of genetic diversity in various species, such as the peanut, Arachis hypogaea ( Raina et al. 2001 ), two species of cyclically parthenogenetic aphids, Acyrthosiphon pisum and Pemphigus obesinymphae ( Abbot et al. 2001 ), and the mosquito Aedes aegypti ( Abbot et al. 2001 ). The fact that the population of S. 
furcifera in China has not been suppressed may be due to ineffective coordination in implementing the control strategy, as well as to the lack of comprehensive knowledge (including population structure and diversity) of the pest. A sound understanding of the genetic diversity, migration/dispersal patterns, and environmental adaptability of S. furcifera is essential to the development of rational control strategies. The present study was designed to use ISSR analysis to investigate the genetic structure and diversity of S. furcifera among geographic regions in part of the Greater Mekong Subregion. The main objectives of the study were to (i) assess bio-geographic relationships and genetic similarities across several populations in Yunnan Province and its adjacent Southeast Asian countries; and (ii) provide useful information for modeling and forecasting outbreaks of S. furcifera and for designing sustainable strategies to manage the pest.
Materials and Methods Sampling A total of 47 populations of S. furcifera were sampled across 11 geographical regions in Yunnan Province and three Southeast Asian countries: Vietnam, Laos, and Myanmar. The longitude, latitude, and altitude of each sampled population, the sample size, and the collection dates were recorded ( Figure 1 ; Appendix — available online). Approximately 30 individuals per population were collected and stored in 80% alcohol until ISSR analysis. DNA extraction Total genomic DNA was isolated from S. furcifera using a modified SDS method ( Wang et al. 2001 ). The DNA concentration and 260/280 nm absorbance ratio were determined using a GeneQuant RNA/DNA calculator spectrophotometer (Pharmacia Biotech, www.apbiotech.com ). All samples were stored at -20°C until needed. ISSR-PCR amplification Sixty-seven ISSR primers were selected from the ISSR primer set (UBC primer set #9) developed by the University of British Columbia Biotechnology Laboratory ( www.biotech.ubc.ca ) and synthesized by Sangon Biological Engineering Technique & Service, Co. Ltd. ( www.sangon.com ). These primers were initially screened, and the 14 primers that produced bright, clear, and reproducible fragments were utilized for further study ( Table 1 ). Each PCR amplification reaction mixture consisted of 2 μl reaction buffer, 2.5 mM/L Mg 2+ , 2 μM/L dNTPs, 0.8 μM/L primer, 1 U Taq DNA polymerase (TaKaRa, www.takarabio.com ), and 30 ng DNA template in a total volume of 20 μl; 2.0% deionized formamide was added to the PCR mixture to increase band clarity. Amplification was performed in a Mastercycler Gradient (Eppendorf, www.eppendorf.com ) under the following cycle profile: 4 min at 94°C; followed by 35 cycles of 1 min at 94°C, 1 min annealing (temperature depending on the primer used) ( Table 1 ), and 2 min extension at 72°C; ending with 10 min at 72°C for a final extension. 
The PCR products were separated on 2% agarose gels in 0.5 × TBE buffer and detected by staining with GeneFinder. Band size was compared with a 100 bp DNA ladder (TaKaRa) and determined by spectrophotometry using an ImageQuant 300 (Beckman Instruments Inc., www.beckmancoulter.com ). Data analysis The ISSR bands were analyzed to estimate the genetic variation among and within the populations studied. The banding patterns were recorded using a gel documentation system (Bio-Rad Gel Doc 1000, www.biorad.com ). Amplified fragments were scored for the presence or absence of bands (1 = present; 0 = absent; 9 = not amplified, missing value). Since ISSR markers are dominantly inherited, each band was assumed to represent the phenotype at a single biallelic locus ( Williams et al. 1990 ). Bands with differing intensity were treated equally, but only bright and discernible fragments ranging from 220 to 2000 bp were included in the statistical analysis. To evaluate the discriminatory power of the molecular markers, the polymorphic information content and marker index were calculated according to Gui et al. ( 2008 ). The ISSR molecular data were analyzed using the NTSYS-pc (Numerical Taxonomy System) version 2.10 computer program ( Rohlf 2002 ). The SIMQUAL (similarity for qualitative data) module was used to calculate the genetic similarities. Similarity matrices were then converted into distance matrices (distance = 1 - similarity). Based on these matrices, dendrograms were constructed using the Neighbor-Joining (NJ) method. In addition to NJ cluster analysis, UPGMA (unweighted pair-group method with arithmetic averages) clustering was performed on the same data sets. All computations were performed using NTSYS-pc software. Bootstrap analysis was performed with 500 replicates on the NJ trees using the FreeTree software (available at: http://www.natur.cuni.cz/∼flegr/freetree.htm ). 
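The band-scoring and distance steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not the NTSYS-pc SIMQUAL routine; the tiny 0/1 band profiles and the choice of the Dice coefficient are assumptions for demonstration only.

```python
# Illustrative sketch (not the NTSYS-pc SIMQUAL implementation):
# pairwise Dice similarity on 0/1 band profiles, then the paper's
# conversion distance = 1 - similarity. Profiles are invented.
def dice_similarity(a, b):
    """Dice coefficient for two equal-length 0/1 band profiles."""
    shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    total = sum(a) + sum(b)
    return 2 * shared / total if total else 0.0

def distance_matrix(profiles):
    """Convert pairwise similarities into distances (1 - similarity)."""
    n = len(profiles)
    return [[1.0 - dice_similarity(profiles[i], profiles[j])
             for j in range(n)] for i in range(n)]

# Three hypothetical populations scored at four band positions
pops = [[1, 0, 1, 1], [1, 1, 1, 0], [0, 1, 0, 0]]
d = distance_matrix(pops)
```

A matrix like `d` is the input the paper feeds into NJ and UPGMA clustering.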
In order to estimate the congruency among dendrograms, cophenetic values (rcp) based on the results of the NJ and UPGMA cluster analyses were calculated to measure the quality of the clustering ( Rohlf and Sokal 1981 ). The cophenetic matrices for each index type were computed and compared using the Mantel matrix correspondence test. This test yields a product-moment correlation (r) that provides a measure of the relatedness between two matrices. In order to partition the total phenotypic variance into within- and among-population components, the non-parametric Analysis of Molecular Variance (AMOVA) program 1.5 was applied as described by Excoffier ( 1993 ), with the variance partitioned among individuals within populations, among populations within regions, and among regions. A permutational procedure (1000 random permutations) was then used to provide tests of significance for each of the hierarchical variance components based on the original inter-individual squared-distance matrix. Homogeneity of molecular variance among populations was tested with Bartlett's statistics. The input files for AMOVA were prepared with the aid of AMOVA-PREP version 1.01 ( Miller 1998 ). Geographical distances between pairs of populations were calculated using the latitude, longitude, and elevation of each population. The Mantel Z-statistic (1000 permutations; routine MXCOMP in NTSYS) was used to test the correlation between geographical distances and genetic distances ( Mantel 1967 ). As one of the most important methods of ordination analysis, principal coordinate analysis (PCOA) was performed using the NTSYS-pc version 2.10 software ( Rohlf 2002 ) to examine the resolving power of the ordination. It constructs a new set of orthogonal coordinate axes that capture the maximum variance in as few dimensions as possible.
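The Mantel procedure used here (both for comparing dendrograms and for the geographic-versus-genetic test) can be illustrated with a toy permutation test. This is a minimal sketch, not the MXCOMP routine in NTSYS-pc; the example matrices and the number of permutations are invented for demonstration.

```python
# Toy Mantel test: correlate the off-diagonal entries of two symmetric
# distance matrices, then permute the rows/columns of one matrix to
# build a null distribution for the observed correlation.
import random

def mantel(geo, gen, n_perm=1000, seed=0):
    n = len(geo)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(order):
        xs = [geo[i][j] for i, j in idx]
        ys = [gen[order[i]][order[j]] for i, j in idx]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0

    r_obs = corr(list(range(n)))
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_perm)
               if corr(rng.sample(range(n), n)) >= r_obs)
    return r_obs, hits / n_perm

# Identical matrices must give a perfect correlation (r = 1.0)
m = [[0, 1, 2, 3], [1, 0, 4, 5], [2, 4, 0, 6], [3, 5, 6, 0]]
r_obs, p_val = mantel(m, m, n_perm=200)
```

Permuting whole rows and columns together (rather than individual cells) is what preserves the distance-matrix structure under the null hypothesis.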
Results ISSR profile Fourteen ISSR primers were selected from a total of 67 on the basis of the clarity, usability, and reproducibility of their banding patterns; the data are shown in Table 1 . The 14 primers produced a total of 121 bright and discernible bands, 81.8% (99 bands) of which were polymorphic. The number of bands produced by individual primers ranged from 6 to 11 with an average of 8.6. The size of the polymorphic bands ranged from 220 bp to 2000 bp. Representative banding patterns are shown in Figure 2 . The primers differed greatly in their potential usability as indicated by the number of scorable amplified bands, e.g. the primer (AC) 8 (AT)T produced as many as 11 bands, while primer (CA) 8 A amplified only 6 bands. The average polymorphic information content varied from 0.28 ((AG) 8 CC) to 0.74 ((AC) 8 AG), whereas the marker index ranged from 2.48 ((AC) 8 TC) to 5.84 ((AG) 8 TA). The mean polymorphic information content and marker index of the 14 primers were 0.55 and 4.63, respectively. ISSRs that exhibited a high polymorphic information content value, together with a higher multiplex ratio, are likely to be efficient for the analysis of intra-specific genetic variation in a species like S. furcifera for which no prior sequence information is available. Genetic diversity of S. furcifera To assess the overall distribution of genetic diversity, the AMOVA program was used to analyze the distance matrix; the data are shown in Table 2 . AMOVA provides an estimate of population differentiation that is equivalent to the F ST statistic when the degree of relatedness among the genetic variants is evaluated ( Belaj et al. 2004 ). The AMOVA analysis showed highly significant (p < 0.001) genetic differentiation among regions. A large proportion of the genetic variation (79.84%) resided among regions, whereas only 20.16% resided among populations within regions. 
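The paper computes the polymorphic information content and marker index after Gui et al. ( 2008 ) without reproducing the formulas. A common formulation for dominant markers, PIC = 2f(1 − f) per band with the marker index taken as mean PIC times the number of polymorphic bands, can be sketched as below; both the formulation and the band frequencies are assumptions, not the paper's exact calculation.

```python
# Assumed dominant-marker formulation (the paper follows Gui et al. 2008,
# whose exact formulas are not given in this text).
def pic_dominant(freqs):
    """Mean polymorphic information content across a primer's bands,
    using PIC = 2f(1 - f), where f is the band frequency."""
    vals = [2 * f * (1 - f) for f in freqs]
    return sum(vals) / len(vals)

def marker_index(freqs):
    """Marker index = mean PIC x number of polymorphic bands
    (bands with frequency strictly between 0 and 1)."""
    n_poly = sum(1 for f in freqs if 0 < f < 1)
    return pic_dominant(freqs) * n_poly

# Hypothetical primer with two bands, each present in half the samples
mi = marker_index([0.5, 0.5])
```

Under this formulation a band at frequency 0.5 is maximally informative (PIC = 0.5), which is why primers with many intermediate-frequency bands score the highest marker index.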
The F ST value showed a much higher differentiation ( F ST = 0.72) among regions, indicating a high level of genetic differentiation. Cluster and ordination analysis The similarity matrices calculated from the polymorphic ISSR bands showed highly variable genetic distances among the different populations. The genetic distance was highest (1.1052) between the northeast population (Daguan, Zhaotong) and the Red River Delta population (Yuanyang, Honghe) in Yunnan Province, and lowest (0.1475) between the southern population (Mengzi, Honghe) in Yunnan Province and the Maten population in Vietnam. The dendrogram generated from the NJ cluster analysis showed the genetic relationships among the 47 S. furcifera populations; the data are shown in Figure 3 . Populations collected from similar geographic regions generally grouped in the same cluster or nearby clusters. Two major clusters were visible in the NJ cluster analysis: one consisting mostly of southwestern populations of Yunnan Province and the Myanmar populations, and the other consisting of southeastern and central populations of Yunnan Province (including all the Red River Delta populations) plus the Vietnam and Laos populations. The UPGMA tree showed a pattern of clustering similar to the NJ tree, and the results of the Mantel test indicated a highly significant cophenetic correlation (r = 0.7748, p = 0.0001) between the NJ tree and the UPGMA tree. The results of the PCOA showed three main groups in the two-dimensional PCOA plot; the data are shown in Figure 4 . Group 1 included populations from Myanmar and the adjacent Lincang and Baoshan prefectures of Yunnan Province (Cluster II in the NJ tree). 
Group 2 included populations from Laos, a small subset of the Red River Delta populations, and Puer prefecture of Yunnan Province (the bottom populations of Cluster I in the NJ tree, see Figure 3 ), and Group 3 included populations from northern Vietnam, most of the Red River Delta, and Wenshan prefecture of Yunnan Province (the upper populations of Cluster I in the NJ tree, see Figure 3 ). The first two components of the PCOA explained 15.94% of the total variation, and the first three components explained 22.62% of the total variation (data not shown). The results of the PCOA and the NJ tree were broadly concordant, reflecting the geographical distribution of S. furcifera in the Greater Mekong Subregion. However, a few populations did not fit into any group (e.g. sites WS32 and YX20 did not fall into any group in the PCOA), and the Red River Delta populations were split between Group 2 and Group 3 in the PCOA while belonging to Cluster I in the NJ tree. The lack of fit was also reflected in the results of the Mantel test, in which the geographical and genetic matrices showed no overall correlation (r = 0.2230, p = 0.8448), indicating that geographic isolation did not shape the present population genetic structure of S. furcifera.
Discussion The ISSR marker approach provided a means of examining bio-geographic relationships and genetic similarity within and among populations of S. furcifera in Yunnan Province, China and in the potential source areas of this migratory pest in the adjoining countries to the south and southwest. ISSRs can be informative at various levels of genetic variation ( Hess et al. 2000 ), and they can be advantageous when time and material costs preclude the development of more robust markers (e.g., locus-specific SSRs). ISSR markers are also highly reproducible due to stringent annealing temperatures, long primers, and low primer-template mismatch ( Wolfe et al. 1998 ). Although detection with more sensitive techniques (autoradiography or silver staining) on polyacrylamide gels may increase the resolution of co-migrating fragments ( Godwin et al. 1997 ), ISSRs, in comparison with allozymes and Random Amplified Polymorphic DNA, can reveal polymorphisms without more elaborate detection protocols ( Esselman et al. 1999 ). Thus, for biological questions where genomic fingerprinting is appropriate ( Abbot et al. 2001 ), ISSR is a valuable marker for rapid, large-scale screening of genetic variation in animal populations. The 14 selected ISSR primers were di-nucleotide repeats, mostly poly (AC) or poly (AG). This corresponds with studies of other animal species (such as Drosophila melanogaster ) in which poly (AC) and poly (AG) repeats are common repeat motifs across animal groups ( Schug et al. 1998 ). Between the NJ and UPGMA clustering analyses, the dendrogram derived from the NJ method was preferred because it minimizes the sum of branch lengths at each stage of clustering the operational taxonomic units and starts with a star-like tree, making it less affected by the presence of admixture among populations (Ruiz-Linares 1994). The PCOA analysis supported a broadly regional distribution of S. furcifera. 
However, a few populations did not group with their adjacent populations, which suggests that there is genetic heterogeneity within each geographical region, and further suggests that most of the S. furcifera in Yunnan migrate from the adjoining countries to the south and southwest via different routes or at different times, while a small proportion may originate from the offspring of overwintering individuals in southern Yunnan Province. Because S. furcifera is a migratory insect pest, the seasons in which the insects were collected are very important. In the present study, all of the samples were collected from May to July. Since S. furcifera in the Greater Mekong Subregion may be carried over by the southwest airflow and its migratory period in south China is from March to July ( National Coordinated Research Group for Whiteback planthopper 1981 ), most of the insects collected were probably migratory individuals. Based on the AMOVA analysis, 79.84% of the total variation was found among regions, while 20.16% of the variance was attributable to population divergence within regions. The population genetic structure of a species is affected by a number of evolutionary factors including its mating system, gene flow, and mode of reproduction, as well as natural selection ( Hamrick and Godt 1989 ), and the mating system plays a critical role in shaping population genetic structure. ISSR markers can potentially distinguish many individuals, but they cannot provide direct information on the mating system due to their dominant mode of inheritance ( Wolfe and Liston, 1998 ). In addition, for a highly migratory insect, one has to consider population characteristics such as insecticide resistance, virulence against resistant rice varieties, and the wing-form response to density, which can vary considerably among populations collected from different geographic locations. 
Migration is a fundamental population process and a common feature of insect life cycles, and its study is crucial to understanding the dynamics and persistence of insect populations ( Dingle 1996 ). Chapman et al. ( 2006 ) conducted the first study of the migration ecology and distribution of the individual species in the green lacewing Chrysoperla carnea species complex. They studied seasonal variation in the migration strategies of the complex, demonstrated the migratory capabilities of the individual species comprising the C. carnea group of lacewings, and indicated that understanding the population ecology of an insect species is necessary to investigate the complete migration syndrome. Xian et al. ( 2007 ) used significant El Niño-Southern Oscillation indices as key factors to build forecasting models for the early immigration of the brown planthopper, Nilaparvata lugens, by step-wise multiple linear regression analysis. The results showed that these indices are applicable to medium- and long-term forecasting of N. lugens population dynamics. For a migratory insect, genetic variation is found in all components of the migratory syndrome, and selection for migration results in a change in the frequency of expression of these components, which can be analyzed and predicted using the mathematics of quantitative genetics ( Roff and Fairbairn 2007 ). The variability and genetic basis of migratory behaviors in a spring population of the aphid, Aphis gossypii, in the Yangtze River Valley of China were investigated by Liu et al. ( 2008 ). The tethered flight capacity, takeoff frequency, and takeoff angle of winged A. gossypii were measured, and the genetic basis of population differentiation in migration was investigated through bi-directional selection and cross-breeding experiments. The study provided further evidence that the intra-population variability of migratory behaviors in A. 
gossypii is of genetic origin, and that the migratory line produces winged offspring more readily than the sedentary line. Llewellyn et al. ( 2003 ) used microsatellites to study the migration and genetic structure of the grain aphid, Sitobion avenae, in Britain in relation to climate and clonal fluctuation. Their data sets support the view that the insect is highly migratory, and that an accurate picture relating genetic variability to flight behavior, including migratory ambit, in this group of insects can be built up using microsatellite markers. The present study is the first attempt to assess the genetic diversity of S. furcifera using the ISSR marker technique. It demonstrated the validity and suitability of using ISSR markers to detect the genetic variation among populations of S. furcifera from different regions. There are difficulties in using conventional approaches (e.g. fluorescent marker dyes, radio-isotopes) to discern the exact migratory route of S. furcifera, not only because of the special geographic environment and meteorological conditions of Yunnan Province (high plateau, high altitude, complex hypsography, and varied climate), but also because of the insects' small size, short lifespan, large population sizes, rapid aerial population dilution, and the very long distances over which they may fly ( Loxdale et al. 1993 ). The application of ISSR markers has the potential to overcome many of these challenges and provides an overall understanding of the population relationships of the species. From a fundamental point of view, since the genetic structuring of populations reflects the interaction of genetic drift, mutation, migration, and selection, S. furcifera is of particular interest in this regard. In addition to demonstrating the usefulness of ISSR markers for DNA profiling, the genetic structure among populations of S. furcifera analyzed in this study enables us to infer its evolutionary relationships. 
Based on the results of the study, it can be speculated that S. furcifera migrates to Yunnan Province primarily by two routes: one from northern Vietnam and Laos to the Red River Delta area in southeastern Yunnan Province, China; and the other from Myanmar to the southwestern areas of Yunnan Province such as Lincang and Baoshan prefectures. These two groups of migrants then disperse, spreading through and finally causing outbreaks in the rice-growing areas of the whole province, and even the whole country. This speculation is supported by Wu et al. ( 1997 ), who indicated that S. furcifera migrated to China via two routes: one to southwestern Yunnan on the southwest monsoon from northern Thailand and Myanmar; and the other to the Red River Delta in Yunnan and to Guangxi and Guangdong Provinces on the southwest monsoon from Indochina. The present data do not clearly indicate the migration pattern of S. furcifera, as there is still a dearth of information on its genetic architecture. Phenotypic variation in migratory propensity has long been known, but the genetic basis of such variation is still relatively unexplored. More importantly, although it is recognized that migration is not a single trait but a suite of traits that includes both larval and adult components, more data are needed on the functional and genetic relationships among these traits ( Roff and Fairbairn 2007 ). Therefore, further studies on the wing dimorphism of the insect may enable more effective research into its migratory behavior and population dynamics in various geographic regions, supporting inference of likely dispersal directions and the timing of sudden outbreaks. Such information on the migration patterns of important agricultural insects is essential to developing sustainable pest management strategies.
Associate Editor: Brad Coates was editor of this paper The white-backed planthopper, Sogatella furcifera (Hemiptera: Delphacidae), is a serious pest of rice in Asia. In the present study, inter-simple sequence repeat (ISSR) markers were employed to investigate the genetic diversity and differentiation of 47 populations sampled from 14 prefectures of the Greater Mekong Subregion. A total of 14 selected primers yielded 121 bright and discernible bands, with an average of 8.6 bands per primer. According to the hierarchical analysis of molecular variance (AMOVA), the genetic variation among geographic regions (79.84%) was higher than that among populations within regions (20.16%), and the F ST value was 0.72, indicating a high level of genetic differentiation. Neighbor-Joining cluster analysis of the 47 populations showed two major clusters: one consisting mostly of southwestern Yunnan Province and Myanmar populations, and the other consisting of southeastern and central Yunnan Province populations plus the Vietnam and Laos populations. No significant positive correlation was observed between genetic and geographic distances by the Mantel test (r = 0.2230, p = 0.8448), indicating that geographic isolation did not shape the genetic structure of the sampled S. furcifera populations. This paper provides useful data for understanding and inferring the migration of S. furcifera and offers information for developing sustainable strategies to manage this long-range migratory pest.
Acknowledgements We thank Dr. N. S. Talekar from Yunnan Agriculture University, China for reviewing the original manuscript. This research was funded by the National Basic Research and Development Program, China (Grant No. 2006CB100204 and Grant No. 2009CB119200), the National Natural Sciences Foundation of China (Grant No. 30860069), and the Yunnan Provincial Natural Sciences Fund (Grant No. 2009CD057). Abbreviations ISSR, inter-simple sequence repeat; NJ, Neighbor-Joining; NTSYS, Numerical Taxonomy System; PCOA, principal coordinate analysis; UPGMA, unweighted pair-group method with arithmetic averages
J Insect Sci. 2010 Jun 3; 10:52
Introduction The superorganism metaphor suggests that the subterranean nest a colony of ants constructs should be regarded as a functional part of the colony superorganism. The particular architecture of the nests of different species can be hypothesized to serve superorganismal functions in particular ways suited to the biology of each species. The study of nest architecture can therefore potentially lead to important understanding about how ant colonies work. Unfortunately, the study of subterranean ant-nest architecture is in its infancy. Although a few descriptive studies have begun to outline the range of architectural variation within and among species (reviewed by Tschinkel 2004a , 2004b ), understanding of the functional aspects of this variation is far in the future. The situation has recently improved, but most reports have provided only verbal descriptions or simple drawings based on excavations, and very few included a census of the colony or quantitative details of the architecture. The architecture of the nests of the fungus-gardening ants has received more attention than that of most other groups ( Jonkman 1980a , 1980b ; Moreira et al. 1995 , 2004a , 2004b ; Mueller and Wcislo 1998 ; Solomon et al. 2004 ; Fernández-Marín et al. 2005 ; Klingenberg et al. 2006 ; Verza et al. 2007 ). Nevertheless, ants clearly excavate species-typical subterranean nests, a conclusion strengthened by the more recent work of Tschinkel ( 1987 , 1999 , 2003 , 2004b , 2005 ), Mikheyev and Tschinkel ( 2004 ), and others ( Ruano and Tinaut 1993 ; Plaza and Tinaut 1989 ; Moreira et al. 2004a , 2004b ; Forti et al. 2007 ). Despite an enormous range of size, a large proportion of ant nests are composed of two basic elements, more or less vertical shafts connecting horizontal chambers ( Tschinkel 2003 ). The architectural variation among species is largely the result of variation in the form, spacing, and size of these elements. 
Nests with similar architecture can vary in depth from a few centimeters to 4 m or more ( Tschinkel 2003 ). Because nest excavation is a group activity, the manner in which the architecture results from self-organized behavior has stimulated experimental and modeling analysis of ant tunneling activity ( Buhl et al. 2006 ; Rasse and Deneubourg 2001 ). Gas gradients in ant nests have been modeled because they have been suggested as templates for nest construction ( Cox and Blanchard 2000 ; Tschinkel 2004b ). New study methods include x-ray computed tomography, which has been applied to the study of the growth of small Argentine ant nests in the laboratory ( Halley et al. 2005 ). Trace fossils interpreted as having been constructed by ants have also received considerable interest (for a review, see Hasiotis 2003 ). As in any young field, however, the structure and range of variation of the nests of a variety of ant species must be described in quantitative terms, as must the distribution of the ants within these structures given that the road to the universal leads via the particular. The present paper provides a description of the nest architecture and its variation for the ant, Odontomachus brunneus (Patton) (Hymenoptera: Formicidae), and together with several previous papers ( Tschinkel, 1987 , 1999 , 2003 , 2004; Mikheyev and Tschinkel, 2003 ), contributes to the beginnings of a systematic study of ant-nest architecture for its own sake.
Materials and Methods The study site All nests of O. brunneus studied were located in an area of sandhills longleaf pine forest 3.2 km southeast of the Tallahassee Regional Airport (30° 37′ 60′′ N, 84° 32′ 28′′ W). The site has a relief of about 10 m; excessively drained, deep sandy soils; and a forest of longleaf pine and turkey oak. The ground cover consisted of sparse wiregrass, shiny blueberry, scattered palmetto, other small shrubs, and scattered leaf-litter patches. The study spanned from August to December 2007. Plaster casting and excavation Nests of O. brunneus were initially located by the characteristic soil depots around the entrance. Identity was confirmed by collection of ants emanating from the nest. For casting, orthodontic plaster (Labstone, Modern Materials, http://heraeus-dental-us.com/en/ourproducts/laboratory_1/mondernmaterials/mondernmaterials_1.aspx ) was mixed with water to form a very thin slurry. The nest entrance was cleared with a portable vacuum cleaner, and a small berm was constructed around it. The plaster slurry was poured directly into the entrance until the nest filled. As the soil drew water from the slurry, more plaster slurry was added to keep the nest filled. After about an hour, the plaster had set sufficiently to be excavated. A pit 0.5 to 1.5 m in depth was dug to one side of the nest, and the cast was then excavated laterally from its side, upper regions first. Casts always broke during excavation and had to be reconstructed later in the laboratory. Metal casting and excavation A few nests were cast in molten aluminum or zinc. The metals were melted in a charcoal-fired kiln and poured directly into the nest entrance. The procedure is described by Tschinkel ( 2010 ). Excavation proceeded as for plaster casts. The advantage of metal casting is that the cast does not break during excavation. These casts were used as intuitive guides during reassembly of the plaster casts and to confirm their structures. 
Cast reconstruction, imaging and measurement The cast pieces were dried and cleaned in the laboratory and the nest reassembled; 5-min epoxy was used to cement the pieces together. The completed cast was laid on a black background and photographed digitally from at least two vantages with a scale. Stereo pairs of photographs (together with a suitable viewer or ocular technique) allow viewing of the cast in three dimensions. The scale in the images allowed various aspects of the casts to be measured. After completion of the photographs, the casts were broken into chambers and connecting shafts, and the chambers photographed with a scale from directly above. Measurements of chamber dimensions and area were made from these images. Dissolution of the casts and census of the ants Finally, the broken cast pieces of each nest were tied into fine-mesh fabric bags and placed in a bucket with slowly running hot water. The top, middle, and bottom thirds of the cast were bagged separately. In 3 to 4 weeks, the hot water dissolved all the plaster and left the remains of the ants in the fabric bags, along with all accompanying materials in the cast. The ant heads were separated from the debris, counted, and mounted on cards with double-stick tape for digital imaging with a scale. Head width across the eyes, head length, and head width at the narrowest part of the head were measured from these images with the included scale. Other significant materials, such as cocoons/brood or possible predators, were also examined and their distribution within the nest determined.
Results (Figures of casts are shown online in the Appendix “Casts of nests”.) The nests of O. brunneus were rather simply structured. Each consisted of a single, more or less vertical shaft connecting a varying number of chambers. Stereo images of these casts are shown in Figures 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, and 19. Surfaces of most casts were fairly rough, indicating rough inner nest walls. In several nests, the ants seem to have broken into the excavations of other animals and incorporated them into their own nests. Use of plant roots was also observed. The upper region of Nest 8 was probably originally a rodent burrow, and the lumpy chambers at the bottoms of Nests 2, 10, and 15 were probably made by other animals, as were the complex tunnels in the upper parts of Nest 15. Nests ranged greatly in size, comprising 2 to 17 chambers. Maximum depths ranged from 18 to 184 cm and total chamber area from 28 to 340 cm 2 (Figures 3, 7). Figure 20 shows all of the casts to the same scale and illustrates the changes of nest size, shape, and composition that occur as a nest grows from small to large. In general, all elements of the nest increased simultaneously, including maximum nest depth, mean chamber area, and number of chambers, making the nest proportions (nest “shape”) relatively size-free. Because the plaster casts were dissolved and the workers entombed in them censused, the worker census could be associated with nest characteristics. Not surprisingly, nest size increased with the number of workers in the colony, which ranged from 11 to 177. Each additional worker was associated with an increase in total chamber area of 1.7 cm 2 (total chamber area = 23 + 1.72 (no. of workers); r 2 = 0.69; p< 0.0005), and the mean chamber size increased by 0.1 cm 2 (mean chamber area = 8.37 + 0.104 (no. of workers) ( Figure 21 )). 
The relationship between chamber area and worker number held even when the latter were vertically cumulated into top, middle, and bottom thirds of the nest. Levels with more chamber area had significantly more workers in them (number of workers in level = 3.51 + 0.38 (area in level); R 2 = 26%; F 1,40 = 15.61; p < 0.0003), as expected from the positive relationship between total chamber area and total workers. A plot of the area per worker (not shown) revealed that this value was constant at about 2 cm 2 per worker across most colony sizes, with the exception of two colonies with very few workers (Nos. 5 and 15, excluded from the analysis below). These colonies had probably recently lost workers rather than having excavated relatively larger nests, or perhaps workers were simply outside the nest at the time of casting. Colonies therefore seem to excavate a similar area of chamber for every worker. The dorsal silhouette of workers of O. brunneus measures about 7.5 mm 2 in area if the legs, mandibles, and antennae are excluded, and about 26 mm 2 if the lateral extension of the femurs is included. The worker dorsal silhouette (without legs) therefore occupies an average of 4.8% (SD 2%, two outliers excluded) of the chamber area available per worker, and with legs 15% (SD 8%, two outliers excluded). The nest of the Florida harvester ant, P. badius , contained a mean of 1.4 cm 2 per worker (SD 0.74), of which worker bodies (without legs) took up about 18% (SD 8.4%; unpublished data). Nests of the ant Camponotus socius contained an average of 1.1 cm 2 (SD 0.41) of floor area per worker, of which the worker body (without legs) occupied a mean of 16% (SD 5.4%; unpublished data). P. badius and C. socius are therefore about equally crowded, and both appear to be more than three times as crowded as O. brunneus . Several components of nest size also increased with nest size, measured as total chamber area or total number of chambers. 
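The crowding comparison above is simple arithmetic; the sketch below restates it using the fitted relation total chamber area = 23 + 1.72 × (number of workers) and the ~7.5 mm² dorsal silhouette reported in the text. The function names are hypothetical, and the occupancy figure is a point estimate rather than the per-colony average the authors report.

```python
# Restating the paper's crowding arithmetic (function names hypothetical).
def total_chamber_area(n_workers):
    """Fitted relation from the casts (r^2 = 0.69):
    total chamber area (cm^2) = 23 + 1.72 x number of workers."""
    return 23 + 1.72 * n_workers

def percent_occupied(silhouette_mm2, floor_cm2_per_worker):
    """Percentage of available chamber floor covered by one worker's
    dorsal silhouette (1 cm^2 = 100 mm^2)."""
    return 100 * silhouette_mm2 / (floor_cm2_per_worker * 100)

# O. brunneus: ~7.5 mm^2 body (legs excluded) on ~2 cm^2 of floor per
# worker gives roughly 3.75% occupancy, the same order of magnitude as
# the 4.8% per-colony mean reported in the text.
ob_occupancy = percent_occupied(7.5, 2.0)
area_100 = total_chamber_area(100)
```

The per-colony mean differs slightly from the point estimate because area per worker varies around 2 cm² across colonies.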
Averaged over all chambers, the mean chamber area was about 15 cm 2 (SD 14.6 cm 2 ), but averaged by colony, it increased with colony size (mean area = 10.7 + 0.033 (total area); r 2 = 30%). Nests grew through deepening and the addition of more and progressively larger chambers ( Figure 22 ). For every 100 cm 2 increase in total chamber area, the nest was 36 cm deeper and had 3 additional chambers (max. nest depth = 63.1638 + 0.3605 (total area); number of chambers = 4.3805 + 0.0304 (total area)). Because all of these measures were correlated with each other, other ways of describing the changes associated with nest growth are also possible. For example, the addition of each chamber increased total chamber area by about 18 cm 2 , and each additional chamber averaged about 0.6 cm 2 larger than the previous chamber, so chambers in the smallest nests averaged about 9 cm 2 and those in the largest about 31 cm 2 . Moreover, the addition of each chamber was associated with an increase in nest depth of 8.7 cm. The chamber shapes ranged from nearly circular to somewhat oval or irregular ( Figure 23 ), but with a few exceptions (mostly the bottom chambers), they did not deviate strongly from circularity; that is, they were not strongly lobed. More than 70% of chambers had circularities greater than 0.6. Chamber area was not evenly vertically distributed within the nest. For comparison of nests of differing sizes, chamber area was converted to percentage of the total area and depth to deciles (1 decile = 1/10th of maximum nest depth), yielding a size-free estimate of nest “shape” ( Mosimann and James 1979 ). Figure 24 shows that, on average, a higher proportion of total area occurred at greater depth, i.e., that nests were bottom-heavy (one-way ANOVA: F 1,9 = 4.91; p < 0.00002). The size-free shapes of small, medium, and large nests did not differ significantly, so only the overall average is shown in Figure 24 .
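The text does not define its circularity measure; assuming the conventional shape factor 4πA/P² (an assumption on our part), a perfect circle scores 1.0 and elongated or lobed outlines score lower, so the reported threshold of 0.6 excludes strongly lobed chambers:

```python
import math

def circularity(area, perimeter):
    """Conventional shape factor 4*pi*A / P^2: 1.0 for a circle, smaller
    for elongated or lobed outlines. (The paper does not state its exact
    formula; this standard definition is an assumption.)"""
    return 4 * math.pi * area / perimeter ** 2

# A unit circle (area pi, perimeter 2*pi) scores 1.0.
print(round(circularity(math.pi, 2 * math.pi), 3))  # -> 1.0
# A 2:1 rectangle (area 2, perimeter 6) scores lower.
print(round(circularity(2.0, 6.0), 3))              # -> 0.698
```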
All nests had one or more chambers near the surface and usually ended in a chamber at the bottom. The spacing between the chambers was least near the surface and near the bottom and greatest at the middle depths (one-way ANOVA: F 1,9 = 3.34; p < 0.002) ( Figure 25 ), a trend that can also be seen in the images in Figure 20 . Although, during excavation, workers seemed to be more abundant in the upper and lower levels of the nest, this trend was not significant. Seasonal effects Most of the nest casts were made in September and December 2007; only one each was made in October and November. Worker size, as measured by head width, was greater (1.70 mm) in the November–December nests than in the September–October nests (1.62 mm) ( t -test: t 10 = 2.36; p < 0.05), perhaps as a result of improved nutrition later in the season, because total number of workers did not differ. No other measure differed by season. Worker head width, averaged by nest level, ranged from 1.53 mm to 1.8 mm and was isometric with head length (regression: HL = 0.31 + 1.02 HW; F 1,29 = 260; p < 0.00001; R 2 = 90%). Worker heads do not therefore change shape with increasing head size. Other nest contents In addition to workers and their parts, the dissolved casts yielded other materials, including seeds, parts of other ant species, other insect parts, and diverse plant material, as well as cocoons and larvae. O. brunneus often decorates its nest crater with caterpillar frass, seeds, and other debris, but whether this tendency is biologically meaningful is unknown. The insect parts found in the casts were probably the remains of prey. Cocoon distribution in nests and throughout the season varied but did not show any clear trends. Discussion No matter what their size, the nests of O. brunneus can be recognized by their characteristic appearance; that is, the size-free shape does not change much with nest size, as is apparent in Figure 20 . 
This independence of size-free shape from total size is also apparent in the nests of Pogonomyrmex badius and Camponotus socius ( Tschinkel 2004b , 2005 ) and means that workers need only follow simple, local iterative rules to produce a nest of similar shape but any size. In laboratory “sand sandwiches,” workers of Messor sancta excavated networks of tunnels, some features of which were invariant across network size ( Buhl et al. 2006 ). The nests of O. brunneus are simple vertical shafts connecting simple, horizontal chambers, a widespread architectural unit among subterranean ant nests. The ancestors of the ants probably dug such burrows, though probably with only one or a very few chambers. The chamber floors probably provide the work and living space, and their total area is thus proportional to the number of ants in the nest; about 2 cm 2 of floor space is provided per worker, of which a minor fraction is actually occupied by the worker's body. Available data show that O. brunneus is one-third as crowded as P. badius and C. socius . Such variation among species in crowding may affect the rates of interaction among workers and could thus be used to “tune” colony functions that depend on interaction rates. These calculations, however, are means for the entire nest; in reality, workers are not distributed evenly and are often much more crowded in the lower parts of the nest ( Tschinkel 1999 ). All but one of the nests used in the present study were at the same location, a very dry, open, longleaf-pine forest several meters above the water table. Nests at a moister, heavily oak-shaded site near a temporary pond were considerably shallower during the summer but deepened in the winter, when most of the ants could be found in the nest bottom (L. Hart and W. R. Tschinkel, unpublished data). This trend suggests that soil and physical conditions affect the characteristics of the nest.
The degree to which soil and other abiotic conditions affect ant nest architecture is an unexplored subject. The nests of O. brunneus differ somewhat from those of several other species in being moderately more bottom-heavy than top-heavy. To date, the majority of ant nests are reported to have more chamber area near the surface than near the bottom and chambers closer together near the surface than near the bottom. A fairly common feature of the nests of O. brunneus was their use of cavities made by other animals, including rodents and other ants as well as hollow roots. Such cavities can be recognized because their architecture is very different from that produced by O. brunneus excavation. Whether the maker was evicted or had already abandoned the cavity is unknown, but the use of such cavities clearly saves work. This phenomenon has also been observed in other species of ants (unpublished data). The worker census included only workers that were in the nest at the time the cast was made. Any foragers afield at the time were not included, and their number is unknown. Filling subterranean ant nests with a casting material can provide more information than just the nest's architecture. It was used to census nests and to determine the distribution of workers within the vertical nest structure. By using paraffin wax to make nest casts, the workers, brood, and alates were fixed at their momentary locations within their ant nests (unpublished data). Melting these casts in sections provides an accurate picture of the distribution of all colony members, brood and food within the vertical nest structure. The recovered ants can also be used for other studies, such as morphometry. Compared to a simple excavation, such casting methods offer the advantage that the casting material finds and fills all the nooks, crannies, and cavities of the nest, capturing all the nest contents in place, something that is difficult to achieve during direct excavation of an uncast nest. 
The connection between nest architecture and colony function has received little attention, in part because most studies have been carried out in single-chambered laboratory nests that do not resemble the natural nest. Brian ( 1956 ) showed that ants in smaller groups rear brood more efficiently than those in larger groups, a result confirmed by Porter and Tschinkel ( 1985 ). Nest architecture combines with the tendency of all ants to sort themselves and their brood to produce social structure within the nest. In most species, workers move centrifugally away from the brood as they (the workers) age ( Hölldobler and Wilson 1990 ; Sendova Franks and Franks 1995 ), a movement that is connected to age polyethism. In deep nests such as those of the Florida harvester ant, Pogonomyrmex badius , and the winter-active ant, Prenolepis imparis , this movement sorts workers by age such that the youngest are located mostly in the bottom third of the nest and the oldest (defenders and foragers) near and on the surface ( Tschinkel 1987 , 1999 ). In view of the near universality of the centrifugal movement of aging workers away from the brood pile, nest architecture and spatial social structure seem likely to be functional and to contribute to colony fitness. Determining whether these links exist and how they function should be a central question in the study of ant nest architecture.
Abstract The architecture of the subterranean nests of the ant Odontomachus brunneus (Patton) (Hymenoptera: Formicidae) was studied by means of casts with dental plaster or molten metal. The entombed ants were later recovered by dissolution of plaster casts in hot running water. O. brunneus excavates simple nests, each consisting of a single, vertical shaft connecting more or less horizontal, simple chambers. Nests contained between 11 and 177 workers, from 2 to 17 chambers, and 28 to 340 cm 2 of chamber floor space and reached a maximum depth of 18 to 184 cm. All components of nest size increased simultaneously during nest enlargement, including number of chambers, mean chamber size, and nest depth, making the nest shape (proportions) relatively size-independent. Regardless of nest size, all nests had approximately 2 cm 2 of chamber floor space per worker. Chambers were closer together near the top and the bottom of the nest than in the middle, and total chamber area was greater near the bottom. Colonies occasionally incorporated cavities made by other animals into their nests.
Acknowledgements We are grateful to Martin Figueroa, Lauren Hart, Michael Paisner, and Emily Owens Silva for help in excavating casts.
J Insect Sci. 2010 Jun 14; 10:64
Introduction Parasitoids find hosts by responding to cues from their surroundings. A good cue reliably signals the presence and quality of a host and is detectable over an appropriate distance ( Vet and Dicke 1992 ; Hilker and McNeil 2007 ). These cues are, to a large extent, volatile odors derived from the host or from the host plant as a result of injury or the presence of saliva triggering production of attractive volatiles ( Turlings et al. 1995 ; Felton and Eichenseer 1999 ; Kessler and Baldwin 2001 ). Herbivore eggs cause little or no damage to plants, so egg parasitoids must use indirect cues while foraging ( Hilker and Meiners 2006 ; Fatouros et al. 2008b ). Where eggs are closely associated with herbivory, egg parasitoids can use herbivore-associated odor cues. For example, bean plants with oviposition and feeding by the pentatomid bug, Nezara viridula produced volatiles that attract the egg parasitoid Trissolcus basalis ( Colazza et al. 2004 ). Plant odors alone can also be used by egg parasitoids ( Romeis et al. 2005 ), as is the case for Platygaster demades , which is attracted to the odors of apple and pear foliage even without signs of host activity ( Sandanayaka and Charles 2006 ). However, because most plant individuals do not have eggs on them, plant odor alone is an unreliable cue. Some egg parasitoids respond to odors of adult hosts ( Noldus et al. 1991 ; Conti et al. 2003 ; Fatouros et al. 2007 ) such as moth scales, marking pheromones, and sex pheromones that are deposited on plants or eggs during oviposition (i.e., DeLury et al. 1999 ). Finally, host oviposition can induce the plant's emission of volatiles that are attractive to parasitoids. Plants have been shown to respond in various ways to damage caused by oviposition or to chemical recognition of the surface of the eggs or adhesive. 
A literature review by Hilker and Meiners ( 2006 ) identified three studies of plants that produce volatiles that are attractive to parasitoids in response to oviposition including the elm leaf beetle on elm ( Meiners and Hilker 2000 ), the pine sawfly on pine ( Hilker et al. 2002 ) and Hemiptera on bean ( Colazza et al. 2004 ). No Lepidoptera have been found to induce volatile odors by oviposition, though the cabbage white butterfly does cause a local change in surface chemistry that arrests parasitoid foraging behavior ( Fatouros et al. 2005 ). Whatever the cues, over time, parasitoid response changes. This can be due to parasitoid age or physiological state (i.e., Amalin et al. 2005 ; Crespo and Castelo 2008 ). For instance, the patch residence time for the parasitoid Lysiphlebus cardui increases with parasitoid age, and younger parasitoids lay more eggs in second and third instars of the host, while older parasitoids show no preference ( Weisser 1994 ). Independent of age, parasitoid response to cues also changes with experience, especially due to learning in association with positive foraging experience (i.e. Bellows 1985 ; van Baaren and Boivin 1998 ; for review see Papaj and Lewis 1993 ). Hyposoter horticola Gravenhorst (Hymenoptera: Ichneumonidae) is a parasitoid of the Glanville fritillary butterfly Melitaea cinxia L. (Lepidoptera: Nymphalidae). In the Åland islands of southwest Finland, Melitaea cinxia lays eggs in clusters on the undersides of leaves of two plant species, Plantago lanceolata L. (Lamiales: Plantaginaceae) and Veronica spicata L. (Lamiales: Plantaginaceae) ( Kuussaari et al. 2004 ). M. cinxia spends up to an hour ovipositing a cluster of eggs. During that time, it touches the leaf with its tarsi and rubs the underside of the leaf with the ovipositor. Melitaea cinxia also attaches the eggs to the leaf with an adhesive substance ( Singer 2004 ).
Although Hyposoter horticola is a parasitoid of larvae, it must find the hosts as eggs because it oviposits into host larvae that have not yet broken out of the eggshell. The host can only be used as a larva inside the egg, so as eggs get older, they get closer to the interval when they can be parasitized. H. horticola finds egg clusters during the two to three weeks before hatching, and monitors them until the eggs are briefly suitable for oviposition ( van Nouhuys and Ehrnsten 2004 ). The vast majority of the M. cinxia egg clusters are on undamaged plants. Based on landscape scale studies of this host-parasitoid interaction, virtually all of the host egg clusters, under natural conditions, are found by the parasitoids (and a fraction of the hosts in each cluster are parasitized), regardless of which plant species they are on and regardless of where they are in the landscape ( van Nouhuys and Hanski 2002 ; van Nouhuys and Kaartinen 2008 ). This report presents a set of experiments addressing the host-finding cues used by H. horticola . Young, medium, and old eggs, as well as host larvae and host plants, were tested to determine whether they emit volatiles that are attractive to H. horticola under laboratory conditions using a Y-tube olfactometer. Outside, in a field cage experiment at the scale of a habitat patch, the ability of parasitoids to find the eggs, using the same cues found to be important in the laboratory tests, was tested. The rationale for the field experiment stemmed from the observation that, while host eggs on V. spicata and P. lanceolata are both used quite successfully in the field, they elicit different responses from H. horticola in the olfactometer. Cues identified as attractive in an olfactometer are expected to correspond to cues used naturally in the field (recent examples include Lou et al. 2006 ; Dormont et al. 2007 ; Zhang et al. 2007 ). However, this is not always the case (i.e. Ngumbi et al.
2005 ), perhaps because in the field an attractive compound may be at low concentration or simply not perceived in a more complex chemical environment ( Hilker and McNiel 2007 ). Furthermore, additional cues may be present in the field, both visual and olfactory, that lead H. horticola to a different destination.
Materials and Methods Hosts, plants, and parasitoids For both the laboratory experiment (2004) and the field experiment (2006), parasitoids were obtained by placing laboratory-reared host egg clusters in natural populations of the host butterfly M. cinxia in Åland, Finland, the summer before each experiment. Eggs on plants were obtained as described below. When the egg cluster was 7 to 14 days old, the infested plant was introduced in the field. After parasitism in the field, the infested plant, now with larvae instead of eggs, was retrieved, and the larvae were reared through a winter diapause until pupation the following spring. After emergence, adult H. horticola were fed a 1:3 honey:water solution and kept individually in plastic vials in a cool environment (9–11° C) until used. The host egg clusters used to collect H. horticola from the field (above), and used for both the olfactometer and field experiments, were obtained using laboratory-reared mated female Melitaea cinxia originating from the Åland islands. M. cinxia were put individually in outdoor oviposition cages with potted V. spicata and P. lanceolata plants. The plants were grown outdoors in pots from field-collected seedlings. After one day, the plant with an egg cluster on it was removed and replaced with a new plant. For the olfactometer experiment testing egg odor alone, the leaf with the eggs on it was cut from the plant after oviposition. When the leaf and eggs had dried, the eggs were removed with a tiny brush and placed in a filter paper cup. The egg clusters then were kept individually in Petri dishes in a growth chamber at a temperature of 11° C at night and 22° C during the day. For tests of plants with eggs on them, the eggs were left on the plant, and the potted plants with eggs on them were kept under the same conditions as the eggs alone. Olfactometer experiments: Tested odor sources in olfactometer experiments To evaluate the response of H.
horticola to the odor of host eggs, host larvae, and host food plants, the behavior of adult females was observed using a Y-tube olfactometer. Similar devices have been used to measure behavioral responses of many parasitoid species and mites to odor sources ( Janssen et al. 1995 ; Castelo et al. 2003 ; Colazza et al. 2004 ; Martínez et al. 2006 ). The olfactometer was a 20 cm Y-shaped glass tube connected to an air pump at one end and a plastic box that contained an odor source at the end of each Y-arm. Air was drawn through a carbon filter, and then from the arms of the tube toward its base. The speed of incoming air in each arm was maintained at a constant 0.7 cm/s throughout the experiments. To eliminate possible effects of visual cues on parasitoid behavior, the walls of the odor source-containing boxes were covered, so Hyposoter horticola could smell, but not see, the stimulus source. The entire olfactometer was in a white plastic box (50 × 40 × 25 cm) that was open at the top. This allowed H. horticola to move within a visually symmetrical environment and reduced disturbances caused by the observer's presence. All trials were conducted between 10:00 and 17:00 h. Before the experiments, H. horticola were removed from the cold, fed honey and water, and kept at ambient temperature for two hours, by which time they were fully active. H. horticola were categorized as young (15 to 20 days old) or old (26 to 33 days old). In natural populations H. horticola live at least 5 weeks ( van Nouhuys and Ehrnsten 2004 ). Sixty-eight unmated female parasitoids without oviposition experience were used. Unmated parasitoids were used because of the difficulty in making them mate in the laboratory. Although mating status could influence their behavior, H. horticola were generally responsive to foraging cues, and there was no reason to believe that their virginity biased their behavior. Because of the limited number of H. horticola available, they were used multiple times.
For each trial, the parasitoid was chosen randomly from the 68 available parasitoids. Because many experimental trials were performed, each individual was used in an average of eight different trials, randomly spaced among experimental days. This procedure allowed us to perform many trials. However, each parasitoid had a different history of experience, and any effect of age could not be separated from the effect of general odor experience. Hyposoter horticola were housed individually, and each had an individual identification number (“wasp ID” in analysis). Host eggs and larvae In this experiment, young and old H. horticola were offered intact M. cinxia egg clusters of different ages as follows: (a) 1-week-old eggs — young H. horticola ( n = 30), (b) 1-week-old eggs — old H. horticola ( n = 30), (c) 2-week-old eggs — young H. horticola ( n = 53), (d) 2-week-old eggs — old H. horticola ( n = 30), (e) 3-week-old eggs — young H. horticola ( n = 26), and (f) 3-week-old eggs — old H. horticola ( n = 15). The egg clusters contained an estimated 100 to 150 eggs. The eggs could not be counted exactly, but by visual estimation it was determined that egg cluster size was not associated with egg cluster age. The order of the treatments was randomized, so in each experiment, H. horticola had different previous experience in the olfactometer. In each test, the eggs were placed on a piece of clean filter paper in one arm of the olfactometer, and the other arm contained only clean filter paper. Extracts from uninfested plants In this set of experiments, the parasitoid response to components of plant odor was tested. Leaves of P. lanceolata and V. spicata were gathered fresh from local, natural populations in the Åland islands. Distilled water, ethanol, and hexane extracts of each plant were used as odor sources, and clean solvent was used as the control. Plant solutions were made by grinding 10 g of leaf in 50 ml solvent (200 mg/ml). Extracts and solvent were presented to H.
horticola as saturated filter paper patches (2 × 2 cm). For each assay, the patches were replaced (one patch-pair per parasitoid), and the side of the Y-tube containing the odor sources was alternated. The treatments were as follows: (a) P. lanceolata hexane extract ( n = 45), (b) V. spicata hexane extract ( n = 45), (c) P. lanceolata ethanol extract ( n = 45), (d) V. spicata ethanol extract ( n = 45), (e) P. lanceolata water extract ( n = 52), and (f) V. spicata water extract ( n = 52). The order of the treatments was randomized, and both young and old H. horticola were used for each treatment. Host egg-infested plants To test whether leaves with M. cinxia eggs on them were attractive to H. horticola , leaves of P. lanceolata and V. spicata harboring M. cinxia egg clusters were placed in the odor source chamber, and clean leaves were used as controls. The leaves with eggs on them were cut off the plant just prior to use, and the control leaves were taken from an eggless plant. The egg clusters each contained 100 to 150 eggs, which did not appear to differ between plant species. Because the number of egg clusters on plants was limited, one egg cluster was used for five to 10 wasps. Again, the position of the odor sources was alternated between assays. H. horticola were presented with the following treatments: (a) P. lanceolata with eggs 5, 8, and 16 days old ( n = 31), and (b) V. spicata with eggs 9 and 16 days old ( n = 31). There were no young (5 day-old) eggs on V. spicata available at the time of the experiment. Field cage experiments with free-flying H. horticola H. horticola were observed foraging for eggs in a large semi-natural outdoor enclosure, in order to elucidate which odor cues might be used successfully in the field. There were seven treatments: P. lanceolata with no eggs (P), V. spicata with no eggs (V), P. lanceolata with M. cinxia eggs (PE), V.
spicata with eggs (VE), each plant species with eggs 5 cm from the plant (P+E and V+E), and a pot with soil and host eggs but no plant (E). For the eggs alone treatment (E) and eggs near plant treatments (P+E and V+E), a cluster of eggs was gently transferred from a plant into a 1 × 1 cm open filter paper cup, and placed on bare soil in a pot. The egg clusters contained 100 to 150 eggs. Relatively old eggs (19 to 22 days) were used because at this age H. horticola were extremely interested in the eggs once they located them. Upon encountering eggs, H. horticola attended to them for approximately 3 to 30 minutes, even if the eggs were not ready for parasitism. This behavior allowed reliable observation of H. horticola visiting the egg clusters ( van Nouhuys and Ehrnsten 2004 ). The experiment took place in a 26 × 32 m mesh-enclosed habitat patch. There were abundant naturally occurring nectar flowers for H. horticola to feed on, but there were no naturally occurring hosts or host food plants. The enclosure was previously used for behavioral experiments using M. cinxia ( Hanski et al. 2006 ) and H. horticola ( van Nouhuys and Kaartinen 2008 ). In each of two trials, there were two replicates of each treatment except the eggs alone (E), which was replicated four times, for a total of 16 observation points. The 16 observation points were set in a randomized grid, 5 × 6 m apart. Several days before the experiment, 23- to 30-day-old unmated adult female H. horticola were individually marked on the back of the thorax using craft paint. In order to do this, they were briefly anesthetized using CO2 gas. Twenty-two individually-marked females were released in the cage at 09:00 (while the cage was in the morning shade, and thus they were not active). Then each of the 16 observation points was observed by walking in a transect, every half hour during H. horticola foraging hours (09:30–18:00) for two days.
The transect walker recorded the number and identity of the parasitoids found at each observation point. For the second trial, a new set of plants and eggs were placed in a re-randomized grid. A second set of 22 individually-marked females was released at 09:00, and the transect was walked every half hour for one day. No observations of H. horticola at the observation points were made on the second day of the second trial because most of the parasitoids disappeared, probably due to predation by an extremely large population of spiders inhabiting the grass and mesh walls of the cage. Statistical analysis For the eggs alone and plant extract olfactometer experiments (experiments 1 and 2) the proportion of H. horticola that went toward a given odor source was analyzed using Chi-square tests. In order to analyze the response of H. horticola to plants with eggs on them (experiment 3), taking into account potential variation due to H. horticola age, H. horticola experience, egg age, and the day of the trial, a logistic regression analysis was performed with egg age (1, 2, or 3 weeks-old), plant species ( P. lanceolata or V. spicata ), date of trial, and H. horticola age as factors. Wasp ID was included in the model as an offset covariate because each parasitoid was used more than once ( H. horticola was chosen randomly from the pool of 68). Date of trial was included because H. horticola behavior is affected by ambient temperature and light, which differed daily. The binary dependent variable took a value of 1 when H. horticola walked toward the eggs and 0 when H. horticola walked toward the control. For the free foraging experiment, the results from the two trials were combined because there was a small amount of data. A Poisson regression was performed on the counts of parasitoids visiting each treatment-type (PE, VE, P+E, V+E or E), and χ 2 goodness of fit tests were used as well. The software package R v. 
1.8.1 ( Venables and Smith 2003 ) was used for the Poisson regression analyses.
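Each two-choice olfactometer comparison reduces to a χ2 goodness-of-fit test of the observed counts against an equal 50:50 split. A minimal sketch in Python (the paper used R; the counts below are hypothetical, chosen only to illustrate the calculation):

```python
from math import erfc, sqrt

def chi_square_two_choice(n_a, n_b):
    """Goodness-of-fit test of two-choice counts against a 50:50 null.

    Returns (chi-square statistic, p-value). With two categories the
    test has 1 degree of freedom, so the upper-tail probability is
    P(X > x) = erfc(sqrt(x / 2)) for a chi-square variable X with 1 df.
    """
    expected = (n_a + n_b) / 2.0
    stat = (n_a - expected) ** 2 / expected + (n_b - expected) ** 2 / expected
    p = erfc(sqrt(stat / 2.0))
    return stat, p

# Hypothetical example: 30 wasps chose the odor arm, 16 chose the control arm.
stat, p = chi_square_two_choice(30, 16)
print(round(stat, 2), round(p, 3))  # 4.26 0.039
```

Because the test has a single degree of freedom, the p-value can be computed directly from the complementary error function rather than requiring a statistics library.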
Results A total of 589 Y-tube behavioral assays was conducted. In 25.4% of these, H. horticola did not move into either arm of the olfactometer during the 10-minute observation. These inconclusive trials were not included in the results. Olfactory response to host eggs and larvae Eggs that were 1 and 3 weeks old were attractive to H. horticola (χ2 37 = 6.74 and χ2 31 = 12.74, p < 0.05, respectively; Figure 1 ), but 2 week-old eggs and 1 day-old larvae were not (χ2 56 = 0.02 and χ2 31 = 3.13, p > 0.05, respectively; Figure 1 ). Attraction varied according to H. horticola age or experience in the olfactometer. While 1 week-old eggs were most attractive to old H. horticola (χ2 18 = 15.21, p < 0.05), 3 week-old eggs were more attractive to young H. horticola (χ2 20 = 10.71, p < 0.05). Olfactory response to extracts from uninfested plants Only the water extract of the host plant P. lanceolata was attractive (χ2 45 = 4.26, p < 0.05 for water; χ2 38 = 0.23, p > 0.05 for ethanol; χ2 33 = 0.12, p > 0.05 for hexane; Figure 2 ). H. horticola were not attracted to any of the V. spicata extracts (χ2 42 = 0.21, χ2 32 = 3.67, and χ2 32 = 2.00; all p > 0.05; Figure 2 ). Olfactory response to egg-infested plants A different pattern emerged when leaves were presented with eggs on them. Overall, V. spicata leaves with M. cinxia eggs were attractive, but P. lanceolata leaves with eggs were not (χ2 23 = 4.17, p < 0.05 for V. spicata ; χ2 21 = 0.73, p > 0.05 for P. lanceolata ; Figure 3 ). Further analysis of response to the plants with eggs was done using logistic regression. H. horticola age was non-significant. A test of the full model with the three remaining predictors (plant species, egg age, and date) against a constant-only model indicated that the predictors, as a set, reliably distinguished between H. horticola that walked toward the eggs and those that walked toward the control (χ2 (3, n = 46) = 221.12, p < 0.0001; Table 1 ). This analysis showed that the probability of H. horticola going to plants with eggs was significantly affected by plant species (z = 4.99; p < 0.0001; Table 1 ), with V.
spicata being more attractive, by egg age (z = 4.91; p < 0.0001; Table 1 ), with older eggs being more attractive, and by day of test (z = 3.70; p < 0.0001; Table 1 ). Response to host eggs and host plants in a field cage Six of the 22 parasitoids released in the first replicate were observed to have found eggs during the two days of observation. In the second replicate, five of the 22 parasitoids were observed to have found eggs during the one day of observation. Figure 4 shows the number of individual parasitoids observed to find eggs in each treatment. Together, there were 31 observations of H. horticola at eggs, with some individuals visiting the same or different plants multiple times. Excluding the plants without eggs (which were never observed to be visited by H. horticola ), each treatment was found by four different parasitoids on average, though the treatments differed significantly from one another (χ2 = 11.28, p = 0.02; Figure 4 ). Only one H. horticola discovered the eggs alone, and only one the eggs next to V. spicata , whereas the V. spicata with eggs on it was visited by nine of the 11 parasitoids. It is important to note that there were twice as many of the E treatments available to be found. The numbers of H. horticola visiting the P. lanceolata with eggs on it and eggs next to it were intermediate and not different from the mean. These results indicated that P. lanceolata was equally attractive with and without eggs, whereas V. spicata was significantly more attractive with eggs on it and unattractive with eggs next to it ( Figure 4 ).
Discussion In this study, a parasitoid's response to components of its foraging environment was observed in two contexts: in an olfactometer and in a large field cage. The larval parasitoid H. horticola , which searches for host eggs, was attracted to the odor of eggs in an olfactometer. In the field experiment, however, eggs alone were not sufficiently attractive to be found. In the olfactometer, H. horticola responded differently to the two host plant species of M. cinxia . Plantago lanceolata appeared to be innately attractive, and the presence of host eggs did not increase its attractiveness. In contrast, V. spicata became attractive only when host eggs were present. This pattern was reinforced by the results of the field cage experiment, where most H. horticola found eggs on V. spicata , few found eggs near V. spicata , and eggs on and near P. lanceolata were found by an intermediate number of H. horticola . The results show, for the first time, that a larval parasitoid used egg-induced plant volatiles to find hosts, and that M. cinxia eggs or the process of oviposition induced such volatiles. Attraction of eggs alone In the olfactometer, H. horticola responded to eggs that were newly laid and eggs that were near hatching, but not to eggs at an intermediate stage. Perhaps initially there is an odor on the eggs that is produced by the adult M. cinxia , such as wing scales, sex pheromones, or accessory gland secretion (e.g., Noldus et al. 1991 ; DeLury et al. 1999 ; Lian et al. 2007 ). This odor may subside after several days. Later, a second odor may be perceived by H. horticola , perhaps released from the host itself as the embryo develops into a larva. Why the 2 week-old eggs were apparently undetectable warrants further study.
Hyposoter horticola should benefit from finding eggs of any age, because finding hosts that are not yet ready for parasitism and monitoring them until they become susceptible increases the time it has to forage ( van Nouhuys and Ehrnsten 2004 ). Therefore, M. cinxia that produce non-odorous eggs should have a selective advantage. Older parasitoids that had been in the olfactometer several times responded to young eggs. Conversely, parasitoids that were young and less experienced were attracted to old eggs. The responses of parasitoids to foraging cues change with both wasp age and experience ( Vet and Dicke 1992 ; Papaj and Lewis 1993 ; Weisser 1994 ). Unfortunately, the design of the experiment made it impossible to distinguish between the two. Regardless of whether H. horticola changed behavior due to experience or physiological age, the pattern should be investigated further. Finally, though H. horticola was attracted to the odor of host eggs in the olfactometer, only one wasp found the eggs presented alone in the field enclosure. The eggs were old, and the parasitoids were young, which meant that the eggs would have been attractive in the olfactometer experiment. This suggests that the odor produced by the eggs (or left on the eggs by the mother) did not act as a long-range cue. It may have been too weak or non-volatile to be perceived over distance or in the more complex chemical environment ( Hilker and McNeil 2007 ). The egg odor may instead be useful at a small spatial scale, perhaps arresting H. horticola upon alighting on the plant or for locating the egg cluster within the plant. Attraction of plants alone Parasitoids can respond to plant-produced odors even in the absence of an herbivorous host (reviewed by Hilker and McNeil 2007 ). Chemical components of undamaged P. lanceolata and V. spicata were extracted in three solvents: water, hexane, and ethanol. Strongly polar compounds such as inorganic salts and ionic compounds dissolve only in very polar solvents such as water.
Strongly non-polar oils and waxes dissolve only in non-polar organic solvents such as hexane. Ethanol dissolves compounds of intermediate polarity and is a good solvent for most lipids and for ionic (inorganic reactives) and non-ionic compounds (organic substrates) ( Morrison and Boyd 1998 ). In the olfactometer, H. horticola responded only to the water extract of P. lanceolata , which must contain non-volatile or weakly volatile attractants. Somewhat surprisingly, no extract of V. spicata was attractive to H. horticola . The low-volatility, high-polarity compounds that can be extracted in water may be short range attractants or contact cues ( Jallow et al. 1999 ; Diongue et al. 2005 ; Heinz and Feeny 2005 ) produced by P. lanceolata and perceived by H. horticola . There is very little information in the literature demonstrating that compounds extracted using water as a solvent are attractive to herbivores or their parasitoids ( Tingle and Mitchell 1986 ; Brown and Anderson 1999 ; Peterson and Elsey 1995 ). In contrast, many volatile compounds that are attractive or deterrent to insects have been extracted using low or medium polarity solvents such as hexane and ethanol (e.g., Romeis et al. 1998 ; Brown and Anderson 1999 ; Degen et al. 1999 ; Jallow et al. 1999 ). Eggs and plants together H. horticola forage for eggs that are on plants. They would never experience the odor of eggs alone; the vast majority of host plants do not have eggs on them and presumably are not systematically searched by H. horticola . When offered eggs on leaves in the olfactometer, H. horticola responded positively only to the V. spicata /egg combination. The response could have been due to the odor of the eggs, but if that were the case, there should have been some response to the P. lanceolata /egg combination. Furthermore, the eggs ranged from 5 to 16 days old, putting most of them within the least attractive age. For all of the egg ages, the trend was the same.
A more plausible explanation for the difference in response is that the eggs induced V. spicata to produce an attractive volatile odor. This has been found in several multitrophic level systems (reviewed by Hilker and Meiners 2006 ; Fatouros et al. 2008a ), but never before for Lepidoptera. In the field cage experiment, H. horticola were not seen on plants lacking eggs on or near them (V and P treatments). This was not surprising: even if they were attracted to the plants, individuals landing on empty plants would have left quickly and gone unobserved. Given that the eggs arrest foraging H. horticola , the interpretable comparison is among the treatments including eggs. Host eggs next to P. lanceolata (P+E) and host eggs on P. lanceolata (PE) were found at equal frequency, suggesting that the plant was attractive, but that having eggs on the plant did not make it more attractive. P. lanceolata is known to produce volatiles ( Fons et al. 1998 ). Apparently, it produces an airborne odor that is attractive to H. horticola and is constitutive rather than induced. This volatile odor is probably not the short range or contact stimulant detected in the olfactometer, which must have had little or no volatility. V. spicata with eggs attached (VE) was frequently found by H. horticola , while only one individual found the eggs next to V. spicata (V+E), suggesting, as in the olfactometer experiment, that V. spicata changes in response to oviposition. Very little is known about the chemical defense of, and the signaling by, V. spicata . However, a second specialized parasitoid of M. cinxia , Cotesia melitaearum , is more attracted to volatiles emitted from herbivore-damaged V. spicata than from P. lanceolata ( van Nouhuys and Hanski 2004 ). Thus, for both specialized parasitoids, V. spicata is the more attractive host food plant. Correspondence of foraging cues with foraging success In the Åland islands, P.
lanceolata is present in all suitable habitat patches, as well as in lower densities in unsuitable grassy meadows and roadsides. Melitaea cinxia oviposits on only a tiny fraction of plant individuals. In contrast, V. spicata is present in a minority of habitat patches and is absent from all non-habitat. Where V. spicata is present, M. cinxia lays a proportionally higher fraction of egg clusters on it than on P. lanceolata ( Kuussaari et al. 2000 ). Given this unequal distribution of plants, one might expect the opposite pattern of response to host cues than what was found in this study. That is, H. horticola would ideally use direct egg-associated cues while searching P. lanceolata because there is a high potential for fruitless searching on empty plants, whereas V. spicata itself might be a relatively reliable cue. Based on the results of both the olfactometer and field experiments, the rate of parasitism of hosts on P. lanceolata should be lower than on V. spicata . However, under natural conditions, H. horticola finds virtually all of the egg clusters, and about a third of the larvae in each are parasitized, regardless of which plant species the eggs are laid upon ( van Nouhuys and Hanski 2002 ). In fact, most egg clusters are found by multiple females ( van Nouhuys and Ehrnsten 2004 ; van Nouhuys and Kaartinen 2008 ). This suggests that, though different cues are used for the two host plants, both are sufficient for finding host egg clusters. This contrasts strongly with the other specialist parasitoid, C. melitaearum , which experiences metapopulation level effects of differential response to cues associated with V. spicata and P. lanceolata ( van Nouhuys and Hanski 1999 ). There are two general conclusions from this study. One is methodological, cautioning against the extrapolation of experimental results to explain natural patterns. In particular, interpreting foraging success from observed response to individual foraging cues may be misleading. In this study, H.
horticola responded to the odor of host eggs in the olfactometer, but in the field cage, that odor alone was insufficient for finding host eggs. Also, H. horticola responded quite differently to hosts on one plant species over another in olfactometer experiments and the field cage experiment, but this difference is not reflected in patterns of parasitism that are observed in natural populations. The more conceptual conclusion is about the expectation of communication between plants and natural enemies of herbivores in multitrophic interactions. Of course, parasitoids of herbivores should use plant-associated cues to find their prey, and it is generally to a plant's advantage for this to occur ( Turlings et al. 1995 ; Kessler and Baldwin 2001 ; Tscharntke and Hawkins 2002 ). In this case, speculatively, V. spicata may invest more in defense than P. lanceolata because V. spicata experiences proportionally higher herbivory. Alternatively, if there is competition for resources among plants, and the less abundant V. spicata is a poor competitor, it may also invest more in defense. However, even among species that are quite specialized, such as the P. lanceolata , M. cinxia , H. horticola trophic chain, the coupling between a plant and a parasitoid can be weak. The weak coupling may be expected because the plant would not benefit directly from more reliable host-finding cues. Individual plants do not benefit from parasitism because the parasitized herbivore develops normally until the last instar, and the gregarious larvae consume more than the single plant used for oviposition ( Kuussaari et al. 2004 ). Furthermore, the plant does not need to invest in expensive signals because all egg masses are found ( van Nouhuys and Ehrnsten 2004 ). In this ecological and evolutionary context, and no doubt others, it is perhaps not surprising that parasitoid foraging cues differ among plant species, and that the natural pattern of parasitism is not predicted by H. 
horticola behavior in isolated experiments.
Parasitoids locate inconspicuous hosts in a heterogeneous habitat using plant volatiles, some of which are induced by the hosts. Hyposoter horticola Gravenhorst (Hymenoptera: Ichneumonidae) is a parasitoid of the Glanville fritillary butterfly Melitaea cinxia L. (Lepidoptera: Nymphalidae). Melitaea cinxia lays eggs in clusters on leaves of Plantago lanceolata L. (Lamiales: Plantaginaceae) and Veronica spicata L. (Lamiales: Plantaginaceae). The parasitoid oviposits into host larvae that have not yet hatched from the egg. Thus, though H. horticola is a parasitoid of Melitaea cinxia larvae, it must find host eggs on plants that have not been fed on by the larvae. Using a Y-tube olfactometer, the response of H. horticola to odors of Melitaea cinxia and extracts of the attacked plant species was tested. Three week-old eggs (near hatching) were attractive to young H. horticola , but one week-old eggs were attractive only to old or experienced H. horticola . Melitaea cinxia larvae were not attractive. A water extract of P. lanceolata was attractive, but ethanol or hexane extracts were not. None of the extracts of V. spicata were attractive. Leaves of V. spicata were attractive only if harboring eggs, but P. lanceolata leaves with eggs were not. Free-flying H. horticola in a large outdoor enclosure were presented with host and plant cues. As in the olfactometer, V. spicata was attractive only when eggs were on it, and P. lanceolata was somewhat attractive with or without eggs. This study shows for the first time that a parasitoid of larvae uses egg volatiles or oviposition-induced plant volatiles to find host larvae, and that Melitaea cinxia eggs or traces of oviposition induce the production of these volatiles by the plant. Based on the results, and given the natural distribution of the plants and M. cinxia eggs, parasitism of Melitaea cinxia eggs on P. lanceolata would be expected to be low.
Instead, under natural conditions, a fraction of the eggs in virtually all egg clusters are parasitized on both plant species. The mismatch between the experimental results and the natural pattern of host-parasitoid interactions is discussed in terms of the expected coupling of foraging cues with foraging success.
Acknowledgments We thank K. Fedrowitz, K. Lindqvist, and K. Torri for field and laboratory assistance, and Nåtö Biological station and the Ålands Naturbruksskola for facilities. This work was funded by The Academy of Finland Centre of Excellence Program grant numbers 44887 and 213457, and the Finland-Argentina Research exchange program Grant FI/A03/B01. Marcela Castelo and Juan Corley are scientific researchers from CONICET.
CC BY
J Insect Sci. 2010 Jun 4; 10:53
1. Introduction The standard management of patients with glioblastoma multiforme (GBM) includes surgical resection followed by radiotherapy (RT) plus concomitant and adjuvant temozolomide [ 1 ]. Temozolomide (TMZ) is an alkylating agent that is generally well tolerated, with fatigue, thromboembolic events, and lymphopenia as the most common side effects. Grade III or IV myelosuppression is a relatively uncommon side effect that is reported in 4% of patients [ 2 ]. We hereby describe a patient with GBM treated with surgical resection followed by RT plus concomitant and adjuvant temozolomide, who developed progressive, irreversible aplastic anemia.
3. Discussion Adverse effects of TMZ are typical of traditional cytotoxic chemotherapy. Myelosuppression is often dose limiting, with specific guidelines regarding dose reductions based on neutrophil and platelet counts. Myelosuppression tends to occur later in the treatment cycle; it is noncumulative and generally reversible within two weeks [ 3 ]. Grade 3 and 4 hematological toxicity was documented in only 7% of patients on concurrent therapy treated prospectively in the European Organization for Research and Treatment of Cancer (EORTC) and National Cancer Institute of Canada (NCIC) landmark study [ 2 ]. Gerber et al. described 52 patients with high-grade gliomas who developed severe myelosuppression after radiotherapy and concomitant or adjuvant temozolomide therapy [ 4 ]. Aplastic anemia is classified as inherited or acquired depending on etiology, and as nonsevere, severe, or very severe on the basis of the degree of pancytopenia. Aplastic anemia is thought to result from injury to or destruction of pluripotential stem cells affecting all subsequent blood cell populations. Erythrocytes, granulocytes, and platelets may decrease to dangerously low levels. The pathophysiology of aplastic anemia is now believed to be immune mediated, with active destruction of blood-forming cells. Drugs and chemicals are among the most frequent causes triggering the aberrant immune response [ 5 ]. In the English literature, Doyle et al. described three patients developing severe myelosuppression after low-dose TMZ and RT. Two of these patients, who developed aplastic anemia, had been given concurrent co-trimoxazole [ 6 ]. Villano et al. described a case of aplastic anemia after the 4th cycle of adjuvant TMZ in a patient on a concomitant anticonvulsant [ 7 ]. In the case described here, the patient was also receiving phenytoin, which typically causes agranulocytosis rather than pancytopenia. Phenytoin was stopped, but the pancytopenia did not recover.
It is important to be aware of the potentially life-threatening toxicity of every chemotherapeutic agent, including temozolomide. Temozolomide should be prescribed with recognition of the potential side effects reported here.
Academic Editor: Thomas R. Chauncey Radiotherapy and concomitant/adjuvant therapy with temozolomide are a common treatment regimen for children and adults with high-grade glioma. Although temozolomide is generally safe, it can rarely cause life-threatening complications. Here we report a case of a 31-year-old female patient who underwent surgical resection followed by radiotherapy plus concomitant temozolomide. She developed pancytopenia after adjuvant treatment with temozolomide. A bone marrow aspiration and biopsy showed hypocellularity with very few erythroid and myeloid cells, consistent with aplastic anemia. In the English literature, aplastic anemia due to temozolomide is extremely rare.
2. Case Report A 31-year-old female was admitted to the neurosurgery department with fainting and headache. MRI of the brain revealed a mass in the left frontal lobe. She underwent a left frontal craniotomy and complete excision revealing a glioblastoma multiforme (GBM). Later, she was started on antiepileptic medication, phenytoin, and referred for adjuvant treatment. She started concomitant brain RT with TMZ, administered at 150 mg/m2 for 5 days every 4 weeks. After three cycles of adjuvant TMZ therapy, she was admitted to the medical oncology department with fever, and a complete blood count showed pancytopenia, with a hemoglobin level of 6.3 g/dL (reference range 13.6–17.2), platelet count of 19,000/mm3 (156,000–373,000), and total leucocyte count of 100/mm3. TMZ was stopped. She was treated with blood products, growth factors, and antibiotics. Phenytoin was changed to carbamazepine. Her medical history revealed a fever continuing for 30 days with productive cough. Chest radiograph and CT scan revealed an infiltration at the superior segment of the left lower lobe with pleural effusion. Aspergillus sp. grew in bronchial washing and sputum cultures, cytology revealed no malignancy, and cultures were negative for tuberculosis. Forty days later, blood tests showed a hemoglobin level of 8.3 g/dL (13.6–17.2), platelet count of 16,000/mm3 (156,000–373,000), and total leucocyte count of 50/mm3. Clinical and radiographic findings had not improved. Bone marrow aspiration and biopsy were subsequently performed. The biopsy was severely hypocellular with very few erythroid and myeloid cells at any stage of differentiation and hardly any identifiable progenitor cells, suggesting features of aplastic anemia ( Figure 1 ). Cytogenetic analysis of marrow was not performed. She did not receive co-trimoxazole prophylaxis, and her vitamin B12 and folic acid levels were normal. The patient had a rapidly deteriorating course and died due to septicemia.
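For context, doses such as the 150 mg/m2 mentioned above are scaled by body surface area (BSA). A minimal sketch of that calculation, assuming the commonly used Mosteller formula; the height and weight shown are hypothetical, since the report does not give the patient's measurements:

```python
from math import sqrt

def mosteller_bsa(height_cm, weight_kg):
    """Body surface area (m^2) by the Mosteller formula:
    BSA = sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600.0)

def tmz_dose(bsa_m2, dose_per_m2=150.0):
    """Total daily dose (mg) for a BSA-scaled regimen."""
    return dose_per_m2 * bsa_m2

# Hypothetical patient: 165 cm, 60 kg.
bsa = mosteller_bsa(165, 60)   # ~1.66 m^2
print(round(tmz_dose(bsa)))    # ~249 mg/day
```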
Case Rep Med. 2010 Dec 1; 2010:975039
Academic Editor: Robert S. Dawe We report a case of accidental cutaneous burns caused by a salbutamol metered dose inhaler. A 9-year-old boy underwent dental extraction at a children's hospital and was incidentally noted to have burn injuries on the dorsum of both hands. On questioning, the boy revealed that a few days earlier his 14-year-old brother, who is an asthmatic, had playfully sprayed his salbutamol metered dose inhaler on the back of both his hands with the inhaler's mouthpiece in direct contact with the patient's skin. On examination, there was a rectangular area of erythema with superficial peeling on the dorsum of both hands, the dimensions of which exactly matched those of the inhaler's mouthpiece. It is possible that the injury could have been a chemical burn from the pharmaceutical/preservative/propellant aerosol, the physical effect of severe cooling of the skin, the mechanical abrasive effect of the aerosol blasts, or a combination of some or all of the above mechanisms. This case highlights the importance of informing children and parents of the potentially hazardous consequences of misusing a metered dose inhaler.
A 9-year-old boy underwent dental extraction at a children's hospital and was incidentally noted to have burn injuries on the dorsum of both hands. The pediatric team was contacted to exclude nonaccidental injury. On questioning, the boy revealed that a few days earlier his 14-year-old brother, who is an asthmatic, had playfully sprayed his salbutamol metered dose inhaler on the back of both his hands with the inhaler's mouthpiece in direct contact with the patient's skin. The boy did not feel anything unusual immediately afterwards but noted redness in the sprayed areas the next morning. As the patient and his brother feared punishment from their mother, they did not tell anyone about the incident. On examination, there was a rectangular area of erythema with superficial peeling on the dorsum of both hands, the dimensions of which exactly matched those of the inhaler's mouthpiece (Figures 1 and 2 ). The boy did not have any other injuries, and he was interacting appropriately with his mother. Neither the child nor his family was known to the social services, and there were no previous child protection issues. Once we were satisfied that this was an accidental injury, the pharmaceutical company which manufactured the inhaler was contacted. The pharmaceutical company was not aware of salbutamol causing cutaneous burn injuries. On reviewing the literature, a case of a 22-year-old asthmatic with a history of mental illness presenting with self-inflicted cutaneous burn injuries due to a salbutamol inhaler has been reported [ 1 ]. A 14-year-old girl with self-induced areas of hypo- and hyperpigmentation on her forearm as a result of applying ten blasts of an asthmatic aerosol inhaler directly to her skin has also been reported [ 2 ]. Similar salbutamol inhaler-induced burn injuries in children have been reported by Patel and Potter [ 3 ] and Arun et al. [ 4 ].
It is possible that the injury could have been a chemical burn from the pharmaceutical/preservative/propellant aerosol, due to the physical effect of severe cooling of the skin, mechanical abrasive effect of the aerosol blasts, or a combination of some or all the above mechanisms [ 1 ]. It is important that children and parents be informed of the potentially hazardous consequences of misusing a metered dose inhaler.
Conflict of Interests The authors declare that there is no conflict of interests.
Case Rep Med. 2010 Dec 27; 2010:201809
Introduction Termites can be roughly grouped into those species that nest within their food, usually wood, and those that nest elsewhere and must leave their nest in order to forage for food. Of the latter type, nests may be arboreal or subterranean, centrally located or dispersed into small, connected units. Most termites shun the open air, and travel to and from the foraging area by way of subterranean tunnels or covered galleries. Many species also cover the foraged material with sheet galleries before dining. Among ground-nesting termites, nests may be hidden below ground, or they may be conspicuous features of the landscape, such as the mounds of the southern African species of Macrotermes or Trinervitermes . Given a central nest, the need to forage for food and an aversion toward open air, it is obvious that many termites must create subterranean foraging tunnel systems. Such systems, however, have rarely been studied, and are usually hardly mentioned (if at all) in reviews of termite biology. Even an authoritative treatment, such as Noirot's ( 1970 ) review of the nests of termites, gives short shrift to how termites travel from their nests to their foraging areas. Typically, it is assumed that the termites travel in subterranean foraging tunnels (e.g. Sands 1961 ), and indeed, the few existing studies of subterranean foraging tunnels have revealed tunnel systems of remarkable size and scale ( Howse 1970 ; reviewed by Lee and Wood 1971 ). Most mound-building species exit their nests through subterranean foraging tunnels that run a few cm below the surface. In some species, the tunnels are short, and the termites travel some distance on the ground surface, but in others, the tunnels may extend 25 to 30 m (or even 60 m) from the mound. For example, the Australian termites Coptotermes lacteus, C. brunneus, C. 
acinaciformis and Nasutitermes exitiosus constructed systems with 9 to 30 tunnels emanating from the mound and extending 25 to 30 m to the dead wood on which the termites were feeding ( Ratcliffe and Greaves 1940 ; Hill 1942 ; Greaves 1962 ). In C. lacteus , tunnels were more or less radial, with few cross connections, but with shafts to deeper soil. In N. exitiosus , the radial tunnels were cross-connected. Hill ( 1925 ) noted subterranean passages with flattened lumena thickly floored with “rejectamenta” radiating outward from a nest of the Australian Mastotermes darwiniensis , but he did not trace these passages far. A particularly thorough study is that of Darlington ( 1982 ), in which the underground foraging passages of Macrotermes michaelseni were exposed and quantified. Many termites do not build mounds that show above ground, but construct entirely subterranean nests, with tunnels to the surface. The African harvester termite, Hodotermes mossambicus , is well studied because of occasional subterranean encounters during the digging of trenches for construction ( Hartwig 1963 , 1965 ; Coaton and Sheasby 1975 ). These encounters revealed that nests are located an average of about 1.4 m below ground, but can be as shallow as a few cm or as deep as 6.7 m. Large passages connect these subterranean nests to each other, and smaller passages give the termites access to the surface where they dump excavated soil and forage for grass. Foraged grass is first placed into small, superficial chambers for later transport to the nests and consumption. None of the reports on subterranean gallery systems describes architectural details of the tunnels themselves or how they are constructed. This paper reveals the intricate and subtle architecture of the foraging tunnels of the Namibian harvester termite, Baucaliotermes hainesi (Fuller) (Termitidae: Nasutitermitinae), and describes how this complex system probably serves the foraging needs of the termites.
Like other harvester termites, B. hainesi foragers cut pieces of grass on the ground surface, and carry these back to their nest. The range of this species is limited to southern Namibia and the northwestern Cape Province of South Africa (Coaton and Sheasby 1973).
Materials and Methods The study site The study site was located at latitude -24.9702, longitude 15.9323 (according to Google Earth) in the NamibRand Nature Reserve, a private reserve of about 180,000 ha. The soil was red sand largely stabilized by the grasses Stipagrostis uniplumis (Licht) De Winter and S. giessii Kers (Poales: Poaceae), with circular, bare areas 5 to 15 m in diameter termed “fairy circles” ( van Rooyen et al. 2004 ), and abundant animal trails crossing it in multiple directions. The site sloped gently from about 1100 m elevation at the base of Jagkop mountain to about 940 m just short of the Bushman Hills. Our two excavations were at approximately 1085 to 1090 m elevation. This area has an arid climate in which rainfall averages between 50 and 150 mm per annum but is highly variable from year to year. Tunnel excavation and mapping Nests of B. hainesi were regularly visible at the surface as small mounds of cemented material 10 to 15 cm high. All excavation work was completed between October 22 and November 3, 2007. Tunnels were initially exposed by trenching around the nest to locate tunnels, and excavated outward from there. The looseness of the dry sand, combined with the relative hardness of the cemented sand tunnels, facilitated exposure of the tunnels. The sand over the tunnels was loosened with a soft hand broom, and the loosened sand was blown away with a gasoline-powered lawn blower (Husqvarna Model 356 BTx) ( Video 1 , 2 , available online). This process produced a shallow trench 10–15 cm deep with the mostly intact tunnels in the bottom. Branches and intersections were sometimes followed, but, for many branches, only the initial few cm were exposed, leaving an unknown but substantial fraction of the entire tunnel system unexposed. Tunnels that were in use were distinguished from abandoned tunnels because the former remained intact upon excavation and/or contained live termites when broken ( Figure 1 ).
Abandoned tunnels tended to break, and were often filled with sand. In this manner, large parts of the foraging tunnel systems of two focal nests were exposed, one located at (lat, long) -24.96960, 15.93284 and the second at -24.96981, 15.93403. The exposed tunnel systems were mapped by making a series of overlapping digital photographs (with a scale) from a uniform height (∼ 1 m), like aerial photographs, and then combining these into a photomosaic. A scale map was then made from each photomosaic. Nest census Before exposing the second tunnel system, the nest was excavated for census. The nest was carefully broken into pieces, beginning at the top, and all live termites, as well as grass pieces, were collected by aspiration and preserved in alcohol for later counting. The dissection and collection took two days. Termites from the mound and each quarter of the nest from the top downwards were preserved separately. A sample of 100 each of workers, soldiers and neotenic supplementary reproductives (there were no mature alates in the nest) was killed by freezing and air-dried for later determination of dry weight. Counts were carried out in the laboratory at Florida State University. The alcohol was drained off, and the total weight of (wet) termites from each nest portion was determined. Haphazard subsamples were then taken, weighed, and the termites of each type counted. Multiplying these counts by the factor (total weight/sample weight) gave estimates for each nest quarter, and the sum of these gave the total for the nest.
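The subsample-scaling step described above (subsample count × total weight / sample weight, summed over nest quarters) can be sketched as follows; all weights and counts here are hypothetical placeholders for illustration, not data from the study:

```python
# Estimate termite counts per nest quarter by scaling subsample counts.
# Scaling factor = (total wet weight of quarter) / (subsample weight).
# All numbers below are hypothetical placeholders, not the study's data.

def estimate_quarter(total_weight_g, sample_weight_g, sample_counts):
    """Scale the subsample counts of each caste up to the whole quarter."""
    factor = total_weight_g / sample_weight_g
    return {caste: round(n * factor) for caste, n in sample_counts.items()}

quarters = [
    # (total wet weight, subsample weight, counts by caste in subsample)
    (120.0, 10.0, {"workers": 650, "soldiers": 80, "reproductives": 55}),
    (340.0, 12.0, {"workers": 700, "soldiers": 95, "reproductives": 60}),
]

nest_total = {}
for total_w, sample_w, counts in quarters:
    for caste, n in estimate_quarter(total_w, sample_w, counts).items():
        nest_total[caste] = nest_total.get(caste, 0) + n

print(nest_total)  # per-caste estimates summed over quarters
```

The same scaling applies to each of the four quarters plus the mound; the nest total is simply the sum of the per-portion estimates.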
Results The brushing and blowing removed the semi-aggregated sand overburden to expose tunnels whose walls retained their integrity because they were constructed of cemented sand ( Figure 2 ). The tunnels are thus not simply hollows excavated in the sand, but have walls reinforced with what can be seen as “termite concrete” (whose composition is unknown). Although the tunnels broke upon rough handling, with care, sections could also be removed for closer inspection, transport and photography. Tunnel architecture Tunnel architecture was complex. In cross section, most tunnels showed a raised central portion with deep pockets along both sides ( Figure 3 ). Sections of tunnels freed of loose sand were rarely simple tubes, but showed many bulges and bumps on their undersides ( Figure 4 ). Careful dissection of tunnels and bumps showed that the raised, central portion was a smooth roadway that ran the length of all tunnels, probably serving as the main travel path for the termites in the tunnels. Along both sides of this roadway were pockets of varying depth and geometry ( Figure 5 ). Many of these contained pieces of grass harvested from the surface by foragers, so it is reasonable to presume that the pockets serve as temporary depots for harvested grass waiting to be transported nestward, possibly by a different group of termites than the group that harvested the grass. Figure 6 shows a view of the underside of approximately 1 m of tunnel. Termites in the tunnels could gain access to the surface through vertical risers, 5 to 15 cm in height, that could be opened to the surface ( Figures 4 and 7 ). Riser openings were usually closed during the day, as this termite species forages mostly at night. Risers were very fragile, and it required great care during excavation to keep them intact. In most images, the former location of risers is seen as a double opening because two upward legs of the tunnel broke below their point of junction.
The distance between risers averaged 50 to 75 cm among tunnels (SD 14 to 40 cm), suggesting that the termites rarely needed to travel more than one half to one meter on the surface. The tunnels did not run at a uniform depth below the surface, but swooped up and down between the risers, with the high points at the riser junctions and the low points about midway between risers ( Figure 8 ). The internal runway therefore had a “roller coaster” or wave geometry. Measured from the riser-tunnel connection to the lowest upper tunnel surface between risers, the tunnel dip averaged 5 to 8 cm among tunnels, with a standard deviation of 1.5 to 2 cm. One dip was 21 cm, but the significance of this large deviation was unclear. Careful wetting of the upper surface of exposed sections of tunnel allowed for removal of the tunnel roof to expose the depot and riser structure of two approximately two-meter-long sections ( Figure 9 ). The upper image shows the tunnel before removal of the roof, and the lower, after. Depots can be seen along the entire length on both sides and were most likely to contain grass adjacent to risers. One section also contained a tunnel that descended to greater depth. Tunnels frequently intersected or branched, sometimes in rather complex configurations. Cut-offs that shortened travel distance at more or less perpendicular intersections were common ( Figure 10 ). Near the nests, tunnel intersections tended to form rectilinear grids ( Figure 11 ). Occasionally, tunnels crossed without joining, a termite version of a fly-over. The tunnel systems Over the course of several days, large parts of the tunnel systems of the two focal nests were exposed. The total length of tunnels exposed was 76 m within an area of roughly 170 m² in the first excavation and 110 m of tunnel in an area of about 300 m² in the second excavation. Figures 12 and 13 show an approximately 120° panorama of each of these and reveal the scale of the termite enterprise.
The exposed foraging tunnels lie in the bottom of the trenches visible in the images. Maps created from the photomosaics of these excavations are shown in Figures 14 and 15 . These reveal several key features: (1) the tunnels tend generally to radiate outward from the nest; (2) the many unexcavated side-branches suggest that the area is actually underlain by a dense network of intersecting tunnels, with no area more than a meter or so from a tunnel; (3) the frequent placement of risers to the surface means that the foraging area of the termites is more or less saturated with access points, and that the termites need travel only short distances on the ground surface; (4) the tunnels probably extend outward much farther than was excavated (there was no evidence that the tunnels ended where we stopped excavating); (5) the tunnels in the first excavation ( Figure 14 ) connected two live and one abandoned nest, suggesting that colonies of this termite may occupy more than one nest, and that nests are sometimes abandoned; (6) the connected nests also suggest that the entire suitable habitat may be underlain by a network of foraging tunnels. Access to deeper soil In the second excavation, two structures looked like small subsidiary nests located along the tunnel system. Dissection showed them not to be nests, but rather large, vertical tunnels that seemed to descend to deeper soil ( Figure 16 ). The tunnels were not excavated below about 0.5 m, but there was no sign that their direction changed. The second nest mound that was excavated turned out to be an abandoned nest that was being used as a vertical tunnel to deeper soil ( Figure 17 ). In this case, the termites had also constructed a narrower tunnel to deep soil next to the abandoned nest. Most of the chambers in this abandoned nest had been filled with sand, and only the central core was being used as a vertical tunnel, along with the purpose-built tunnel next to the former nest.
Dissection and census of a nest Before exposing the second tunnel system, the focal nest ( Figure 18 ) was excavated for dissection and census of the contained termites. The nest was constructed of a hard outer “carapace” of cemented sand and filled chambers, and an interior living space of sweeping surfaces and arches of a dark, smooth material (stercoral carton), with fairly constant spacing between surfaces ( Figure 19 ). The nest was home to about 45,000 termites, of which about 6,000 (13%) were immature (very small immatures and eggs were not counted), 32,300 (71%; 36 g, dry) were workers and 4,100 (9.1%; 3 g, dry) were soldiers. In addition, there were about 2,800 (6.2%; 5 g, dry) immature reproductives with wing buds. No primary reproductives and no “royal cell” were found, suggesting this may have been a subsidiary nest (or calie). The termites were not evenly distributed within the nest. The above-ground mound contained very few termites. About 63% of the termites were found in the second and third quarters of the nest, that is, the center or core, with only about 3% in the top quarter and 13% in the bottom quarter. However, because it took two days to dissect the nest, this distribution does not necessarily represent the natural distribution. A great deal of grass was found in the nest ( Figure 20 ), but these grass clippings were not evenly distributed. The top quarter (0–10 cm) contained 3.6 g of grass, the next 10.5 g, the third 0.75 g and the bottom almost none. This distribution is probably the result of the depth at which the nest connects to the foraging tunnels, about 10–15 cm below ground, combined with the consumption of the grass as it is moved deeper toward the core of the nest where the bulk of the termites were located. The total dry weight of grass in the nest was about 15 g, probably a small fraction of what was still in the tunnel system depots. Promirotermes sp.
A species of smaller termite, Promirotermes sp., was found co-nesting with B. hainesi . Several chambers containing workers, soldiers and reproductives were located in the carapace surrounding the main B. hainesi nest. The relationship of this species to B. hainesi , the “host,” is unknown.
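As a quick consistency check, the caste percentages in the nest census above follow directly from the reported counts (the published percentages were evidently computed against the rounded total of about 45,000, so they differ slightly from the values below):

```python
# Recompute the caste percentages from the census counts reported in the text.
counts = {
    "immatures": 6000,
    "workers": 32300,
    "soldiers": 4100,
    "immature reproductives": 2800,
}

total = sum(counts.values())  # 45,200, reported as "about 45,000"
percentages = {caste: round(100 * n / total, 1) for caste, n in counts.items()}
print(total, percentages)
# workers come out near 71% and soldiers near 9%, matching the reported figures
```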
Discussion Like many other species of termites, B. hainesi operates on an impressive scale. Workers from each nest travel to and fro in the foraging tunnel system, harvesting grass from at least several hundred square meters. The scale of this endeavor is similar to that of several other species of mound-building termites, including C. lacteus, C. brunneus, C. acinaciformis, N. exitiosus ( Ratcliffe and Greaves 1940 ; Hill 1942 ; Greaves 1962 ) and M. michaelsoni ( Darlington 1982 ). Lee and Wood ( 1971 ) suggest that the underground foraging networks of subterranean termites are probably of great ecological importance. No one who has been to Africa or Australia could argue with that claim. Previous reports on subterranean foraging tunnels gave few architectural details of their construction. The only two exceptions are Greaves ( 1962 ), who reported that the tunnels of C. acinaciformis , a wood-feeding species, were made of cemented soil with a simple, flattened lumen in which the termites traveled, and Darlington ( 1982 ), who described part of the foraging passage system of the fungus-gardening M. michaelsoni in great quantitative detail. The aeolian sands of the Namib Desert were ideal for exposing architectural details because the surrounding sand could be loosened with a soft brush and blown away, but the cemented sand that formed the tunnels remained intact, revealing subtle, complex and functional architecture. Such discrimination would have been difficult in more compacted or fine-grained soils. Ironically, one of the clearest recent exposures of a subterranean termite tunnel system involved fossil termite nests dating to the upper Miocene and Pliocene epochs (3–7 million years ago) in Chad ( Duringer et al. 2007 ). These fossils were attributed to an ancestral fungus-gardening Macrotermitinae, and they consisted of many small globular nests connected by rectilinear side tunnels to a straight main tunnel up to tens of meters long.
The entire network of tunnels and chambers lay in a single plane, with no evidence of vertical connections. In this regard, the layout seems somewhat similar to the nest arenas of the fungus-gardening Odontotermes fulleri in which all chambers were located less than 30 cm below the surface ( Darlington 2007 ). Depots for foraged grass have been reported for another harvester termite, the widespread Hodotermes mossambicus , but the depots were small chambers around the nest perimeter or small chambers near the surface, rather than being part of the foraging tunnels ( Hartwig 1963 , 1965 ; Coaton and Sheasby 1975 ). Probably, this cache system evolved independently, as these species belong to different subfamilies and the depots have different structures. However, Darlington ( 1982 ) described and quantified depots along the foraging tunnels of M. michaelsoni and the Brazilian Syntermes molestus . The depots of M. michaelsoni were especially similar to those of B. hainesi , and Darlington speculates that termites foraging on dispersed food such as grass or litter ought to evolve tunnel systems with caches because foraging must occur in episodes. She calculates that the volume of caches underlying an area was similar to the volume of forage gathered in that area in one night. The presence of caches of grass pieces in the depots of B. hainesi strongly suggests that the workers that harvest the grass on the surface are distinct from the tunnel transport workers, and that the system is to some degree a “bucket-brigade,” a system of greater efficiency than one in which each individual harvests and transports each piece of grass all the way to the nest. Leafcutter ants also use caching and “bucket-brigade” transport for leaf pieces ( Hart and Ratnieks 2001 ; Anderson et al. 2002 ), thus partitioning the task of foraging into cutting, caching and multiple transporting stages.
Caching was more likely when traffic was heavy or bottlenecked, and incurred the cost of mismatching the leaf piece with the size of the subsequent transporting worker, thus slowing transport. Anderson et al. ( 2002 ) used simulations to test for optimality in such transport systems. It is likely that B. hainesi also tends to cache more grass pieces when the cutting rate exceeds the transport rate. The partitioning of foraging in this manner unlinks harvesting, a mostly nocturnal task which carries the risk of exposure to desiccation and predation, from transport, which is relatively safe within the tunnel system and can probably proceed more or less around the clock, as it does in M. michaelsoni . The obvious advantage of such a system may underlie the reason it has evolved in such diverse taxa as ants and termites (and humans). The results leave the spatial extent and size of the colony of this termite undetermined. We found no primary reproductives in the dissected nest and no structure that might be a “royal cell.” This, combined with the fact that at least two live nests only about 6 meters apart were connected by tunnels, suggests that a colony may consist of multiple nests, some possibly deep in the ground, as suggested by the existence of tunnels-to-depth. Fuller ( 1915 , as cited in Lee and Wood 1971 ) reported that adjacent mounds of Trinervitermes trinervoides were interconnected through subterranean tunnels. On the other hand, Darlington ( 1982 ) found the remains of dead soldiers and workers in the contact zone between foraging tunnel systems of neighboring mounds of M. michaelsoni , suggesting the occurrence of territorial battles. Ebeling and Pence ( 1957 ) described how Reticulitermes hesperus use fine soil particles mixed with saliva to line their tunnels. In light of the extreme aridity of the Namib Desert, and the fact that nest and tunnel construction require water, it seems inescapable that the termites have access to moist soil, probably at great depth.
When first brought to the surface and dumped, soil excavated by H. mossambicus in the study area was damp (personal observation), yet no trace of dampness was detectable even in excavations over 2 m deep. Yakushev ( 1968 , as cited in Lee and Wood 1971 ) reports that some termite species may make tunnels to moisture as deep as 70 m. Photographs included in Hill's ( 1942 ) treatise on Australian termites show that Coptotermes acinaciformis and C. lacteus , both mound-builders, construct nests with a very thick “carapace,” much like B. hainesi . This feature is lacking in the other species examined in Hill's book. In contrast to the nests of B. hainesi , the nests of subterranean-nesting termites are often surrounded by an empty space rather than a “carapace” ( Noirot 1970 ). Perhaps the difference lies in the relative instability of the dry sands in which B. hainesi nests. The census estimate of the nest population is surely an underestimate of the actual population, for it is likely that a substantial fraction of the termites were in the foraging tunnels at the time of collection. Even after removal of the nest, abundant termites were found in the tunnels during several days of excavation. Whether their home was in the collected nest or in another, possibly a deeper nest, could not be determined. Considering the density of foraging access points as well as the biomass of termites and the amount of grass pieces found in the nest and tunnels, it is likely that B. hainesi foraging has a considerable impact on the sparse grasslands of the eastern Namib Desert. This is more likely because conditions conducive to the growth of grasses may occur less than annually, and then only for short periods. Estimates for grass consumption in a “saturated” population of H. mossambicus in a more lush habitat (Zululand) ranged up to 1 to 3 metric tons per ha, practically the total yield of hay, but other estimates were much lower ( Coaton and Sheasby 1975 ). There are many reports of H.
mossambicus creating bare spots through their harvesting activity. It has been suggested that this termite is the cause of the fairy circles mentioned in the Materials and Methods ( Becker 2007 ), but this claim is contested ( van Rooyen et al. 2004 ). Darlington ( 1982 ) estimated the nightly forage collected by M. michaelsoni to be approximately 0.6 to 1.1 kg. Finally, Darlington ( 1982 ) showed the surface access points in M. michaelsoni tunnels to be dense enough that termites need rarely travel more than 10 cm from an opening to forage. The actual density of access points in B. hainesi is unknown, but is clearly higher than indicated in Figures 14 and 15 , because many of the cross-connecting passages were left unexcavated. Likewise, Darlington ( 1982 ) estimated that the nest of M. michaelsoni has a total of 6 km of permanent foraging tunnels, but in view of the unexcavated cross-passages and the difficulty of placing colony boundaries for B. hainesi , a corresponding estimate is undetermined for this study. B. hainesi colonies are much smaller than those of M. michaelsoni , yet their work is still impressive. It is likely that similar tunnel-and-depot systems are characteristic of many harvesting termites.
Associate Editor: Robert Jeanne was editor of this paper The harvester termite, Baucaliotermes hainesi (Fuller) (Termitidae: Nasutitermitinae), is endemic to southern Namibia, where it collects and eats dry grass. At the eastern, landward edge of the Namib Desert, the nests of these termites are sometimes visible above the ground surface, and extend at least 60 cm below ground. The termites gain access to foraging areas through underground foraging tunnels that emanate from the nest. The looseness of the desert sand, combined with the hardness of the cemented sand tunnels, allowed the use of a gasoline-powered blower and soft brushes to expose tunnels lying 5 to 15 cm below the surface. The tunnels form a complex system that radiates at least 10 to 15 m from the nest with cross-connections between major tunnels. At 50 to 75 cm intervals, the tunnels are connected to the surface by vertical risers that can be opened to gain foraging access to the surrounding area. Foraging termites rarely need to travel more than a meter on the ground surface. The tunnels swoop up and down, forming high points at riser locations, and they have a complex architecture. In the center runs a smooth, raised walkway along which termites travel, and along the sides lie pockets that act as depots where foragers deposit grass pieces harvested from the surface. Presumably, these pieces are transported to the nest by a second group of termites. There are also several structures that seem to act as vertical highways to greater depths, possibly even to moist soil. A census of a single nest revealed about 45,000 termites, of which 71% were workers, 9% soldiers and 6% neotenic supplementary reproductives. The nest consisted of a hard outer “carapace” of cemented sand, with a central living space of smooth, sweeping arches and surfaces. A second species of termite, Promirotermes sp., nested in the outer carapace.
Acknowledgements We are grateful to the NamibRand Nature Reserve for allowing us to do this research in the reserve. It was heaven on earth. We are particularly grateful to Danica Shaw and Nils Odendaal for their generous and indispensable help in making arrangements, finding accommodations and generally getting us started and keeping us going. I am greatly indebted to Vivienne Uys of the South African Biosystematics Division, Plant Protection Research Institute. She not only identified termites for me, but provided me with several hard-to-find termite references. We thank Paul Komagab for helping with the daily chores of the research and excavations. We gained invaluable insights into the ecology and history of the area from Jürgen and Dorothe Klein, and from Albi Brückner.
CC BY
J Insect Sci. 2010 Jun 14; 10:65
PMC3014813
20569131
Introduction Acoustic communication in adult Lepidoptera has been broadly studied and serves a variety of social and defensive functions ( Minet & Surlykke 2003 ). However, research on acoustic communication in larval Lepidoptera is currently limited. Caterpillars rely on communication during various stages of their life cycles for foraging, defense, aggregation, shelter building, or resource competition ( Costa & Pierce 1997 ; Fitzgerald & Costa 1999 ; Cocroft 2001 ; Costa 2006 ), but little is known about the mechanisms used to broadcast and receive signals ( Costa & Pierce 1997 ). Vision seems unlikely to be an important sensory modality because caterpillars have simple eyes, capable of discriminating only crude images ( Warrant et al. 2003 ). Consequently, most studied caterpillar communication systems focus on chemical and tactile modalities, where such cues are used mainly in species traveling in processions ( Fitzgerald 1995 ; Ruf et al. 2001 ; Fitzgerald & Pescador-Rubio 2002 ). There is increasing evidence that larval Lepidoptera employ an acoustic sense for communication, primarily in the form of vibration. Although anecdotal reports (e.g. Federley 1905 ; Dumortier 1963 ; Hunter 1987 ) suggest that the phenomenon is widespread, experimental evidence for vibrational communication in caterpillars is limited. Lycaenidae and Riodinae butterfly larvae use vibrations to maintain mutualistic relationships with ants ( DeVries 1991 ; Travassos & Pierce 2000 ). Vibrations are also employed in territorial encounters with conspecifics in four species of moth larvae ( Sparganothis pilleriana, Russ 1969 ; Drepana arcuata, Yack et al. 2001 ; Caloptilia serotinella, Fletcher et al. 2006 ; and Drepana bilineata, Bowen et al. 2008 ). Further research characterizing and testing the function of vibrational signaling in caterpillars is necessary for understanding its ubiquity and role in different families of Lepidoptera. 
Drepaninae, the largest subfamily of moths belonging to the Drepanidae ( Minet & Scoble 1999 ), offers a unique opportunity for studying the function and evolution of vibrational communication in caterpillars. Although vibrational signaling has only formally been described in two species to date ( D. arcuata and D. bilineata ), there is abundant suggestive evidence ( Dyar 1894 ; Federley 1905 ; Nakajima 1970 , 1972 ; Bryner 1999 ; Sen & Lin 2002 ; I. Hassenfuss, personal communication) that it is common and highly variable in the Drepaninae. Variation exists in the signal-producing structures, types of signals produced and territorial behaviour. Both species experimentally studied to date employ vibrational communication to resolve territorial disputes with conspecifics over silk leaf shelters ( Yack et al. 2001 ) or leaf territories ( Bowen et al. 2008 ). Both possess specialized sound-producing structures, a pair of modified setae (anal oars) on their terminal abdominal segment, to produce vibrational signals. There is evidence that many other Drepaninae species possess anal oars, which can be highly variable in both shape and size across species ( Fig. 1A ; Nakajima 1970 , 1972 ). Other species lack anal oars altogether ( Fig. 1B ) and may completely lack vibrational signals. Signaling in this second morphological form has yet to be experimentally analyzed. The goal of this study is to examine one of these species, Oreta rosea, a sympatric congener of D. arcuata and D. bilineata that lacks anal oars ( Fig. 1A ). To the authors' knowledge, there are no reports to date on territorial behaviour or vibrational signal production in this species. Since larvae of O. rosea live solitarily as late-instars (see Results ), we hypothesize that they will exhibit territorial behaviours. 
If they are territorial, then: ( i ) residents should maintain exclusive use of their territory, ( ii ) residents should defend their territories against conspecifics, and ( iii ) intruders should only rarely displace residents. The aim of this study is to test for territorial behaviour and vibrational signaling, and if present, compare it with previously studied species. Life-history traits relevant to territoriality and spacing will also be compared to provide insight into some of the factors underlying the evolution of signaling in the Drepaninae.
Materials and Methods Animals Oreta rosea Walker 1855 (Lepidoptera: Drepanidae) moths were collected from the wild at ultraviolet collecting lights between May and August 2007 in Dunrobin, Ontario, Canada. Females oviposited on cuttings of viburnum ( Viburnum lentago ) and larvae were reared indoors on V. lentago or V. opulus under an LD 18:6 photoperiod at 21–26°C. Early- (first and second) and late- (third to fifth) instar larvae were used for life-history and behavioural observations. Late-instars were further used for morphological analysis of sound-producing structures, laser vibrometry recordings and behavioural trials. General behaviour and life-history Behavioural observations relevant to communication and spacing were recorded daily. These included the position on the leaf, presence of silk on the leaf, mode of feeding, and interactions between individuals. Photographs of eggs, larvae and adults were obtained with an Olympus dissection microscope (SZX12; www.olympus.com ) equipped with a Zeiss camera (AxioCam MRc5; www.zeiss.com ), or with a Nikon Digital SLR camera (D80; www.nikon.com ). Signal characteristics Vibrational signals were monitored and characterized using two recording methods: a microphone and a laser Doppler vibrometer (LDV). Both methods involved recording late-instar larvae with a videocamera and a microphone or LDV during encounters with conspecific intruders (see below). Vibrations measured using a Polytec LDV (PDV 100; www.polytec.com ) were digitized and recorded onto a Marantz Professional portable solid state recorder (PMD 671; www.marantzpro.com ; 44.1 kHz sampling rate). Vibrations perpendicular to the leaf surface were measured at the location of a circular piece of reflective tape (2.0 mm in diameter) positioned 1 – 2.5 cm from the resident caterpillar. All recordings were made in an acoustic chamber (Eckel Industries, www.eckelacoustic.com ).
These recordings were used to determine the types of signals produced and to measure temporal and spectral characteristics of signaling. Temporal characteristics, including mean signaling bout duration, mean interval duration between signaling bouts and number of signals per bout were measured using Raven Bioacoustics Research Program (Cornell Laboratory of Ornithology; www.birds.cornell.edu/brp/ ). Bouts were defined as any combination of signals that was preceded and followed by feeding, walking or at least 1 s of inactivity. Durations of each signal type were calculated from 20 individuals. Power spectra were made using a 512-point Fourier transform (DFT, Hann window) in Raven Bioacoustics Research Program. Signals were not filtered and a power spectrum of background noise was included for comparison. Morphology Structures associated with signal production and the last abdominal segments (A8–A10) were examined in early- and late-instars preserved in 80% ethanol. For scanning electron micrographs, mandibles and head capsules were dissected, mounted on aluminum stubs and air-dried. Specimens were sputter-coated with gold-palladium and examined using a JEOL scanning electron microscope (JSM-6400; www.jeol.com ). Signal function Once it was established that O. rosea produces vibrational signals, we tested the hypothesis that signaling functions to advertise occupancy of leaves. Twenty-two encounters were staged between a resident larva and an introduced conspecific intruder of approximately the same size, as described in Bowen et al. ( 2008 ). Briefly, late-instar larvae were selected at random from 2 broods of wild-caught females. Residents and intruders were isolated on a leaf or in a container with viburnum twigs, respectively, for at least 30 min prior to the trial. Leaves were chosen based on size (mean ± SD: 8.4 ± 2.1 × 3.4 ± 1.2 cm) and the absence of feeding scars, or other types of leaf damage. 
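A power spectrum of the kind described above (a 512-point DFT with a Hann window) can be approximated with a short numpy sketch; this is a generic illustration on a synthetic signal, not the Raven implementation, and the 1 kHz test tone is an arbitrary stand-in for a recorded vibration:

```python
import numpy as np

fs = 44100  # sampling rate used for the LDV recordings (Hz)
t = np.arange(0, 0.5, 1 / fs)

# Synthetic stand-in for a recorded vibration: a 1 kHz tone plus noise
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1000 * t) + 0.3 * rng.standard_normal(t.size)

# 512-point Hann-windowed power spectrum, analogous to the Raven settings
n = 512
frame = signal[:n] * np.hanning(n)
power = np.abs(np.fft.rfft(frame)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak_hz = freqs[np.argmax(power)]
print(f"peak frequency ~ {peak_hz:.0f} Hz")  # falls near the 1 kHz test tone
```

With a 512-point transform at 44.1 kHz, the frequency resolution is about 86 Hz per bin, which is ample for signals whose energy lies mostly between 0.5 and 2.0 kHz.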
Trials were videotaped from 1 minute before the intruder was introduced until 1 min after one contestant left the leaf (i.e. when one contestant ‘won’ the encounter). If there was no winner within 30 minutes, the trial was deemed a ‘tie’. This time was chosen based on previous trials with related species ( D. arcuata, Yack et al. 2001 ; D. bilineata, Bowen et al. 2008 ). After each trial, the weight of each caterpillar was recorded and individuals were isolated in a separate container so they would not be reused in another trial. All trials were recorded using a Sony High Definition Handicam (HDR-HC7; www.sony.com ) and a remote Sony audio microphone (ECM-MS907) placed 1–2 cm behind the leaf or with the LDV. Videotapes from 22 trials were analyzed to measure the durations and outcomes of contests, and to monitor changes in signaling rates in both residents and intruders throughout each trial. Durations of trials in which the intruder signaled were compared to those in which only the resident signaled using a Wilcoxon rank sum test. To compare average signaling rates of residents and intruders during encounters, signals from 21 encounters (excluding one trial where the intruder won) were counted at 5-s intervals during the 80-s period prior to and the 80-s period following the time at closest distance between the resident and intruder. The distances between the head of the intruder and closest point of the resident were measured at each interval using ImageJ software ( http://rsb.info.nih.gov/ij/ ). In 18 trials where the intruder came within at least 0.5 cm of the resident, signaling rates with respect to decreasing distance between individuals were recorded. 
Rates were measured at three stages — far (20-s interval immediately following the point when the head of the intruder passed the junction of the petiole and the leaf), mid (20-s period following the mid-way point between the far and close distances) and close (20-s period following the point when the intruder first made contact with the resident, or in trials where contact was not made, when the intruder came closest to the resident, within 0.5 cm). Time intervals did not overlap in any of the trials. Signal escalation was analyzed by calculating the mean number of signals at each distance category for each type of signal and each individual. The data were square-root transformed and the means were compared using an analysis of variance (ANOVA). Post hoc analyses were conducted using a Tukey-Kramer HSD. A grand mean of signaling rates per signal type at each distance category was calculated to create a histogram. Overall signaling rates for O. rosea were calculated by taking the mean of all signaling types at all distance categories for comparison with D. arcuata and D. bilineata. Comparison with D. arcuata and D. bilineata In order to compare signaling between O. rosea and species that possess anal oars, signaling rates for D. arcuata and D. bilineata were obtained from staged encounters from previous studies using similar methods ( Yack et al. 2001 ; Bowen et al. 2008 ). Types of signals produced, patterns of signaling, signal escalation and signaling rates were compared between species. Overall signaling rates were compared between species using an ANOVA. Post hoc analyses were conducted using a Tukey-Kramer HSD.
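The escalation analysis described above (square-root transform of signal counts, then a one-way ANOVA across the far/mid/close distance categories) can be sketched as follows. The counts are hypothetical, and scipy's `f_oneway` stands in for the original analysis (the Tukey-Kramer post hoc step would need an additional package such as statsmodels):

```python
import numpy as np
from scipy import stats

# Hypothetical per-individual signal counts at the three distance categories
far = np.array([0, 1, 0, 2, 1, 0, 1, 0])
mid = np.array([2, 3, 1, 4, 2, 3, 2, 1])
close = np.array([6, 8, 5, 9, 7, 6, 8, 7])

# Square-root transform to stabilize the variance of count data
far_t, mid_t, close_t = np.sqrt(far), np.sqrt(mid), np.sqrt(close)

# One-way ANOVA: does mean (transformed) signaling rate differ by distance?
f_stat, p_value = stats.f_oneway(far_t, mid_t, close_t)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

A significant F with means increasing from far to close would correspond to the escalation pattern the study tests for.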
Results General behaviour and life-history Adult females ( Fig. 2A ) lay eggs singly or in small rows of 2–10 on the upper and under surface of the leaf ( Fig. 2B ). All instars live solitarily on the leaf. Early-instars occupy individual feeding areas at leaf edges, skeletonizing the leaf surface ( Fig. 2C ). Late-instar caterpillars occupy their own leaf ( Fig. 2D ) and will lay down a mat of silk on the leaf surface, but make no shelter. They begin feeding at the tip and will consume almost the entire leaf. If approached by a conspecific, leaf occupants of all instars will produce vibrational signals. Signal characteristics Microphone and LDV recordings revealed that O. rosea larvae produce three types of vibrational signals: mandible drumming, mandible scraping and lateral tremulation (Video). Signaling was initiated when a resident of a leaf was approached by a conspecific. Signaling was never observed in response to agitating the leaf or disturbances caused by a paintbrush. Overall, signaling typically occurred in bouts ( Fig. 3A ), lasting 2.2 ± 1.7 s (range = 0.4 – 6.5 s, n = 71 bouts from 16 individuals). Bouts typically comprised more than one signal, averaging 4.0 ± 2.0 signals per bout (range = 1.0 – 11.0, n = 71 bouts from 16 individuals). Time intervals between bouts were highly variable, ranging from 1.7 – 15.4 s (mean ± SD = 5.1 ± 3.6 s, n = 63 intervals from 15 individuals). Spectral analysis revealed that all signals are broadband with most energy ranging from 0.5 – 2.0 kHz ( Fig. 3C ). Mandible drumming . Mandible drumming ( Fig. 3 ) is produced by rapidly hitting the leaf surface with the serrated edges of open mandibles ( Fig. 4 ) to produce a short, percussive signal. Mandible drumming was used more frequently as the intruder approached the resident. The mean ± SD duration of a single drum is 66.9 ± 20.1 ms (range = 41.6 – 119.8 ms, n = 71 signals from 19 individuals). Mandible scraping . Mandible scraping ( Fig. 
3 ) is produced by a movement of the head, thorax and first two abdominal segments in a lateral arc in one direction, dragging the mandibles across the leaf surface to produce a scratching noise. Often the caterpillar will scrape in the opposite lateral direction immediately after the first scrape. Distance and duration of the scrape can be highly variable depending on the proximity of the conspecific and other factors, such as proximity of the leaf edge. Mandible scraping was also used more frequently as the intruder approached the resident. The mean ± SD duration of a single scrape is longer than that of a mandible drum, lasting 125.6 ± 21.4 ms (range = 70.0 – 157.2 ms, n = 69 signals from 17 individuals). Lateral tremulation . Lateral tremulation ( Fig. 3 ) was only observed in about half the individuals (in 40.9% of trials) and consists of quick, short, successive lateral movements of the head and thorax while the rest of the body remains motionless. A lateral tremulation event is distinguished from a mandible scrape by its much shorter, highly repetitive movement, in which the mandibles never touch the leaf surface. A single lateral tremulation event lasts on average 2.0 ± 0.6 s (range = 1.3 – 3.1 s, n = 32 signals from 9 individuals), and although highly variable, is much longer than a single mandible scrape or drum. One lateral tremulation event typically occurred at the beginning of a bout, followed by any combination of mandible drums and scrapes. Bouts rarely contained more than one lateral tremulation event. Signal function A total of 22 encounters were staged between a resident and an intruder of equal weight. Weights of the contestants ranged from 7.5 – 244 mg (mean = 88.2 ± 74.1 mg, n = 44), but were similar between contestants in a given trial (mean difference = 18.0 ± 17.6 mg, paired t-test, t = 1.23, P = 0.23). Residents won 91.0% of trials, intruders won 4.5% and 4.5% were ties. 
Contests lasted 457.4 ± 330.7 s in trials where a winner was decided ( n = 21). The only contest won by an intruder was of average duration (510.0 s). Residents remained silent until they detected an intruder ( Figs. 5 , 6 ). Residents signaled in 84.2% of trials where signaling occurred, and were the first to signal in 78.9% of trials, at a latency of 200.9 ± 193.3 s ( n = 15) from the beginning of the trial and at a mean distance of 6.97 ± 9.91 mm ( n = 15) from the intruder's head to the closest point on the resident's body. Residents remained in the same approximate position on the leaf during trials. Signaling did not occur at all in three trials. Intruders signaled in 47.4% of trials where signaling occurred, but were the only contestants to signal in 15.8% of trials. Overall, residents signaled at significantly higher rates than intruders ( Fig. 5 ; paired t-test, t = -3.84, P = 0.001, n = 21). The rate of signaling in residents escalated as the intruder approached ( Fig. 6B ). Very little signaling was observed at far and mid distances, except for the occasional mandible drum and lateral tremulation event. Overall, signaling was significantly higher at close distances, where both mandible drumming and mandible scraping did not change from far to mid distances but increased significantly from mid to close distances ( Fig. 6B ; ANOVA; MD: F = 22.6, P = 0.001; MS: F = 6.1, P = 0.43; V: F = 22.6, P < 0.001). Lateral tremulation did not vary significantly with distance, perhaps because it was rarely observed in comparison to the other signals ( Fig. 6B ; ANOVA, F = 2.8, P = 0.07). A fourth type of behaviour that lacks a vibrational signal was observed in 31.8% of trials ( Fig. 5 ). Lateral tail contact involves a quick lateral movement of the elongated caudal projection, usually towards the intruder. 
Lateral tail contact is typically observed when the intruder touches the resident near its abdominal end, and the resident swings its tail back and forth multiple times, making contact with the intruder. Lateral tail contact was found to increase significantly from mid to close distances (ANOVA, F = 4.9, P = 0.01). Biting was never observed. Comparison with D. arcuata and D. bilineata Drepana arcuata, D. bilineata and O. rosea are all solitary in their late-instars and defend territories against conspecifics. D. arcuata is the only species that makes a silken leaf shelter, while the others produce minimal silk by laying mats on the leaf surface ( Fig. 6A ). Morphological analyses revealed that the mandibles are similar in position and general appearance between species, and confirmed the lack of anal oars in O. rosea, which are present and important signal-producing structures in D. arcuata and D. bilineata. Consequently, O. rosea does not produce an anal scraping signal. It does, however, produce a lateral tremulation signal, which is not found in either of the other species ( Fig. 6B ). Mandible drumming is produced by all species and mandible scraping is produced in O. rosea and D. arcuata ( Fig. 6B ). Signaling patterns are similar between species, all occurring in bouts, although the structure of bouts differs. The pattern of signaling within bouts in O. rosea is highly variable, whereas patterns of signaling in D. bilineata and D. arcuata are more regular, often beginning with an anal scrape followed by one or more mandible drums/scrapes. In terms of signaling rates, O. rosea signals significantly less than D. arcuata, producing significantly fewer mandible drums and mandible scrapes ( Fig. 6B ; ANOVA; MD: F = 41.1, P < 0.001; MS: F = 30.1, P < 0.001). When compared to D. bilineata, O. rosea mandible drums significantly less (ANOVA, F = 41.1, P < 0.001). Lateral tail contact was also compared between species, and it was found that O. 
rosea contacts conspecifics with its caudal projection at similar rates to D. bilineata (independent t-test, t = -0.61, P = 0.54, two-tailed), whose caudal projection is about 10 times smaller ( Fig. 6A ). Unlike D. arcuata and D. bilineata, O. rosea was typically not observed to contact a conspecific with its head. Combined signaling rates (not including lateral tail contact) differ significantly between species, with D. arcuata signaling significantly more than D. bilineata and O. rosea, and D. bilineata signaling significantly more than O. rosea (ANOVA, F = 75.9, P < 0.001).
Discussion The purpose of this study was to examine a variation on a theme — vibrational signaling in hook-tip moth caterpillars. The Drepaninae subfamily shows interesting diversity in vibrational signaling and in the morphology of the terminal abdominal segment. While all species lack anal prolegs ( Minet & Scoble 1999 ), only some possess specialized sound-producing structures, anal oars. The present study is the first to describe vibrational signaling in a species of Drepaninae that does not possess these structures. Despite the lack of anal oars, our results show that O. rosea produces three types of substrate-borne signals upon encountering a conspecific — mandible drumming, mandible scraping and lateral tremulation. The only morphological structures employed by O. rosea to produce vibrational signals are the mandibles, which do not appear to be specifically differentiated for sound production. There is mounting evidence that the use of mandibles for acoustic signaling may be common in caterpillars ( Yack et al. 2001 ; Brown et al. 2007 ; Fletcher et al. 2006 ; Bowen et al. 2008 ; Bura et al. 2009 ). Although mandible drumming and scraping have already been described in two other species of Drepaninae ( Yack et al. 2001 ; Bowen et al. 2008 ), lateral tremulation has not been reported until now. Signal function Results from staged encounters support the hypothesis that vibrational signaling in O. rosea is used to advertise occupancy of leaf territories. Our findings are also consistent with territorial displays in other animals ( Huntingford & Turner 1987 ). Signaling is produced in the presence of a conspecific and acoustic displays are restricted to a territory. Residents are typically the first to signal during an encounter, signaling significantly more than intruders, and winning significantly more encounters (more than 90% of trials in this study). Signaling rates also escalate as the intruder approaches. 
Alternative signal functions observed in other acoustically communicating larvae include aposematic warning signals ( Brown et al. 2007 ; Bura et al. 2009 ), mutualistic relationships with ants ( DeVries 1991 ; Travassos & Pierce 2000 ) and conspecific recruitment ( Fletcher 2007 ). The aposematic signaling hypothesis can be discounted in O. rosea larvae because they have no obvious noxious defenses and are palatable to predators (e.g. predatory stink bugs, leopard geckos, tarantulas; unpublished data). Furthermore, O. rosea larvae were not observed to produce vibrational signals during encounters with predators (unpublished data). The latter alternative hypotheses can also be discounted in O. rosea larvae as they do not produce secretions, are not associated with ants and are not gregarious at any stage of their life cycles. In the future, playback experiments may provide further insight into the function of vibrational signaling in these caterpillars. Comparison between species and insights into evolution Ritualized vibrational signaling in O. rosea and other species of Drepaninae is thought to have evolved to avoid the costs associated with physical fighting, as territorial encounters in other larval species often end in serious injury or death to one of the contestants ( Weyh & Maschwitz 1982 ; Okuda 1989 ; Berenbaum et al. 1993 ). The investment in leaf defense may be proportional to investment in nest production, because leaf shelters are expensive to build and valuable to own ( Berenbaum et al. 1993 ; Cappuccino 1993 ; Costa & Pierce 1997 ). This is exemplified in the Drepaninae studied to date, where D. arcuata, the only species that produces a leaf shelter, invests significantly more in leaf defense via vibrational signaling than O. rosea and D. bilineata. Of the three species, D. arcuata is also the only species that lives gregariously in the early-instar stage. 
Therefore, the chances of encountering a wandering caterpillar from the same brood are expected to be higher in D. arcuata than in O. rosea and D. bilineata because the latter disperse earlier in development. Higher rates of ritualized signal production in D. arcuata may thus have evolved to avoid incurring physical injury to relatives. Oreta rosea and D. bilineata share similar life-histories in that they live solitarily at all instars, do not build leaf shelters, and produce comparable amounts of silk. Signaling rates would thus be expected to be similar between these two species if signaling were linked to nest investment. However, this is not the case, as D. bilineata signals at a significantly higher rate than O. rosea. To determine the cause of the difference in signaling rates between these two species, future comparative studies examining caterpillar behaviour in natural conditions are required to assess other life-history traits that may be linked to signaling. Comparison of territorial behaviour in O. rosea and D. bilineata also suggests that the elongated caudal projection found in O. rosea did not evolve for a defensive function against conspecifics, because no significant difference was found in rates of lateral tail contact between species, despite the distinct difference in caudal projection size. This does not discount its use as a defense against heterospecifics, such as parasitoids, and further studies examining its specific function are needed. The present study contributes to the understanding of vibrational signaling in the Drepaninae, describing signaling in a novel morphological form. It also provides evidence that signaling in the Drepanoidea may be widespread and highly variable. Each species possesses unique characteristics that can contribute to their vibrational signaling repertoire. 
Further behavioural and morphological observations in a number of Drepanoidea species mapped onto a molecular phylogeny are now underway and will provide additional insights into the ultimate and proximate mechanisms underlying the evolution of ritualized signaling in these caterpillars.
Abstract Vibrational communication in hook-tip moth caterpillars is thought to be widely used and highly variable across species, but this phenomenon has been experimentally examined in only two species to date. The purpose of this study is to characterize and describe the function of vibrational signaling in a species, Oreta rosea Walker 1855 (Lepidoptera: Drepanidae), that differs morphologically from previously studied species. Caterpillars of this species produce three distinct types of vibrational signals during territorial encounters with conspecifics — mandible drumming, mandible scraping and lateral tremulation. Signals were recorded using a laser Doppler vibrometer and characterized based on temporal and spectral components. Behavioural encounters between a leaf resident and a conspecific intruder were staged to test the hypothesis that signaling functions as a territorial display. Drumming and scraping signals both involve the use of the mandibles, which are hit vertically on, or scraped laterally across, the leaf surface. Lateral tremulation involves quick, short, successive lateral movements of the anterior body region that vibrate the entire leaf. Encounters result in residents signaling, with the highest rates observed when intruders make contact with the resident. Residents signal significantly more than intruders and most conflicts are resolved within 10 minutes, with residents winning 91% of trials. The results support the hypothesis that vibrational signals function to advertise leaf occupancy. Signaling is compared between species, and evolutionary origins of vibrational communication in caterpillars are discussed.
Acknowledgements We thank Lynn Scott for collecting wild moths, and Veronica Bura and Tiffany Eberhard for their help with data collection. We would also like to thank Dr. Ivar Hasenfuss for providing unpublished observations on other Drepanoidea species. This project was funded by an NSERC Discovery Grant, CFI and OTI grants to JEY, and an NSERC Postgraduate Scholarship to JLS. Abbreviations: mandible drum; mandible scrape; lateral tremulation event; laser Doppler vibrometer
CC BY
J Insect Sci. 2010 Jun 4; 10:54
PMC3014814
20572791
Introduction As a member of the caspase (cysteinyl-aspartate specific proteinase) family, interleukin-1 beta-converting enzyme (ICE) was discovered in mammals and named caspase-1. It is considered the initiator in caspase-dependent apoptosis. ICE was identified as a CED-3-like protein in Caenorhabditis elegans ( Yuan et al. 1993 ). In lepidopteran insects, ice was identified as a pro-death factor in the developmental apoptotic process of Heliothis virescens midguts ( Parthasarathy and Palli 2007 ). According to the reported sequences in GenBank, three silkworm ice homologs — ice , ice-2 and ice-5 — were described (Accession numbers: ice , AY885228; ice-2 , DQ360829; and ice-5 , DQ360830). In a previous study ( Song et al. 2007 ) ice-2 and ice-5 were cloned with open reading frames of 852 and 936 base pairs (bp), respectively. Many agents that induce apoptosis are either oxidants or stimulators of cellular oxidative metabolism ( Haddad 2004 ). H 2 O 2 is a reactive oxygen species. In general, reactive oxygen species are harmful to living organisms because they tend to cause oxidative damage to proteins, nucleic acids, and lipids ( Hermes-Lima and Zenteno-Savín 2002 ). They can also induce various biological processes ( Suzuki et al. 1997 ) and have been proposed as common mediators of apoptosis ( Haddad 2004 ). H 2 O 2 is an oxidant that triggers caspase activation and subsequent apoptosis ( Blackstone and Green 1999 ). Therefore, an oxidative damage model based on H 2 O 2 could be efficient for elucidating the roles of ice-2 and ice-5 in H 2 O 2 induced apoptosis. Kidd ( 1998 ) reported that H 2 O 2 -mediated caspase activation was dependent on the release of cytochrome c from mitochondria, suggesting a key role for this peroxide in mitochondrial permeability and leakage. Before the release of cytochrome c from the mitochondria, the mitochondrial membrane potential is lost ( Twomey and McCarthy 2005 ). 
This study attempted to characterize the ice-2 and ice-5 genes in the early phase of H 2 O 2 induced apoptosis and to observe morphological and mitochondrial membrane potential changes in cells of Bombyx mori L. (Lepidoptera: Bombycidae). Meanwhile, time course transcriptional profiles of the two genes were investigated by quantitative real-time PCR. This report will provide new insight into the function of ICEs in insects. Additionally, damage caused by H 2 O 2 and by UV irradiation was compared in this paper and may provide insight into the role of insect ICEs during apoptosis.
Material and Methods B. mori cell culture B. mori ovary-derived cells, a gift of Dr. Xiangfu Wu (Chinese Academy of Sciences, Shanghai Institute of Biochemistry and Cell Biology), were cultured in TC-100 insect cell culture medium (Gibco brand, Invitrogen, www.invitrogen.com ) supplemented with 10% fetal bovine serum at 27° C. H 2 O 2 was applied to the B. mori cells, which were plated at a density of 2 × 10 6 cells in 6-well plates (Corning, www.corning.com ). They were incubated for 3–5 days at 27° C, and then used for further studies. Hydrogen peroxide treatment Apoptosis was induced in B. mori cells by exposure to different concentrations (0.09 – 90 μM) of H 2 O 2 , and the median lethal dose (LD 50 ) was calculated. While incubating at the LD 50 H 2 O 2 concentration, B. mori cells were observed microscopically at specified intervals for the appearance of apoptotic bodies, and were collected at regular intervals. UV irradiation treatment The cells, covered with a very thin layer of phosphate buffered saline, were irradiated for 20 s under UVA and UVB lamps at different UV doses (5 – 50 mJ/cm 2 ). The total dosage was measured by a radiometer (International Light, Inc., www.intl-lighttech.com ) fitted with a UV detector. At the LD 50 UV dose, B. mori cells were observed microscopically at specified intervals for the appearance of apoptotic bodies, and were collected at regular intervals. MTT assay for cell mortality The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay was used to detect mortality and was carried out according to Fornelli et al. ( 2004 ). Five mg/ml MTT was dissolved in phosphate buffered saline, and 20 μl of this stock solution was added to the culture wells. The incubation time with MTT was 3 h at 27° C. 
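The text does not spell out how the LD 50 was computed from the MTT dose-response data; one common approach is linear interpolation on a log-dose scale between the two doses bracketing 50% mortality. The dose-mortality values below are hypothetical, chosen only to span the paper's 0.09 – 90 μM range:

```python
import math

def estimate_ld50(doses, mortality):
    """LD50 by linear interpolation on a log10-dose scale between the
    two doses bracketing 50% mortality (assumes mortality rises with dose)."""
    pairs = list(zip(doses, mortality))
    for (d1, m1), (d2, m2) in zip(pairs, pairs[1:]):
        if m1 <= 0.5 <= m2:
            frac = (0.5 - m1) / (m2 - m1)
            log_ld50 = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10 ** log_ld50
    raise ValueError("50% mortality not bracketed by the tested doses")

# hypothetical dose-response; mortality = 1 - viability from the MTT assay
doses = [0.09, 0.9, 3.0, 9.0, 90.0]          # uM H2O2
mortality = [0.05, 0.22, 0.49, 0.71, 0.95]   # fraction of cells killed
ld50 = estimate_ld50(doses, mortality)
```

A probit or logistic fit over all doses is the more rigorous alternative; the interpolation above is just the simplest defensible sketch.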
The supernatant was removed, and 150 μl of dimethyl sulfoxide was added to each well before reading optical density at 580 nm with fluorescence spectrometry (Spectra Max, Gemini EM, Molecular Devices, www.moleculardevices.com ). Mortality = 1 - viability. JC-1 assay for mitochondrial membrane potential Change in the potential of the mitochondrial membrane was assessed in live B. mori cells by using the lipophilic cationic probe 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolcarbocyanine iodine (JC-1) ( Smiley et al. 1991 ). For quantitative fluorescence measurement, cells were rinsed once after JC-1 staining and scanned with fluorescence spectrometry at 485-nm excitation and 530 and 590 nm emission, to measure green and red JC-1 fluorescence, respectively. Each well was scanned at 25 areas rectangularly arranged in a 5 × 5 pattern with 1-mm intervals and an approximate beam area of 1 mm 2 (bottom scanning). RNA extraction Total RNA was extracted from the collected cells using Trizol (Invitrogen) according to the manufacturer's protocol. Contaminating genomic DNA was removed by RNase-free DNase I (Promega, www.promega.com ). The concentration of the RNA was assessed using the Genspec III spectrophotometer (Hitachi Genetic Systems, www.biospace.com ), and the integrity of the RNA was assessed by running 2 μl of RNA on a 1% ethidium bromide/agarose gel. The RNA was stored at -70° C until needed. Reverse transcription 2 μg of DNase-treated RNA was reverse-transcribed to single stranded cDNA in a 20 μl reaction containing 0.2 μmol/L oligo-dT, 0.5 mmol/L of each dNTP, 5 μl M-MLV 5 × reaction buffer, and 200 U M-MLV reverse transcriptase (Promega). The thermal cycling profiles were as follows: 65° C for 5 min, 37° C for 60 min, and 75° C for 5 min. The resultant cDNA was stored at -20° C until needed. Quantitative real-time PCR Primers used for the real-time PCR amplification of ice-2 , ice-5 and B. mori actin were selected based on the sequences available in GenBank. 
Primers were designed for specific detection (for ice-2 , Forward: 5′ tctgttgacggttatctttc 3′ and Reverse: 5′ tattgttggtctcctgacat 3′; for ice-5 , Forward: 5′ tgttgacgagcttgtgactc 3′ and Reverse: 5′ caccatcgtgatcatatgca 3′). Primers for B. mori actin A3 (Forward: 5′ atccagcagctccctcgagaagtct 3′ and Reverse: 5′ acaatggagggaccagactcgtcgt 3′) were used to amplify an endogenous reference gene in real-time PCR. Real-time PCR amplifications were performed to examine the relative expression of ice-2 and ice-5 in treated B. mori cells in a sequence detection system (MX3000P, Stratagene, www.stratagene.com ). Duplicates of 0.5 μl cDNA from each reverse transcription reaction were used as templates. The reactions were performed in a total volume of 50 μl using the SYBR Premix Ex Taq™ perfect real-time kit (TaKaRa, www.takara-bio.com ) as recommended by the manufacturer. The following MX3000P thermocycling program was used: denaturation program (3 min at 95° C), amplification and quantification program repeated 40 times (10 s at 95° C, 30 s at 58° C and 20 s at 72° C with a single fluorescence measurement), and melting curve program (55° C to 95° C with a heating rate of 0.1° C/s). Relative expression levels of ice-2 and ice-5 were calculated with the comparative Ct (2 -ΔΔCt ) method. Means and standard errors for each time point were obtained from the averages of three independent sample sets. Statistical analysis Data are presented as the mean ± SD or mean ± SE of results of two or three separate experiments, as specified in the figure legends. Statistical significance was calculated (SPSS 11.5, SPSS Inc., www.spss.com ) with one-way ANOVA and one-sample t-test. A p value lower than 0.05 was considered significant.
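The comparative Ct method named above reduces to a one-line calculation: the target gene's Ct is normalized to the actin A3 reference, referenced to a calibrator sample (e.g. the 0 h time point), and converted to a fold change. The Ct values below are hypothetical, for illustration only:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative Ct (2^-ddCt): fold change of a target gene,
    normalized to a reference gene and relative to a calibrator sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct

# hypothetical Ct values: target at 0.5 h vs. the 0 h calibrator,
# both normalized to the actin A3 reference gene
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_cal=27.0, ct_ref_cal=18.0)
# ddCt = (24 - 18) - (27 - 18) = -3, so the fold change is 2^3 = 8
```

Note the method assumes roughly equal amplification efficiencies for the target and reference genes; each 1-cycle drop in ddCt corresponds to a doubling of relative expression.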
Results Sequence analysis of ice-2 and ice-5 Sequence analysis suggested that B. mori ice-2 and ice-5 resemble human caspase-3, which plays a role as an effector and depends on the release of cytochrome c from the mitochondrion. Interestingly, expression of the ice isoform was not detected in the previous study, since no copies of ice were found. Moreover, the isoforms ice-2 and ice-5 were transcribed from the same gene but spliced differently under UV irradiation, and both have a QACRG active site that belongs to the caspase family ( Song et al. 2007 ). Sequence analysis revealed that ice-2 had seven exons, while ice-5 had eight. The difference between the two genes was that ice-5 contained an extra 84-bp exon, whose 28 encoded amino acids are unique to ice-5 ( Figure 1 ). LD 50 values for H 2 O 2 and UV irradiation that induce cell apoptosis Apoptosis was induced in B. mori cells by exposure to different concentrations (0.09 – 90 μM) of H 2 O 2 , and the LD 50 value was calculated using the MTT assay. The same test was repeated with UV irradiation. Table 1 shows that the best concentration of H 2 O 2 was 3 μM because the mortality (49.074%) of 3 μM-treated B. mori cells was nearest to the LD 50 . The best dose of UV irradiation was 20 mJ/cm 2 , with a mortality rate of 45.961%, which was the nearest to the LD 50 . Morphological change of cells after H 2 O 2 stimulation Using a microscope, B. mori cells were observed after H 2 O 2 stimulation at regular intervals from 0 to 12 h. As time passed, the morphology of the cells changed. However, in the first 4 h after stimulation, only a few cells had a morphology different from that of normal cells ( Figure 2 ). Then some cell membranes wrinkled and the cells became smaller than normal cells by 5 h after stimulation. By 6 h after stimulation, wrinkling was more obvious. Bubble-like bodies appeared around wrinkled cells at 9 h post-stimulation. 
Vesicles formed in cell membranes, and apoptotic bodies were observed from 10 h to 12 h. Change in mitochondrial membrane potentials B. mori cells were acutely exposed to 3 μM H 2 O 2 and were tested at different times using the JC-1 assay. The results showed that during the first 5 h, the 590:530 fluorescence ratio of JC-1 dye declined more sharply than during the following 7 h, over which the change was negligible by comparison ( Table 2 ). The red-green JC-1 fluorescence ratio started to decrease at 0.5 h after H 2 O 2 stimulation. After declining dramatically, the red-green JC-1 fluorescence ratio tailed off steadily from 6 h to 12 h after stimulation. Expression profiles of the ice-2 and ice-5 genes The relative expression of ice-2 and ice-5 mRNA in H 2 O 2 stimulated B. mori cells was analyzed by quantitative real-time PCR. The ice-2 gene was highly expressed at two time points, 0.5 and 5 h after H 2 O 2 stimulation, while the expression level of ice-5 peaked at 0.5, 3, and 5 h after H 2 O 2 stimulation ( Figure 3 ). At other times, however, very low levels of both ice-2 and ice-5 mRNAs were detected. The mRNA level of ice-5 was higher than that of ice-2 at the majority of time points from 0 to 6 h, except for the 5 h time point. Comparisons between damage from H 2 O 2 and UV irradiation Although at 5 h post stimulation the images of dying B. mori cells treated with H 2 O 2 were distinct from UV treated cells, both had similar appearances at 12 h ( Figure 4 ). Apoptotic bodies could be found easily under a microscope at 200× magnification. Moreover, H 2 O 2 treated cells formed membrane vesicles at 9 h, while UV treated cells started to vesiculate at 5 h, when the response of the cells to the stimuli was first detected. Additionally, throughout the process, the change in the fluorescence ratio of H 2 O 2 treated cells (10.413) was more obvious than that of the UV treated cells (4.938) ( Table 2 ). 
In H 2 O 2 treated cells, the fluorescence ratio declined at 0.5 h, but it declined at 6 h in UV treated cells ( Table 2 ).
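The JC-1 readout described in the Methods reduces each well to a mean 590:530 (red:green) fluorescence ratio over the 25 scan points; a falling ratio indicates loss of mitochondrial membrane potential. A minimal sketch with hypothetical plate-reader values, not the study's measurements:

```python
def jc1_ratio(red_590, green_530):
    """Mean red:green (590:530 nm) JC-1 ratio across one well's scan
    points. A drop in this ratio signals depolarized mitochondria
    (more monomeric green JC-1, fewer red J-aggregates)."""
    return sum(r / g for r, g in zip(red_590, green_530)) / len(red_590)

# hypothetical readings from a 5 x 5 bottom scan (25 points per well)
healthy  = jc1_ratio([900.0] * 25, [300.0] * 25)   # polarized mitochondria
stressed = jc1_ratio([400.0] * 25, [400.0] * 25)   # after oxidant exposure
```

Tracking this per-well ratio at each sampling time is what yields the decline curves summarized in Table 2.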
Discussion As previously reported, the decrease of mitochondrial membrane potential started at the very beginning of the treatment and preceded the morphological change of the cells. This implies that apoptosis induced by H 2 O 2 might relate to the intrinsic apoptotic pathway via effects on the mitochondria. The peak levels of ice-2 and ice-5 were reached when the cellular morphology was still unchanged but the mitochondrial membrane potential had already changed considerably ( Figures 2 and 3 , Table 2 ), suggesting that the activation of B. mori ice-2 and ice-5 might be related to the release of cytochrome c from the mitochondria. Later, at 5 h after stimulation, changes in all the data were obvious. First, cell membranes were triggered to wrinkle, and cells became smaller than normal cells. At the same time, the mitochondrial membrane potential steadily declined, following the dramatic decrease during the first 5 h. There was also another increase in the expression of ice-2 and ice-5 . In Spodoptera frugiperda cells, oxidant treatments resulted in the release of cytochrome c followed by the activation of caspase-3 ( Sahdev et al. 2003 ). Therefore, B. mori ICEs might be regulated by H 2 O 2 and related to the dysfunction of mitochondria; ice-2 and ice-5 may act initially as initiators associated with mitochondria, and then as effectors following the dysfunction of mitochondria in H 2 O 2 induced apoptosis. The fact that the genes of ice-2 and ice-5 differ by just one exon implies that different mRNAs are present. This is likely related to the different patterns in their expression profiles. From 0 to 0.5 h after exposure to H 2 O 2 , while the level of ice-2 increased from low to high, the level of ice-5 increased from being undetectable to its highest level ( Figure 3 ). 
Then, after expressing stable levels for a while, ice-2 rose to its highest level, and ice-5 reached its second peak, suggesting that ice-5 may play a more active role in the early phase of H 2 O 2 -induced apoptosis than ice-2 , and that they may have complementary functions; ice-2 and ice-5 might induce their own expression in the later phases of apoptosis. Based on the expression profiles, the levels of both ice-2 and ice-5 decreased significantly at 1 h after H 2 O 2 stimulation, and the level of ice-2 remained low from 1 to 4 h after H 2 O 2 stimulation. In contrast, the level of ice-5 fluctuated from low to medium levels during this period. This was quite different from the profile of UV induced apoptosis ( Figure 5 ). During UV induced apoptosis, from 1 to 4 h post treatment, ice-5 was almost undetectable. This difference may have resulted in the changing morphology of B. mori cells at 5 h after stimulation. The unique expression patterns of ice-2 and ice-5 suggest that the single exon difference between them may be the reason for the unique role of ice-5 in the apoptotic pathway. In addition, the total reduction in the fluorescence ratio of H 2 O 2 treated cells is about 3 times greater than that of UV treated cells. This suggests that H 2 O 2 -induced damage led to a more serious loss in the potential of the mitochondrial membrane ( Table 2 ). This may have happened because UV irradiation damage to cells is only partly due to oxidative damage causing mitochondrion dysfunction ( Kannan and Jain 2000 ). When UV irradiation causes DNA mutation, DNA repair mechanisms might function to restore some mutations, so that both ice-2 and ice-5 were less active in UV stimulated cells. 
In conclusion, the synchronous expression profiles of ice-2 and ice-5 indicate that activation of these genes may be related to mitochondrial dysfunction after H2O2-induced damage, and that ice-2 and ice-5 might cooperate in the early phases of both H2O2- and UV-induced apoptosis in a B. mori cell line. The comparison between the relative expression profiles of H2O2- and UV-induced apoptosis suggests that the absence in ice-2 of an 84-bp exon that exists in ice-5 might be the reason for the lower activity of ice-2 than of ice-5 in the H2O2-induced apoptotic pathway. Because UV irradiation not only induces the generation of OH radicals and H2O2 ( Kannan and Jain 2000 ), but can also cause DNA mutation, UV-induced apoptosis is more complex than H2O2-induced apoptosis. This phenomenon would occur uniquely in UV irradiation-induced apoptosis and is a topic for further study.
Associate Editor: Kostas Iatrou was editor of this paper. Abstract Caspase family proteins play important roles in different stages of the apoptotic pathway. To date, however, the functions of Bombyx mori L. (Lepidoptera: Bombycidae) caspase family genes are poorly known. This paper focuses on the morphology, mitochondrial membrane potential, and expression profiles of two novel B. mori caspase family genes ( ice-2 and ice-5 ) in B. mori cells, derived from the ovary of B. mori , damaged with 3 μM hydrogen peroxide (H2O2). In addition, comparisons were made between damage caused by H2O2 and by ultraviolet (UV) irradiation. The results showed that the change in mitochondrial membrane potential occurred at 0.5 h after H2O2 stimulation, sooner than in the UV-treated model, where the obvious decrease appeared at 6 h after stimulation. In addition, the total change in mitochondrial membrane potential in H2O2-treated B. mori cells was larger than in UV-treated cells during the whole process. Analysis by fluorescent quantitative real-time PCR demonstrated that ice-2 and ice-5 might be involved in both H2O2- and UV-induced apoptosis in B. mori cells. Notably, after exposure to H2O2, the expression levels of ice-5 were remarkably higher than those of ice-2 , while the result was the opposite after exposure to UV irradiation. The data indicate that apoptosis induced by H2O2 was directly related to the mitochondrial pathway. The two isoforms of B. mori ice may play different roles in the mitochondrion-associated apoptotic pathway in B. mori cells, and the apoptotic pathway in H2O2-induced B. mori cells is different from the UV-induced apoptotic pathway.
Acknowledgements This work was supported by the 973 National Basic Research Program of China (2005CB121005); the Six-Field Top Programs of Jiangsu Province; the National Natural Science Foundation of Jiangsu Education Committee (06KJD180043); and the Innovation Foundation for Graduate Students of Jiangsu Province.
CC BY
J Insect Sci. 2010 May 8; 10:43
PMC3014815
20672983
Introduction The insect juvenile hormones (JH) represent a family of acyclic sesquiterpenoids that regulate a diversity of processes in the insect life cycle ( Nijhout 1994 ; Riddiford 1996 ; Gade et al. 1997 ; Lafont 2000 ; Goodman et al. 2005 ). JH affects insect development by maintaining the larval stage and inhibiting metamorphosis. In adults, JH is involved in regulating reproductive physiology ( Riddiford 1996 ). Although well-studied from the physiological standpoint, the molecular mechanisms underlying JH action remain largely unknown ( Jones 1995 ). Several molecular mechanisms for JH action have been proposed ( Wheeler et al. 2003 ; Goodman et al. 2005 ). It has been suggested that JH acts through a specific nuclear receptor complex that modulates gene expression at the level of transcription ( Riddiford 1996 ). This hypothesis is supported by the lipophilic nature of JH and its chemical similarity to the retinoids, compounds known to activate specific nuclear transcription factors, including the vertebrate retinoid X receptor ( Mangelsdorf et al. 1995 ). Due to its lipophilicity, one might expect the hormone to easily pass through the cellular membrane and interact with cytosolic or nuclear transcription factors; however, there is increasing evidence that suggests JH may act at the membrane level, triggering a membrane-receptor-mediated signal transduction pathway. In the male accessory glands of Drosophila melanogaster , it has been demonstrated that JH acts via protein kinase C and calcium to stimulate protein synthesis ( Yamamoto et al. 1988 ). This interaction with protein kinase C is a classical signal transduction pathway that involves membrane receptors and G proteins ( Sevala et al. 1989 ; Pszczolkowski et al. 2005 ; Kethidi et al. 2006 ). Thus, JH may regulate gene expression at multiple levels and through multiple mechanisms.
Genome-wide gene expression analysis by microarray is the method of choice to identify insect genes that are affected by JH treatment. A Drosophila microarray chip is currently available that contains probes for 14,010 putative open reading frames (ORF) within the genomic DNA of this model insect (Affymetrix, Inc). While microarray technology is widely used for expression analysis, the technique exhibits problems that are becoming increasingly apparent. Microarray is a reliable method to detect changes in expression of high abundance genes, but the accuracy of identifying changes in low abundance gene transcripts is somewhat problematic ( Beckman et al. 2004 ; Morey et al. 2006 ). Of particular concern is the ability of microarray analysis to correctly identify changes in low abundance genes or down-regulation of medium abundance genes. Both of these problems arise from the interference of background fluorescence with the low intensity signal from low abundance genes or lower expression of medium abundance genes ( Beckman et al. 2004 ). The accuracy of microarray can be optimized by defining a threshold of reliability based on fold-change and P-value from the chip analysis software, but problems with false positives and false negatives remain ( Morey et al. 2006 ). Real-time quantitative reverse-transcription PCR (real-time RT-qPCR) has become the standard technology to verify microarray gene expression profiling. Real-time RT-qPCR has many advantages over microarray for the quantification of specific gene transcripts, such as the affordability of performing multiple biological replications and normalizing expression to validated reference RNAs that are known to be invariant under experimental conditions. A major advantage of real-time RT-qPCR is a greatly expanded dynamic range.
Microarray analysis can reliably detect expression differences over a three-order-of-magnitude range (1000-fold), while the dynamic range of real-time RT-qPCR extends over seven orders of magnitude (10 million-fold) ( Beckman et al. 2004 ). This enables the accurate measurement of differences over a much larger range of gene expression levels, including medium and low abundance transcripts. Two strategies are commonly employed to quantify the results obtained by real-time RT-qPCR: the standard curve method (absolute quantification) and the comparative threshold method (relative quantification). Absolute quantification relies on the inclusion of a standard curve on each reaction plate and results in determination of the actual quantity of the target transcript expressed in copy number or weight. This method has the advantage of correcting differences in primer efficiencies. The disadvantage of absolute quantification is the significant reduction in the number of experimental samples that can be run on a single plate. Relative quantification determines changes in steady-state mRNA levels of a gene across multiple samples and biological replicates by determining the change in gene expression relative to a control RNA that is designated as the calibrator ( Pfaffl 2001 ; Rasmussen 2001 ). With this method, target transcript amounts are expressed as a relative expression ratio (RER) relative to the calibrator. Both methods require the normalization of target gene expression using multiple stably expressed internal control mRNAs. These reference gene mRNAs must be shown to be stable under the experimental conditions being examined and are evaluated using software programs such as BestKeeper or geNorm ( Vandesompele et al. 2002 ; Pfaffl et al. 2004 ). As with any quantitative measure, care must be taken with real-time RT-qPCR to ensure that the necessary controls and evaluations have been performed.
These include: assessment of RNA quality, assessment of DNA contamination, determination of primer efficiencies and sensitivities, and the use of multiple stable reference RNAs. A recent survey of real-time RT-qPCR publications revealed that only 30% of the published analyses examined satisfied all of these criteria ( Bustin 2005 ). In this work, we analyzed genome-wide JH III induced expression changes in Drosophila S2 cells by microarray. As a control for the lipid component of JH III, methyl linoleate (MLA) was used, as it is a lipid with physical characteristics similar to JH III but is not hormonally active in insects. Microarray expression differences for a select set of genes were validated by real-time RT-qPCR using several reference transcripts that were stably expressed in S2 cells under experimental conditions.
Materials and Methods Purification and quantification of JH homologs JH III and MLA were purchased from Sigma Chemicals. The biologically active enantiomer (10 R ) of JH III was isolated from a racemic mixture by chiral HPLC chromatography ( Cusson et al. 1997 ). Cell culture Drosophila S2 cells (Invitrogen) were maintained in SF900 serum-free medium (Invitrogen) at 27° C. Cells (5 × 10⁵/ml) were seeded into 60 mm Petri dishes (Nunc) containing 3 ml of medium and allowed to grow for 36 h. Cells at approximately 80% confluency were challenged with 250 ng/ml (10 R ) JH III using charcoal-stripped 0.1% bovine serum albumin (BSA) as a carrier for 4 h. Control cells were treated with 0.1% BSA alone or 0.1% BSA with 250 ng/ml MLA and harvested after 4 h of treatment. Isolation of RNA Cells were lysed directly in the culture dishes and total RNA extracted using the RNeasy mini kit (Qiagen). RNA was exhaustively treated with 2 U of Turbo DNase (Ambion) for 1 h at 37°C and quantified by UV spectrophotometry (NanoDrop, Inc.). RNA quality was determined by electrophoresis of samples on denaturing agarose gels. Residual DNA contamination was quantified using real-time RT-qPCR and primers specific for the Drosophila rp49 gene ( Table 1 ). Those RNA samples showing threshold cycle (Cq) values ≥33 cycles were deemed to be free of DNA contamination. Microarray analysis Total RNA was made into cRNA using standard reagents (Affymetrix, Inc). Duplicate total RNA pools from two independently treated S2 cultures were taken for each sample, and resulting single-dye labeled cRNAs were hybridized to Drosophila Genome Arrays (Affymetrix, Inc). Arrays were washed with a custom array washer and scanned with an Affymetrix 3000 scanner. Cell intensity files were analyzed using the Rosetta Resolver algorithm (Rosetta Biosoftware) and comparisons were performed using Resolver's ratio ANOVA function.
Resolver ANOVA analysis is similar to standard ANOVA but uses two inputs, the expression measurement quantity and the estimated error of the measurement quantity. This additional input provides more reliable variance measurements, a necessity when the number of replicates is small ( Rajagopalan 2003 ). This error estimate also brings extra degrees of freedom to the analysis, allowing for fewer false positives and false negatives. Real-time RT-qPCR primer design Primers were designed based on D. melanogaster mRNA sequences obtained from FlyBase that were imported into Beacon Designer software (Premier Biosoft International), a program designed to generate primer pairs suitable for real-time RT-qPCR. The SYBR Green module with program setting ‘avoid template structure’ was chosen to limit primer sequences to regions of little secondary template structure. Primers were obtained from IDT (Integrated DNA Technologies) and their sequences are shown in Table 1 . Both reference and target primers exhibited comparable efficiencies as determined using a dilution series of target DNA. Primer efficiencies were determined from dilution curves using the formula E = 10^(-1/slope) ( Pfaffl 2001 ; Rasmussen 2001 ), with the slope determined by the iCycler iQ software ( Table 1 ). cDNA synthesis and real-time RT-qPCR First-strand cDNA synthesis was performed using the iScript cDNA synthesis kit according to the manufacturer's instructions (Bio-Rad Laboratories). Briefly, the reaction was performed with 1.0 μg total RNA in 15 μl RNase-free water, 4 μl 5X iScript reaction mix with a blend of oligo dT and random hexamer primers, and 1 μl iScript reverse transcriptase. The reaction conditions were 25°C for 5 m, 42°C for 30 m, and 85°C for 5 m, and the cDNA was stored at 4°C. Expression of mRNA was analyzed by real-time RT-qPCR using the iCycler iQ detection system (Bio-Rad Laboratories).
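The efficiency formula above, E = 10^(-1/slope), takes the slope from a linear fit of Cq against log10 template dilution. The following is an illustrative Python sketch, not the iCycler iQ software's implementation; the dilution series and Cq values are hypothetical.

```python
def primer_efficiency(log10_dilutions, cq_values):
    """Estimate PCR primer efficiency E = 10^(-1/slope) from a dilution series.

    The slope comes from an ordinary least-squares fit of Cq against
    log10(relative template amount); a fully efficient primer pair
    doubles its product each cycle (E = 2.00).
    """
    n = len(log10_dilutions)
    mean_x = sum(log10_dilutions) / n
    mean_y = sum(cq_values) / n
    sxx = sum((x - mean_x) ** 2 for x in log10_dilutions)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(log10_dilutions, cq_values))
    slope = sxy / sxx
    return 10 ** (-1.0 / slope)

# Hypothetical 10-fold dilution series: for a 100%-efficient reaction,
# Cq rises by ~3.32 cycles per 10-fold dilution (log2(10) ≈ 3.322).
dilutions = [0, -1, -2, -3]           # log10 of relative template amount
cqs = [18.0, 21.32, 24.64, 27.96]     # near-ideal Cq values
print(round(primer_efficiency(dilutions, cqs), 2))
```

With these near-ideal values the fitted slope is -3.32, giving E close to 2.00, i.e. approximately 100% efficient as required of the reference and target primers.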
Samples were run in triplicate in 25 μl reactions: 12.5 μl iQ SYBR Green Supermix (Bio-Rad Laboratories), 0.2 μM forward and reverse primer, and 11.5 μl of 1:10 diluted cDNA sample. The threshold cycle (Cq) is the PCR cycle at which the fluorescence of the PCR product exceeds an arbitrary threshold. The Cq of the target transcript in RNA from JH III-challenged S2 cells was compared with the target transcript Cq generated by RNA from S2 cells treated with MLA. Target gene abundance was normalized to three internal reference transcripts that were shown to be invariant using BestKeeper ( Pfaffl et al. 2004 ) and geNorm ( Vandesompele et al. 2002 ) software. The RER was calculated from the difference between the Cq values using the equation 2^-ΔΔCt as previously modified ( Rotenberg et al. 2006 ). PCR conditions were: 95° C for 3 m; 40 cycles of 95° C, 10 s and 50° C, 45 s; 1 cycle of 95° C, 1 m and 55° C, 1 m; followed by a dissociation curve with 80 cycles of 55° C, 10 s with a 0.5° C increase per cycle. To assure PCR accuracy, PCR reaction products were sequenced directly and compared to the expected target sequence. Statistical analysis of RER values was performed with GraphPad Prism software using the unpaired two-tailed t-test function (GraphPad Software, Inc).
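The comparative-threshold calculation described above can be sketched in Python. All Cq values below are hypothetical, and the sketch assumes comparable (near 100%) efficiencies for the target and reference assays, as verified for the primers used here.

```python
def relative_expression_ratio(target_cq, ref_cq, calib_target_cq, calib_ref_cq):
    """Comparative-threshold (2^-ΔΔCq) relative expression ratio.

    ΔCq  = target Cq - reference Cq   (normalization to the internal reference)
    ΔΔCq = ΔCq(sample) - ΔCq(calibrator)
    RER  = 2 ** -ΔΔCq, assuming ~100% primer efficiency for both assays.
    """
    d_sample = target_cq - ref_cq
    d_calib = calib_target_cq - calib_ref_cq
    return 2 ** -(d_sample - d_calib)

# Hypothetical Cq values: a JH III-treated sample against the MLA calibrator,
# both normalized to an rp49-like reference transcript.
rer = relative_expression_ratio(target_cq=24.0, ref_cq=18.0,
                                calib_target_cq=26.0, calib_ref_cq=18.0)
print(rer)  # ΔΔCq = -2, so RER = 4.0: a 4-fold up-regulation
```

Note that a lower Cq means more template, which is why a negative ΔΔCq corresponds to up-regulation relative to the calibrator.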
Results Analysis of JH III effect on genomic expression in Drosophila S2 cells. Drosophila microarray chips were challenged with RNA from three treatments of S2 cells: (10 R ) JH III treatment, MLA treatment, or no treatment (no-treatment control). Introduction of lipophilic compounds such as JH III or MLA to culture medium devoid of serum poses a dispersal problem. JH III and MLA are surface active and bind nonspecifically to hydrophobic surfaces ( Kramer et al. 1976 ; Giese et al. 1977 ). To overcome this problem, we used 0.1% BSA as a carrier molecule to reduce nonspecific binding in all treatments. MLA is a lipid with physical similarity to JH III but lacks hormonal activity. As shown in Figure 1 , MLA has a molecular structure comparable to JH III, containing two double bonds and an O-methyl ester. In preliminary experiments, MLA was used as a lipid control for JH I due to the related chemical structure and identical molecular weights (WG Goodman, unpublished data). MLA demonstrated no hormonal activity in a Manduca sexta bioassay. In addition, JH I but not MLA induced the expression of hemolymph juvenile hormone binding protein mRNA in M. sexta when analyzed by real-time RT-qPCR. MLA was found to have no effect on D. melanogaster eclosion success (JR Lindholm and WG Goodman, unpublished). In the present work, MLA was used to control for any potential effects on gene expression caused by the non-specific cellular metabolism of the JH III added to the S2 cells. Comparing genomic expression from JH III treated cells to control cells (Appendix 1 available online) resulted in numerous putative ORFs showing differences. The following criteria were used to identify potentially significant changes between the two treatments: differences in expression ≥ 2-fold and P-value ≤ 0.01.
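The selection criteria just stated (≥ 2-fold expression difference and P ≤ 0.01) can be sketched as a simple filter. This is an illustrative Python sketch, not the Rosetta Resolver pipeline; FBgn0099999 and FBgn0088888 are invented placeholder IDs, and all fold-change and P values are hypothetical.

```python
def significant_orfs(records, min_fold=2.0, max_p=0.01):
    """Filter microarray ORF records by fold-change and P-value criteria.

    Each record is (orf_id, fold_change, p_value). A fold_change below 1
    denotes down-regulation, so the magnitude test uses the larger of
    FC and 1/FC.
    """
    hits = []
    for orf_id, fc, p in records:
        magnitude = fc if fc >= 1.0 else 1.0 / fc
        if magnitude >= min_fold and p <= max_p:
            direction = "up" if fc >= 1.0 else "down"
            hits.append((orf_id, direction))
    return hits

# Hypothetical expression ratios (JH III vs. MLA) for four ORFs.
data = [
    ("FBgn0033102", 3.1, 0.002),   # Epac-like: >3-fold up
    ("FBgn0000277", 0.4, 0.005),   # 2.5-fold down
    ("FBgn0099999", 1.5, 0.001),   # below the fold-change cutoff
    ("FBgn0088888", 2.4, 0.200),   # fails the P-value cutoff
]
print(significant_orfs(data))  # [('FBgn0033102', 'up'), ('FBgn0000277', 'down')]
```

Requiring both a fold-change and a P-value threshold is the "threshold of reliability" approach discussed earlier, which reduces, but does not eliminate, false positives and negatives.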
Using these criteria, 120 of 14,010 (0.86%) putative Drosophila genes demonstrated differences in expression, with 14 ORFs showing increased expression and 106 ORFs reduced expression (Appendix 2 available online). Comparing MLA-treated S2 cells to the no-treatment control (Appendix 3 available online) revealed that 63 of 14,010 (0.45%) ORFs displayed significant changes, including 10 up-regulated genes and 53 down-regulated genes ( Table 2 ). Comparing RNA from JH III-treated S2 cells to MLA-treated cells (Appendix 4 available online) reduced the number of ORFs demonstrating significant differences, as only 32 of 14,010 (0.23%) putative Drosophila genes exhibited significant differences, with 13 genes showing a > 2-fold increase and 19 ORFs showing decreased expression ( Table 3 ). Most of the JH III up-regulated genes relative to MLA were loci of unknown function ( Table 3 ). However, Epac (FBgn0033102), a gene encoding a guanine nucleotide exchange factor of the Rap1 small GTPase, showed a >3-fold increase in expression. FBgn0036313 (a serine/threonine kinase) was induced ∼3-fold, as were several ORFs of unknown function (FBgn0040887, FBgn0037057, and FBgn0040603). Among the down-regulated genes were heat shock protein 70 (FBgn0023529), a transcription factor (FBgn0039923), and cecropin A2 (FBgn0000277). The Drosophila 18S rRNA also showed a large (5- to 15-fold) apparent down-regulation by JH III. Verification of microarray analysis To verify the expression levels derived from the microarray analyses, Drosophila S2 cells were treated with (10 R ) JH III (250 ng/ml) or MLA (250 ng/ml) for 4 h and analyzed for target transcript RER using real-time RT-qPCR. Suitable internal reference gene primers were chosen based on genes that were unaffected by the addition of JH III in the microarray analyses (Appendix 4 available online).
These were FBgn0023529, which has ATP binding activity and is involved in response to stress; FBgn0034354, which is a glutathione transferase involved in a toxin defense response; and FBgn0002626 (ribosomal protein 49/RpL32 ), which is one of the most commonly employed standards used to normalize gene expression in Drosophila . Two criteria were used for reference gene characterization: i) primer efficiencies close to 2.00 (100% efficient); and ii) stable expression in total RNA from MLA-treated and JH III-treated S2 cells. Both reference and target primers exhibited comparable efficiencies as determined using a dilution series of target DNA derived from D. melanogaster ( Table 1 ). Reference transcript stability was determined using the BestKeeper ( Pfaffl et al. 2004 ) and geNorm ( Vandesompele et al. 2002 ) programs. BestKeeper is an Excel-based tool designed to determine the correlation between the raw values of real-time RT-qPCR for a particular internal reference gene of interest and the geometric mean (the BestKeeper Index ) of all of the reference genes tested under various treatments. The program performs pairwise Pearson correlations between the Cq values of a candidate gene and the BestKeeper Index and reports the measure of the strength of the relationship as an r-value. Ultimately, a strong and significant ( P < 0.05) correlation (r-value) between the index and the reference RNA candidate determines its stability. The BestKeeper Index values were determined from a data set consisting of Cq values of potential reference transcripts from both treatments (i.e. multiple RNA samples from MLA-treated and JH III-treated S2 cells). The stability of the rp49/RpL32 , FBgn0023529, and FBgn0034354 reference transcripts (as defined by their respective primer sets) was high (0.96 > r > 0.73).
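A minimal sketch of this BestKeeper-style stability check: the index is the per-sample geometric mean of the candidate reference genes' Cq values, and each candidate is then Pearson-correlated against that index. All Cq values below are hypothetical, and the real BestKeeper tool additionally reports descriptive statistics and P-values not reproduced here.

```python
import math

def bestkeeper_index(cq_by_gene):
    """Per-sample geometric mean of Cq across all candidate reference genes."""
    n_samples = len(next(iter(cq_by_gene.values())))
    index = []
    for i in range(n_samples):
        logs = [math.log(cqs[i]) for cqs in cq_by_gene.values()]
        index.append(math.exp(sum(logs) / len(logs)))
    return index

def pearson_r(xs, ys):
    """Pearson correlation between a candidate gene's Cq values and the index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical Cq values for three candidate reference genes over four samples.
cq = {
    "rp49":        [18.1, 18.3, 18.0, 17.6],
    "FBgn0023529": [21.0, 21.4, 20.9, 20.5],
    "FBgn0034354": [24.2, 23.5, 24.4, 23.0],
}
index = bestkeeper_index(cq)
for gene, values in cq.items():
    print(gene, round(pearson_r(values, index), 2))
```

A candidate whose Cq values track the index closely (high r) is considered stable; with these made-up data, FBgn0034354 tracks the index least closely, mirroring the pattern reported in the text.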
However, the FBgn0034354 transcript consistently exhibited the lowest correlation to the BestKeeper Index and therefore was the least consistent of these three reference RNAs. The raw expression data from each internal reference gene were also analyzed using geNorm software ( Vandesompele et al. 2002 ). Average expression stability (M) for the reference genes was less than 0.14, which indicates a high degree of constancy under our experimental conditions. Further, geNorm analysis of the optimal number of potential internal reference genes suggested that three reference genes were appropriate for data normalization. We empirically confirmed the stability and utility of these reference transcripts by calculating the RER of several JH III induced genes individually with all three reference genes and found no statistically significant difference in the RERs (data not shown). The RER was calculated for several of the loci that were significantly changed upon treatment with JH III ( Table 3 ). For this analysis, RNA was isolated from three independent (10 R ) JH III- or MLA-treated cell cultures. Target gene RNA abundances were normalized to the rp49/RpL32 reference transcript. The RER was calculated using control (MLA treatment) RNAs as a calibrator. The mean of normalized target gene abundance from all three MLA-treated samples was calculated and this value was designated as the ‘calibrator’. Next, the individual target gene abundances from both the JH III- and MLA-treated samples were divided by the calibrator. This allowed the calculation of a RER for each sample replication, which was used to compare the means of the treatment RERs statistically. Relative to the MLA-treated cells, the RERs of all three JH III up-regulated transcripts tested, Epac and two loci of unknown function (FBgn0040887 and FBgn0037057), were confirmed to be significantly increased ( Table 4 ).
In contrast, only one of six genes (FBgn0034199) indicated to be down-regulated by the microarray data was confirmed to be significantly reduced by real-time RT-qPCR ( Table 4 ). Heat shock protein 70 (FBgn0023529) was predicted from the microarray data to be reduced 2.2-fold by JH III action but was found to be increased 1.35-fold by real-time RT-qPCR analysis. The remaining four predicted down-regulated transcripts were not found to be statistically different (P > 0.05) from the MLA control by a two-tailed t-test ( Table 4 ).
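The unpaired two-tailed t-test used to compare treatment RERs can be sketched without GraphPad. This sketch computes only the Student's t statistic with pooled variance; the RER triplicates are hypothetical, and in practice the resulting t would be compared with a two-tailed critical value for n_a + n_b - 2 degrees of freedom to obtain significance.

```python
import math

def unpaired_t(a, b):
    """Student's unpaired two-sample t statistic (pooled variance),
    as used to compare RERs between JH III- and MLA-treated replicates."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical RER triplicates: JH III-treated samples against the MLA
# calibrator, whose mean RER is ~1.0 by construction.
jh = [3.2, 2.9, 3.4]
mla = [1.0, 0.9, 1.1]
t = unpaired_t(jh, mla)
print(round(t, 1))  # compare with the two-tailed critical value t(0.05, df=4) ≈ 2.78
```

Because the MLA replicates are themselves divided by the calibrator mean, the control RERs scatter around 1.0, and the test asks whether the treatment mean departs from that baseline.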
Discussion Microarray analyses indicated that the expression of a number of genes was modified positively or negatively in response to a JH III challenge of Drosophila S2 cells (Appendix 1); however, when MLA was tested under identical conditions, expression levels for many genes displayed a similar profile for both JH III and MLA (Appendices 1 and 3). Less than 0.5% of the genes examined by microarray analyses demonstrated a highly significant altered level of expression in the presence of JH III but not in the presence of MLA (Appendix 3, Table 4 ). The up-regulated genes included Epac , several loci of unknown function, a serine/threonine kinase, branchless , and phosphoenolpyruvate carboxykinase . Down-regulated genes included several genes of unknown function, a pre-mRNA splicing factor, cecropin A2 and 18S rRNA . Despite the advantages of microarray technology, real-time RT-qPCR remains the most accurate method to analyze mRNA expression and to verify key relationships identified by microarray analysis ( Hembruff et al. 2005 ). For real-time RT-qPCR to quantitatively assess expression levels of target mRNAs, the selection of appropriate internal reference genes is critical ( Vandesompele et al. 2002 ; Pfaffl et al. 2004 ; Bustin et al. 2009 ). It is becoming increasingly evident that reliance on a single reference transcript may lead to significant errors in the analysis of target gene expression ( Vandesompele et al. 2002 ; Pfaffl et al. 2004 ). Three internal reference RNAs ( rp49/RpL32 [FBgn0034614], FBgn0034354, FBgn0023529) were selected due to their stable expression in S2 cells treated with either JH III or MLA in the microarray analysis. Using three reference genes, as little as a 17% difference in the mean transcript relative expression ratios was shown to be statistically significant ( Rotenberg et al. 2006 ).
All three up-regulated genes that we analyzed ( Epac , FBgn0040887 and FBgn0037057) were confirmed to be up-regulated by real-time RT-qPCR ( Table 4 ). However, only one of six genes predicted to be down-regulated by the addition of JH III was confirmed to be statistically reduced by real-time RT-qPCR. This result illustrates the absolute need to confirm microarray data by real-time RT-qPCR analysis before time and resources are expended on further analysis ( Morey et al. 2006 ). The relative inability of microarrays to identify down-regulated genes has been noted previously ( Beckman et al. 2004 ; Morey et al. 2006 ). This phenomenon relates to the decreased reliability and increased variability in the detection of spots on the microarray exhibiting reduced fluorescence ( Beckman et al. 2004 ). An important consideration with any expression analysis is the choice of the conditions that will be used as the control for the microarray or real-time RT-qPCR analysis. In this study, we used MLA, a lipid physically similar to JH III but without known hormonal activity, as our control treatment of S2 cells. Another recent microarray analysis of genes that are JH III induced in both Drosophila and honey bee used dimethyl sulfoxide (DMSO), the carrier for JH III in the experiments, as a control treatment ( Li et al. 2007 ). It has been previously shown that DMSO significantly increases juvenoid activity when used as a solvent in a bioassay on Dysdercus cingulatus ( Sláma 1974 ). A search of online data (Appendices 1, 2, and 3) revealed that all of the genes described in Li et al. ( 2007 ) as JH III inducible were also induced by MLA in our microarray analysis (Appendix 5 available online). This raises the possibility that the induction of these loci may be influenced by the metabolism of the JH III lipid backbone. Due to the increased expression in the MLA control, this set of genes was not identified as JH III inducible (Appendix 5 available online).
Since the raw microarray data and the conditions used for microarray analysis in Li et al. ( 2007 ) have not yet been published, the reason for the failure of the previous analysis to identify the genes that were found to be JH III inducible in S2 cells ( Table 3 , Table 4 ) is not evident. It may be that DMSO treatment simulates JH III induction to some extent in S2 cells, thereby masking the induction, or that this set of genes was induced in S2 cells but not in honey bee by JH III treatment (induction in both insects was a criterion for analysis in the Li et al. 2007 paper). Preliminary real-time RT-qPCR data showing the specific induction of Epac by JH III in both S2 cells and Drosophila third instars ( Wang et al. 2009 ) confirm this aspect of our microarray analysis. In summary, this work details the analysis by microarray of genome-wide gene expression alterations induced by treatment of Drosophila S2 cells with JH III. The comparison of JH III treatment to treatment with MLA, a structurally similar but hormonally inactive lipid, revealed that only 32 of 14,010 loci responded differentially by microarray analysis. Up-regulated genes were confirmed by real-time RT-qPCR, but most predicted down-regulated genes failed confirmation. This indicates that a remarkably small number of genes were specifically affected by JH III. The most intriguing gene confirmed to increase expression following JH III treatment was Epac , which demonstrated highly significant up-regulation in the presence of JH III (≥3-fold) but was refractory to MLA. Epac , an exchange protein directly activated by cAMP, is a direct target of cAMP and a guanine-nucleotide exchange factor for the small GTPase Rap1 ( Bos 2005 ). This suggests that induction of Epac expression may be a major component of JH III activity in insect development.
Associate Editor: Zhijian Jake Tu was editor of this paper. Abstract A microchip array encompassing probes for 14,010 genes of Drosophila melanogaster was used to analyze the effect of juvenile hormone (JH) on genome-wide gene expression. JH is a member of a group of insect hormones involved in regulating larval development and adult reproductive processes. Total RNA was isolated from Drosophila S2 cells after 4 hours of treatment with 250 ng/ml (10 R ) JH III or 250 ng/ml methyl linoleate. A collection of 32 known or putative genes demonstrated a significant change with JH III treatment ( r > 2.0, P ≤ 0.005). Of these, the abundance of 13 transcripts was significantly increased and 19 decreased. The expression of a subset of these loci was analyzed by real-time quantitative reverse transcription polymerase chain reaction (RT-qPCR). Three loci that exhibited constant expression in the presence and absence of JH III ( RP49 [FBgn0002626], FBgn0023529, and FBgn0034354) were evaluated using BestKeeper and geNorm software and found to be reliable invariant reference transcripts for real-time RT-qPCR analysis. Increased expression in the presence of JH III was confirmed by real-time RT-qPCR analysis. However, only one of five loci that exhibited reduced expression on microarrays could be confirmed as significantly reduced (P ≤ 0.05). Among the confirmed JH III up-regulated genes were two loci of unknown function (FBgn0040887 and FBgn0037057) and Epac , an exchange protein directly activated by cyclic AMP and a guanine nucleotide exchange factor for the Rap1 small GTPase.
Acknowledgements This work was supported in part by the Wisconsin Agricultural Experiment Station (WIS0122) (WGG) and the Alex and Lillian Fier Distinguished Graduate Fellowship (JW). Abbreviations JH, juvenile hormone; Epac, exchange protein directly activated by cyclic AMP; MLA, methyl linoleate; ORF, open reading frame; RT-qPCR, quantitative reverse transcription polymerase chain reaction; RER, relative expression ratio
CC BY
J Insect Sci. 2010 Jun 15; 10:66
PMC3014816
20569133
Introduction Using hymenopteran parasitoids as biological control agents in any integrated pest management program largely depends on how well the natural history is known for the particular group of species. But, despite the importance of the subfamily Opiinae in biological control strategies, the morphological changes it experiences during the immature stages have been poorly studied because preimaginal development takes place inside the host. Diachasmimorpha longicaudata (Ashmead) (Hymenoptera: Braconidae) is an obligate endoparasitoid of third instar larvae of fruit flies (Diptera: Tephritidae), although some studies indicate that it can occasionally develop inside other Dipterans, such as Musca domestica ( Terán López 1983 ). It is native to Malaysia, India, New Britain, Borneo, Saipan, and the Philippine Islands ( Bess 1961 ; Clausen et al. 1965 ). D. longicaudata is the most important parasitoid species that is used as part of integrated pest management programs against fruit flies of the genera Bactrocera, Anastrepha and Ceratitis, comprising some of the world's most widespread and damaging pests of fruticulture and horticulture and causing enormous economic losses ( Cancino Díaz et al. 1993 ; Malavasi et al. 1994 ; Ovruski et al. 1999 ). This parasitoid has been released and successfully established in Hawaii, Australia, Fiji, Mexico, Costa Rica, and Trinidad ( Clausen 1978 ; Wharton et al. 1978 ; Wharton 1989 ). However, its biology is still not understood well enough to analyze its impact on exotic ecosystems and to improve its production in mass rearing facilities. D. longicaudata females usually lay only one egg inside the host larva, but when hosts are scarce, and under laboratory conditions, superparasitism occurs as more than one egg can be deposited in host larvae ( Greany et al. 1976 ). However, only one individual from each puparium reaches the adult stage. A previous study carried out by Montoya et al. ( 2000 ) on D. 
longicaudata reared in Anastrepha ludens (Loew) (Diptera: Tephritidae) indicated that there was a highly positive correlation between the number of scars per puparium and the number of first instar parasitoids per puparium. Adult D. longicaudata usually emerge from fruit fly puparia a few days after the emergence of adult flies from unparasitized puparia and, depending on temperature and humidity during development, parasitoid males emerge between two and three days before parasitoid females ( Hoy 1994 ; Bautista et al. 1997 ). A preliminary study showed that male parasitoids emerge around 17 days post parasitism (DPP) when reared at 25° C and 75% relative humidity ( Carabajal Paladino et al. 2005 ). The aim of this study was to thoroughly describe the developmental time, survival rates and morphology of the immature stages of D. longicaudata reared in Ceratitis capitata (Wiedemann) (Diptera: Tephritidae) larvae under controlled environmental conditions and to compare the results with previous studies of related Opiinae species ( Pemberton et al. 1918 ; Willard 1920 ; Biliotti et al. 1959 ; N'Guetta 1990 ; Hurtrel et al. 2001 ; Rocha et al. 2004 ). This information contributes to the ecological, biological and genetic characterization of the species to optimize the use of D. longicaudata as a biological control agent.
Materials and Methods Experimental insects Adult D. longicaudata were imported from Mexico to Tucumán (Argentina) in 1998 and were introduced to our laboratory in 2001 (SENASA, exp n° 14054/98). They were maintained in glass flasks with water and honey. Larvae of C. capitata were reared in a larval medium ( Terán 1977 ) and offered to adult D. longicaudata females in plastic Petri dishes (5 cm in diameter and 1 cm in depth) covered with voile fabric for a period of 4 to 6 hours. The larvae were then transferred into a plastic tray with fresh artificial larval medium, where they completed their development. The trays were placed over a thin layer of vermiculite, which acted as a pupation medium. Both larvae and pupae were maintained in an incubator at 25° C, 85% RH and an 18:6 L:D photoperiod. Experimental procedures To follow the development of D. longicaudata, third instar larvae of C. capitata were exposed to 7-day-old adult females of D. longicaudata, according to the procedure described above, for a period of 4 hours. This short period was used for a better synchronization of stage duration. The exposed hosts were then maintained under standard rearing conditions. The development of D. longicaudata was followed for 20 days, which was considered long enough to fully cover the development of all individuals. It is not possible to follow individual eggs throughout their development due to the endoparasitic nature of D. longicaudata. To study the immature stages of development of the parasitoid, 20 homogeneous samples were obtained from the exposed material. During the first and second days, the hosts were still larvae and tended to cluster in the rearing tray, so it was difficult to obtain enough homogeneous samples to dissect. To overcome this problem, three spoonfuls of the larval medium containing the larvae previously exposed to the parasitoids were randomly sampled during the first two days, and every host larva was recovered. By the third day all of the C.
capitata larvae had reached the pupal stage. Pupae were recovered from the pupating medium and divided into 18 groups, all of which were kept in an incubator under the study conditions. Every 24 hours, one of the groups was dissected under a stereoscopic binocular microscope, and the parasitoids were photographed. For every parasitoid found, the stage of development and other characteristics (presence of fungus, color, etc.) were recorded. The recorded data were plotted to obtain the developmental curves for each instar. The duration of each instar was estimated as the time between its developmental curve and the next one, at a cumulative frequency of 0.5. Superparasitism was recorded by counting the number of oviposition scars (marks left in the host's cuticle after being pierced by the female parasitoid with its ovipositor) and the number of parasitoids found inside the puparium on the second and third days after parasitoid attack. Statistical analysis The correlation between the number of scars and the number of parasitoid larvae per host puparium was analyzed using the non-parametric Spearman test, since these variables follow a Poisson distribution. The independence of the variables was studied by a χ2 test, including the data from the pupae with 1 to 6 scars and 0 to 3 parasitoids, which comprised the bulk of the cases (95.3%). A “Difference test” was performed in order to compare the statistical significance of the differences between the correlation coefficient obtained in the present study and the one presented by Montoya et al. ( 2000 ), where D. longicaudata was reared in A. ludens.
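The two numerical steps above, estimating a stage's duration from the 0.5-frequency crossing of its developmental curve, and comparing two correlation coefficients, can be sketched as follows. This is a minimal illustration, not the authors' code: the developmental-curve values are hypothetical, and the "Difference test" is assumed here to be a Fisher z-test for two independent correlations; only the r and n values come from the text.

```python
# Illustrative sketch of two analysis steps from the Methods.
# Assumptions: the curve values below are invented for illustration;
# the "Difference test" is interpreted as a Fisher z-test (an assumption).
import math
import numpy as np

# Each stage's developmental curve: the cumulative proportion of dissected
# individuals that had reached that stage by each sampling day.
days = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
egg_curve = np.array([0.0, 0.3, 0.8, 1.0, 1.0, 1.0])            # hypothetical
first_instar_curve = np.array([0.0, 0.0, 0.2, 0.7, 1.0, 1.0])   # hypothetical

def crossing_day(days, curve, level=0.5):
    """Day at which a cumulative curve reaches `level`, by linear interpolation."""
    return float(np.interp(level, curve, days))

# Stage duration = gap between this curve's 0.5-crossing and the next one's.
egg_duration = crossing_day(days, first_instar_curve) - crossing_day(days, egg_curve)

def correlation_difference(r1, n1, r2, n2):
    """Fisher z-test for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return z, p

# r and n values reported in the Results for this study and Montoya et al. (2000)
z, p = correlation_difference(0.402, 297, 0.9492, 100)
print(f"egg duration = {egg_duration:.2f} days, |z| = {abs(z):.2f}, p = {p:.2g}")
```

With the reported sample sizes, the gap between r = 0.402 and r = 0.9492 yields a very large |z| and a p-value far below 0.001, consistent with the significant difference reported in the Results.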
Results Developmental analysis Between 665 and 853 larvae or pupae of C. capitata were dissected daily, and 119–212 parasitoids were found and examined each day. Cases of superparasitism were not included in the developmental analysis. Overall parasitism was 20%. The number of individuals found daily in each stage, either alive or dead, is summarized in Table 1 . The complete immature development of this species, from egg to the emergence of the adults, took about 16 days (at 25° C, 85% RH and 18:6 L:D photoperiod). The duration of each stage of development was as follows: egg = 1.5 days, first instar larva = 1.5 days, second instar larva = 2.7 days, third instar larva = 1.5 days, prepupa = 2.5 days, pupa = 3 days, and pharate adult = 3 days ( Table 1 , Figure 1 ). Morphology Egg ( Figure 2a ): One day after parasitization, eggs were slightly cylindrical, translucent, and very difficult to visualize amidst the host tissue. The few eggs found on the second day were brighter and more swollen. Larva: First instar larva ( Figures 2b, 2c ): The head was large, highly chitinized, and bore a pair of sickle-like mandibles that could usually be seen through the host larval cuticle ( Figure 3a ). The body was translucent and segmented, but became whitish as the digestive canal gradually filled and became swollen with globules of fat. Second instar larva ( Figure 2d ): When the parasitoid larva reached the second larval instar, the molted skin was easily dissected from the fly puparium. The head could not be differentiated from the other body segments, and the mandibles were translucent and difficult to visualize ( Figure 2d , arrowhead). The body was entirely glabrous, and the segments were less distinguishable than in the previous instar. Approximately 10% of the dissected individuals remained in the second-third larval instar from the sixth day after parasitization, but they were mostly dead and dehydrated at the bottom of the puparium ( Figure 3b ).
Third instar larva ( Figure 2e ): The parasitoid was much bigger and filled almost the entire host puparium. Pointed mandibles with brownish chitinizations at the tips and bases were distinguished on the head ( Figure 2f ). The body was yellowish and well-segmented, and large white cells were clearly visible under the integument. The larva could move from side to side and change its orientation inside the puparium. A dark oval meconium could be seen through the cuticle at the caudal end. This meconium contained the digestive waste and was eliminated during adult emergence ( Cancino Díaz 1998 ). Prepupa ( Figure 2g ): The prepupa was characterized by a lack of mobility, compared to the third larval instar, and the beginning of a reddish pigmentation of the eyes ( Figure 2g , arrowhead). The development of the pupa could be seen through the cuticle as this instar proceeded. No visible changes in the mandibles' morphology were detected. Pupa ( Figures 2h, 2i ): The shape of the pupa was similar to that of an adult, with a white body and fully pigmented red eyes. As pupation progressed, both the eyes and the body became darker. Sexual dimorphism was very noticeable. Females presented a well-developed ovipositor bent to the dorsal side ( Figure 2i , arrow), and males had valves that corresponded to the external part of the reproductive system at the ventral-caudal region. Furthermore, the antennae were noticeably longer in males than in females ( Figures 2h and 2i , arrowheads). Pharate adult ( Figures 2j, 2k ): The only difference from the pupa was that the body started to acquire a brownish pigmentation as the sclerotization process took place. Superparasitism analysis A total of 297 pupae were analyzed, and the results are summarized in Table 2 .
When the number of scars was impossible to count, the situation was recorded as “present,” and when it was difficult to determine the presence of oviposition, it was recorded as “uncertain.” The number of oviposition scars ranged from 0 to 12, with the mode at one scar (94 cases, 31%). Host pupae with more than six scars were very infrequent (9 cases, 3%). The number of parasitoid larvae inside each puparium ranged from 0 to 9. Superparasitism was found in 19.87% of the analyzed host pupae. The correlation analysis between the number of scars and the number of parasitoid larvae per host puparium showed a significant positive correlation (Spearman R = 0.402, p < 0.001, n = 297). The results of the χ 2 test confirmed the non-random distribution of these variables (p < 0.001). The Difference test performed between the current data and the data obtained by Montoya et al. ( 2000 ) (r = 0.9492, p < 0.001, n = 100) showed a significant difference between the two studies, which led to the conclusion that the relationship between the number of scars and the number of parasitoid larvae per host puparium is weaker when D. longicaudata is reared in C. capitata than when it is reared in A. ludens.
Discussion In the present study the morphological changes occurring to D. longicaudata during its development inside the host's puparium were analyzed, along with the duration of each stage under controlled environmental conditions. The preimaginal development of this species took about 16 days (at 25° C, 85% RH and an 18:6 L:D photoperiod) when reared in C. capitata. The size and duration of parasitoid developmental stages varied with the size, age and quality of the host in which they were reared ( Lawrence et al. 1976 ; Lawrence 1990 ) as well as the environmental conditions under which they were kept. Hurtrel et al. ( 2001 ) analyzed the developmental time of Diachasmimorpha tryoni (Cameron) (Hymenoptera: Braconidae) reared in C. capitata larvae at 25° C, 85 ± 15% RH and a 12:12 L:D photoperiod, and found that male and female development took 19.80 days and 21.47 days, respectively. Those times were longer than those observed for D. longicaudata, possibly because of interspecific differences, although some effect of the relative humidity or the photoperiod cannot be ruled out. These observations of the morphology and characteristics of the immature developmental stages in D. longicaudata are in accordance with previous reports on related species of Opiinae parasitoids, such as D. tryoni, D. fullawayi, Opius humilis, Psyttalia concolor, P. fletcheri, and Fopius arisanus ( Pemberton et al. 1918 ; Willard 1920 ; Biliotti et al. 1959 ; N'Guetta 1990 ; Hurtrel et al. 2001 ; Rocha et al. 2004 ). According to Chapman ( 1998 ), an insect is referred to as a prepupa during the period of quiescence that occurs before the ecdysis to a pupa. Other authors also differentiate the pharate adult stage ( Aluja et al. 1998 ; Kitthawee et al. 1999 ).
Although they may not be considered true developmental stages, as no real molt occurs before them, they have specific morphological traits that allow their identification, and they have proven to be very useful for other types of studies, such as cytogenetic analysis ( Kitthawee et al. 1999 ). Therefore, seven preimaginal stages were considered in D. longicaudata: egg, three larval instars, prepupa, pupa, and pharate adult. First instar D. longicaudata larvae were found in C. capitata larvae, suggesting that the egg may hatch while the host is still in its actively feeding larval stage. This observation was also made by Pemberton and Willard ( 1918 ) for D. tryoni, D. fullawayi and O. humilis. The parasitized host larva continues to feed and develops to maturity forming a perfect puparium, but then the complete histolysis of the larval tissues occurs within the puparium. Hence, its content is a liquid mass containing the rapidly developing parasite larva ( Pemberton et al. 1918 ). During the development of an endoparasitic insect, various metabolic and endocrinological changes occur. Some parasitoid species regulate host development to satisfy their own physiological requirements ( Vinson et al. 1980 ; Lawrence 1982 ). However, it is also known that, sometimes, the development of the endoparasitoid is tied to the endocrine events associated with the host metamorphosis, while others may exhibit a more facultative synchronization between their development and that of their host ( Lawrence 1986 ). When D. longicaudata is reared in Anastrepha suspensa larvae, it never molts until the onset of larval-pupal apolysis of its host, indicating that the parasite utilizes endocrine signals from the host to mark its change in nutritional quality ( Lawrence 1982 , 1986 ).
The temperature and humidity of the rearing conditions also affect parasitoid development: indirectly, by accelerating or slowing the developmental rate of its host; and directly, by affecting its own metabolic rate during the last stages, when the host has already been consumed. Under the stereoscopic binocular microscope, the second and third instar larvae, previously described by other authors, were impossible to distinguish. According to Pemberton and Willard ( 1918 ), the only differences between the second and the third larval stages are the molt of the mandibles and an increase in size. The mandible molt was not observed, and the difference in size was not reliable enough since parasitoid larval size depends directly on a number of factors, including the size and nutritional state of the larval host when it was parasitized, environmental conditions, whether the parasitoid is diapausing (diapausing individuals are usually smaller than nondiapausing ones), etc. Furthermore, according to the results, the short period between the first and third larval instars does not seem long enough for two prolonged larval instars to occur. For practical purposes, it would be better to consider the second and third larval instars as one category (second instar larvae in the present work), as has been suggested for other parasitoid species such as D. tryoni and F. arisanus ( Hurtrel et al. 2001 ; Rocha et al. 2004 ). The live individuals in early stages of development regularly found from the fourth day after parasitization onward can be explained by the presence of diapausing parasitoids. It has been shown that D. longicaudata has a diapause period during the fourth larval instar or during the prepupa, which depends on the temperature, humidity, photoperiod, host and host plant ( Clausen et al. 1965 ; Ashley et al. 1976 ; Aluja et al. 1998 ; Carvalho 2005 ). This study shows that D.
longicaudata males and females have different developmental rates, as can be observed from the differences in pigmentation found between individuals of both sexes of the same age. Male pupae could be identified one day before female pupae, and adult males emerged from their puparium two days before females. As it is difficult to differentiate the two sexes before the pupa stage (at least with the methodology used in this study), it is not possible to confirm whether the eggs hatch at different times or if males accelerate or females delay their development at a more advanced stage. The observation that the sex ratio is biased toward females is currently under investigation. As no normally developed fly was found in parasitized puparia, it was interesting to find perfectly formed C. capitata pupae inside puparia with oviposition scars (113 cases, 38.05%), which are usually used as a sign of parasitism. Although a significant correlation was found between the number of oviposition scars and the number of parasitoids found inside the puparium, the former does not directly indicate the number of parasitoids that will be found. These facts will have to be taken into account in further studies of superparasitism and parasitism rates, among others. It was common to observe superparasitized host pupae. As stated in the introduction, only one individual is able to complete development. Many theories have been suggested to explain the process that affects the survival of the surplus larvae. They include cannibalism, physical attacks between larvae followed by encapsulation of the injured one, and physiological suppression, where the level of oxygen inside the host or chemical compounds secreted by the older larva do not allow the development of the surplus larvae ( Fisher 1963 ; Lawrence 1988b ; Vinson et al. 1998 ). Lawrence ( 1988a ) studied the interactions between first instar larvae of D.
longicaudata in vivo and in vitro, reporting that both physical attack and physiological suppression occur in this species. The present observations agree only with the latter, as no signs of physical injury were detected, and while one larva was always found to be in an advanced first or second-third instar, the others remained intact in the early first instar and eventually died. The same observation was made by Montoya et al. ( 2000 ) when studying superparasitism in D. longicaudata reared in A. ludens. The description of the changes that take place inside the puparium during the development of D. longicaudata, as well as other characteristics found during the dissections, contributes to a better knowledge of the biology of the species. All this information will be useful for determining when releases should be made so as to minimize the time the parasitoid must be kept under rearing conditions, and for estimating the intrinsic rates of population increase, which is important, for example, to establish the release methods in control programs, whether inoculative, seasonal inoculative or inundative ( Lewontin 1965 ; van Lenteren 1986 ). This information will also provide the necessary knowledge for cytogenetic studies, for the design of effective quality control tests for this natural enemy, and for studies related to the differential impact of radiation or other mutagenic sources during early stages of development.
The morphological changes experienced during the immature stages of the solitary parasitoid Diachasmimorpha longicaudata (Ashmead) (Hymenoptera: Braconidae: Opiinae) were studied. This natural enemy of several species of tephritid fruit flies is widely used in biological control strategies. Immature stages are poorly understood in endoparasitoids because they develop within the host. In the present work, developmental processes are described for this species, reared in Ceratitis capitata (Wiedemann) (Diptera: Tephritidae) larvae under controlled environmental conditions. At 25° C, 85% RH, and with an 18:6 L:D photoperiod, preimaginal development takes about 16 days. Seven preimaginal stages can be described: egg, three larval instars, prepupa, pupa, and pharate adult. Superparasitism was found in 20% of the host pupae, and the number of oviposition scars was positively correlated with the number of parasitoid larvae per host puparium. The results are compared and discussed with previous studies on related species.
Acknowledgements We thank Clara Liendo and Fabián Milla for helping with the handling of the biological material. We would also like to thank Mariana Viscarret and Diego Segura for helping with the writing of this work. LZCP and AGP are fellows from CONICET, Argentina. Grant PICTO 12909 from FONCyT, ANPCyT, Argentina to JLC and X164 from UBA, Argentina to AGP are acknowledged. All the experiments in this study comply with the current laws of Argentina.
CC BY
no
2022-01-12 16:13:47
J Insect Sci. 2010 Jun 6; 10:56
oa_package/be/68/PMC3014816.tar.gz
PMC3014817
21209811
1. Introduction Leiomyomas, also known as fibroids or fibromas, represent the most common uterine neoplasm, occurring in 20–30% of women between the ages of 35 and 50 [ 1 – 4 ]. However, these benign tumors are extremely rare in women under the age of 20 [ 5 – 11 ]. Accurate detection, characterization and localization of uterine leiomyomas are important in these patients. MR imaging is considered the examination of choice for the detection and localization of uterus fibroids [ 1 – 4 , 12 , 13 ]. Uterine leiomyomas represent an incidental finding on CT examination, usually performed for a variety of other reasons [ 4 , 14 ]. We present a case of a 16-year-old girl with fibromatous uterus, evaluated with multidetector CT and MR imaging examination. To our knowledge, this is the first report of a uterus with multiple fibroids in an adolescent girl in the English literature, although there are a few reports of solitary uterus leiomyomas in this age population [ 5 – 11 ]. The value of preoperative imaging evaluation in these patients is discussed.
3. Discussion Uterus leiomyomas are extremely uncommon in the paediatric and adolescent population [ 5 – 11 ]. Approximately twelve cases of uterine leiomyomas among teenagers under the age of 18 years have been reported in the English language literature [ 5 – 11 ]. In all these cases, a solitary, symptomatic, large uterus fibroid was described. Our case is one of the first reporting a uterus with multiple fibroids in an adolescent girl. Estrogens and progesterone play an important role in the development of these neoplasms [ 1 , 5 – 8 , 15 ]. Uterus leiomyomas are estrogen-dependent tumors, and exogenous estrogen, obesity and pregnancy usually influence their growth [ 1 , 5 – 8 , 15 ]. Pregnancy and obesity were excluded in our patient, as well as any history of administration of pharmaceutical agents. A genetic component in the pathogenesis of uterine leiomyomas has also been strongly suggested [ 16 , 17 ]. Inheritance may play an important role, as indicated by family and twin-pair studies, although no positive family history was reported in this patient. Cytogenetic abnormalities involving chromosomes 6, 7, 12, and 14 have been reported in uterus fibroids with high frequency, although relevant studies were not performed in this case [ 16 , 17 ]. Leiomyomas in the young population often show histologic features favouring the diagnosis of malignancy; half of the reported cases demonstrated increased cellularity, mitotic activity, and cellular atypia [ 5 ]. These pathologic characteristics were not met in our patient. Uterine leiomyomas, although rare, should be considered in adolescent women presenting with a pelvic mass and abdominal pain, as in this case, or menstrual disorders and abnormal uterine bleeding. The management of leiomyomas at this age should be conservative for the preservation of fertility. Therefore, the preoperative characterization of the nature of these tumors is extremely important.
The diagnosis should be based on imaging findings, that is, sonographic and magnetic resonance imaging features. Ultrasonography is well established as the primary method for the evaluation of the female genital tract [ 4 , 18 ]. It is a noninvasive, widely available technique, with satisfactory results in the detection of uterus fibroids [ 4 , 18 ]. Although CT is not recommended for the evaluation of uterine leiomyomas, radiologists should be familiar with the CT findings of these benign neoplasms, since they are often found either incidentally on a CT examination performed for a variety of other reasons, or during the investigation of abdominal pain [ 4 , 14 ]. The typical CT findings of leiomyomas include an enlarged uterus, with a lobular, deformed contour. Leiomyomas usually manifest as homogeneous masses, similar to the normal myometrium, or as heterogeneously enhancing lesions. The above findings were seen in our patient. MR imaging is considered the most accurate technique for the detection and localization of leiomyomas, having proved more accurate than US [ 1 , 2 , 4 , 12 , 13 , 19 – 22 ]. MRI can assist in preoperative planning for myomectomy by accurately depicting and localizing uterine leiomyomas. The technique has proved superior to sonography in the localization of leiomyomas, especially in cases of an enlarged, myomatous uterus, as in this patient. Differential diagnosis from conditions that may mimic uterine leiomyoma both clinically and sonographically, such as adenomyosis, adnexal tumor, or focal myometrial contraction, is possible with MR imaging. Adnexal masses are more common in women under the age of 20; therefore differential diagnosis from uterus leiomyomas is particularly important in this age population [ 9 ]. The superb contrast and multiplanar capability, combined with the absence of ionizing radiation, render MR imaging the modality of choice in detecting and characterizing tumors.
Typical findings of uterine leiomyomas at MR examination are well known, including sharply demarcated uterine masses, homogeneously hypointense compared to the normal myometrium on T2-weighted images [ 1 ]; therefore diagnosis is usually straightforward, as was also proven in our case. Differential diagnosis of uterus leiomyomas should also include uterus leiomyosarcoma [ 1 ]. Although the presence of a rapidly growing or an irregularly marginated uterus leiomyoma has been proposed as suggestive of malignant transformation, the final diagnosis of uterus leiomyosarcoma is established mainly on histopathologic findings [ 1 ].
Academic Editor: Michael N. Varras Although uterine leiomyomas are the most common neoplasms of the female genital tract, this is not the case when referring to women under the age of 20. Only a few cases of uterus leiomyomas have been reported at this age. Preoperative imaging evaluation is mandatory in adolescent women for the accurate detection, localization, and characterization of uterus leiomyomas. We report a case of a 16-year-old girl admitted to our hospital for pain and abdominal distention. The patient underwent multidetector CT examination of the abdomen and MR examination of the pelvis. Both imaging modalities revealed uterine enlargement and the presence of innumerable variably sized leiomyomas. Histopathologic examination following exploratory laparotomy confirmed the presence of uterus leiomyomas. The patient underwent laparoscopic myomectomy two years after the first operation, following MR examination of the pelvis.
2. Case Report A 16-year-old female patient was referred to the emergency unit of our hospital for abdominal pain and distention. The patient's gynecologic history was unremarkable. Menarche occurred at the age of 13, and menses had been regular ever since. From the family history, her mother reported diabetes mellitus. Physical examination revealed the presence of a relatively firm pelvic mass, probably of uterine origin. Laboratory analysis showed mild anemia and a slight elevation of CA 125 levels (40 U/ml). The possibility of pregnancy was excluded after a negative pregnancy test. Ultrasonography, both transabdominal and transvaginal, showed globular uterus enlargement and multiple hypoechoic or heterogeneous masses, probably representing leiomyomatous cores, causing distortion of the central endometrial echo. Multidetector CT examination of the abdomen on a 16-row CT scanner was then performed. On CT, an enlarged uterus with a lobular, deformed contour was detected, filling the pelvis and extending up to the level of the lower pole of the kidneys. Multiple uterus leiomyomas of variable size were found, heterogeneously enhancing after contrast material administration ( Figure 1 ). Neither ascites nor lymphadenopathy was revealed. No renal hydronephrosis was noted. Finally, the patient underwent MR imaging examination of the pelvis on a 1.5 Tesla unit. An enlarged, posteriorly inclined uterus was found ( Figure 2 ). The presence of innumerable intramural uterus leiomyomas was confirmed, with a maximal diameter of 13 cm; most were of low signal intensity on T2-weighted images compared to the outer myometrium ( Figures 2(a) and 2(b) ), with the larger ones slightly inhomogeneous. The masses were well circumscribed, isointense to the adjacent myometrium on T1-weighted images, with contrast enhancement ( Figure 2(c) ). Both ovaries were normal. Imaging findings were diagnostic for the presence of fibromatous uterus. The patient underwent exploratory laparotomy.
An extremely enlarged uterus, with multiple and variably sized fibroids, the largest of which was about 10 cm in maximal diameter, was found at surgery. Frozen section pathologic examination confirmed the presence of uterus leiomyomas. Most of the leiomyomas were excised; some were left in place due to their close relation to the uterine vessels. For histologic examination, 19 discrete, fairly well-circumscribed nodules were received. They measured 1–13 cm in maximal diameter. On cut section, the nodules were whitish, with a whorled appearance and fibroelastic consistency. No areas of hemorrhage or necrosis were found. On microscopic examination, all nodules were composed of relatively uniform spindle cells with vesicular nuclei, arranged mostly in interlacing bundles and embedded within a collagenous stroma ( Figures 3(a) and 3(b) ). Mitoses were rare (max. 1 mitosis/10 high power fields). Immunohistochemical examination showed cell positivity for smooth muscle actin (SMA, Figure 3(c) ) and desmin ( Figure 3(d) ). Based on the above, the diagnosis of multiple leiomyomas was made. The patient was instructed to have pelvic follow-up sonograms at 6-month intervals. MR imaging examination performed two years after surgery, when the patient was admitted to the Gynecology clinic with lower abdominal pain, revealed recurrence of fibromatous uterus ( Figure 4 ). Laparoscopic myomectomy was then performed. Five well-circumscribed nodules measuring 0.7–1.3 cm in maximal diameter were histologically examined. The macroscopic and microscopic features were identical to those in the previous specimen and the diagnosis of uterus leiomyomas was confirmed.
Abbreviations CT: Computed tomography; MRI: Magnetic resonance imaging; CA: Cancer antigen; SMA: Smooth muscle actin; US: Ultrasonography.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Nov 28; 2010:932762
oa_package/8a/59/PMC3014817.tar.gz
PMC3014818
20572788
Introduction In temporary wetlands devoid of large fishes, large aquatic heteropterans play a significant role as the major predators of aquatic fauna ( Runck & Blinn 1994 ; Blaustein 1998 ). Nepidae are reported to feed on a variety of aquatic organisms such as aquatic insects and tadpoles ( Menke 1979 ). In Japan, the water scorpion, Laccotrephes japonensis Scott (Nepidae: Heteroptera), is a large-bodied species (28–38 mm in body length) and an important predator for both pest control and conservation. L. japonensis is an active predator of mosquito larvae; however, nymphs of the endangered water bug Kirkaldyia deyrolli are also part of its diet ( Ohba and Nakasuji 2006 , 2007 ; Ohba 2007 ). Like many other aquatic insects inhabiting paddy rice systems, L. japonensis is declining in some regions of Japan and is designated as a Red Data List species in 6 of 47 prefectures ( Association of Wildlife Research and EnVision 2007 ). It is important to study the life cycle of this species in order to obtain fundamental information for more effective management of L. japonensis populations in the future. In recent years, rice fields have attracted concern because of their function as biodiversity conservation areas ( Bignal and McCracken 1996 ; Elphick 2000 ; Lawler 2001 ) and as alternative wetlands for many aquatic animals (e.g. Fujioka & Lane 1997 ; Lane and Fujioka 1998 ; Maeda and Matsui 1999 ; Maeda 2001 ). Rice fields are an important habitat for many aquatic insects, including endangered species in Japan ( Saijo 2001 ; Mukai et al. 2005 ; Mukai and Ishii 2007 ). L. japonensis is known to prefer lentic and slow-flowing lotic habitats, including paddy rice fields ( Ban et al. 1988 ; Hibi 1994 ; Hibi et al. 1998 ), ponds and marshes ( Miyamoto 1965 ), and river margins ( Iwasaki 1999 ). Ban et al. ( 1988 ) and Hibi et al. ( 1998 ) reported that L. japonensis is distributed mainly in the shallow areas of paddy fields. Saijo ( 2001 ) reported that L.
japonensis was seldom found in irrigation ponds and mainly used the paddies for both reproductive and non-reproductive purposes. However, the detailed life cycle and overwintering of this species in rice paddy systems are not well understood. In the present study, mark and recapture censuses were carried out to elucidate the seasonal pattern of habitat utilization by L. japonensis in rice paddy fields and an adjacent pond.
Materials and Methods Study sites Field surveys were conducted in rice fields and at a pond in the western part of Hyogo, central Japan. The rice fields were surrounded by weed-covered ridges, forming narrow footpaths between adjacent fields. The rice fields were initially ploughed and irrigated; then the muddy bottoms were levelled off. Subsequently, the rice fields were filled with 5–15 cm deep water, and the rice saplings were finally transplanted. Water in all rice fields at the study site was maintained from early May to the end of July (irrigation period). In late July, the drainage period started and the water was slowly drained from the fields over a few weeks, eventually leaving them fully drained, with the ground exposed to the sun. Nevertheless, water in the ditches connecting the rice fields remained 3–5 cm deep, even during the drainage period. The pattern of rice culture at the site was similar between 2006 and 2007. Censuses were conducted along the ridges around four rice fields and in an adjacent irrigation pond, which was not directly connected. The pond permanently holds 100–150 cm of water. The shallowest strip of the irrigation pond (from the shore up to 50 cm deep) was used as the survey area. Seasonal activity To measure the number of L. japonensis in the rice fields and in the pond, censuses were conducted from April to October in 2006 and 2007, at intervals of 5–14 days (a total of 25 and 22 occasions during 2006 and 2007, respectively). Censuses were performed by visual observation of L. japonensis at night, using a flashlight (11,000 lx), from 20:00 to 01:00 h. L. japonensis is primarily a nocturnal animal and ambushes prey at the water surface after sunset. Thus, it is much easier to observe at night than during the day, and the illumination does not interfere with its behaviour ( Ohba and Nakasuji 2006 ). 
The observer maintained a constant distance from the water surface (30 cm) and a constant pace (3 m/min walking speed). To maintain sampling consistency, sampling was not conducted on rainy nights. L. japonensis adults and nymphs were caught using a 500-μm mesh dipnet (15 cm × 10 cm mouth opening). As a preliminary survey, the number of individuals of both sexes was counted on 6, 24, and 30 April 2006, and 7 and 12 May 2006. From the 16 May 2006 survey onwards, newly captured adults were individually identified using colour-combination paint dots (Paint Marker®, Mitsubishi) on the thorax. Individual number, generation (overwintered or new-generation adult), and sex were recorded. The prothorax width was measured for the collected specimens. New adults in the population were recognized by their intact wings and/or soft body. After recording, the specimens were released immediately at their point of capture. The Jolly-Seber method ( Jolly 1965 ; Seber 1965 ) was applied in order to estimate the number of individuals both in the rice fields and in the pond. The total number of L. japonensis from all rice fields was pooled. Rice fields and pond as reproductive sites: Comparison of survival rate of nymphs To estimate the survival rate of L. japonensis nymphs, the Kiritani-Nakasuji-Manly method ( Kiritani and Nakasuji 1967 ; Manly 1976 ) was applied to the frequency of each stage over a series of census occasions. Suppose that the i th stage is observed for a time period covered by n samples, possibly with a varying time interval between them. The samples are taken at intervals ( h 1 , h 2 , ...), and the area under the frequency trend curve is estimated by the trapezoidal rule; the area A i is A i = Σ L h L ( f i,L−1 + f i,L ) / 2, where f iL = the number of the i th instar estimated from the samples taken on the L th occasion, which is at the end of the sampling interval h L . 
There are h L +1 sampling intervals, the last interval extending from the last occasion when the stage was present to the next sampling occasion (when it was found to be absent) ( Manly 1976 ). Next, the estimated number of each nymphal instar N i was calculated as N i = N i+1 / S i , where S i denotes the survival rate estimated by the Kiritani-Nakasuji-Manly method for the i th nymphal instar. The number of 5 th instars was calculated using the maximum number of new adults estimated by the Jolly-Seber method in each habitat in each year. Survival analysis with sequential Bonferroni correction ( Rice 1989 ) was used to test for survival-curve differences between the rice fields and the pond in 2006 and 2007. The Kaplan-Meier method of estimating survival functions and the nonparametric Mantel-Cox log-rank test were used. In this analysis, instar was regarded as the survival period and adult emergence as censoring. Statistical significance was set at 0.05. All statistical tests were conducted using JMP version 6.03 (SAS Institute 2005). Rice fields and the pond as reproductive sites: Site quality comparison To evaluate the quality of the sites, the prothorax width of newly emerged adults was compared between specimens caught in the rice fields and specimens caught in the pond from late August to October. A two-way ANOVA was performed with sex and eclosion site (capture site) as the main factors. Log10 transformations were applied where necessary to standardize variances and improve normality, so as to satisfy the assumptions of the ANOVA model. Overwintering site To determine whether L. japonensis adults were present in the rice fields and in the pond during winter, censuses were conducted on 10 December 2006 and 20 February 2007. Moreover, adults marked from late August to October 2006 (autumn) were followed up from April to May 2007 (spring) in order to estimate the overwintering survival of L. japonensis. 
A multinomial logistic regression analysis was applied to the data of the recaptured specimens in spring (assigning scores of 1 for not captured, 2 for captured in a different site, and 3 for captured in the same site), with the recapture data as the dependent variable and the site where marked in autumn and sex as the independent variables.
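The trapezoidal-rule step of the Kiritani-Nakasuji-Manly calculation described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical census counts, not the authors' code:

```python
def stage_area(counts, intervals):
    """Area A_i under a stage-frequency curve by the trapezoidal rule.

    counts[L]    -- number of i-th instars on the L-th census occasion
    intervals[L] -- time h_L between occasions L and L+1 (may vary)
    """
    assert len(intervals) == len(counts) - 1
    return sum(h * (f0 + f1) / 2.0
               for h, f0, f1 in zip(intervals, counts, counts[1:]))

# Hypothetical census: instar counts 0, 4, 6, 2, 0 at occasions
# separated by 5, 7, 5, and 6 days (varying intervals, as in the text).
# 5*(0+4)/2 + 7*(4+6)/2 + 5*(6+2)/2 + 6*(2+0)/2 = 71.0 insect-days
area = stage_area([0, 4, 6, 2, 0], [5, 7, 5, 6])
```

The last census occasion with a count of zero corresponds to the extra interval extending to the occasion on which the stage was found to be absent.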
Results Seasonal activity The occurrence frequency of L. japonensis is shown in Figure 1 . Adults appeared both in the rice fields and in the pond in April 2006. All adults in the rice fields were found in ditches when water was drained from the rice fields. Mating pairs were found from 16 May to 14 July 2006 (breeding period). First instar nymphs appeared both in the rice fields and in the pond from June to July 2006. Second and 3 rd instar nymphs were observed from mid-June to August, and 4 th and 5 th instar nymphs appeared from June to September. Newly emerged adults appeared from late August to October. The occurrence frequency of nymphs did not differ markedly between the rice fields and the pond ( Figure 1 ). In the rice fields and the pond combined, a total of 721 adults were marked, and 438 (61%) were recaptured at least once from May 2006 to October 2007. Out of 157 males and 142 females marked from May to July 2006, only 2 and 1, respectively, were recaptured after April 2007. Newly emerged adults of 2006 overwintered and then reproduced starting in May 2007, but few nymphs appeared in either the rice fields or the pond. The number of nymphs in 2007 was much lower than in 2006, although the seasonal pattern of occurrence did not differ. As a result, in September 2007, only 1 male and 1 female of the new generation were caught in the rice fields, whereas 4 females were found in the pond ( Figure 1 ). Rice fields and pond as reproductive sites: Comparison of survival rate of nymphs Survival rates of L. japonensis nymphs did not differ between the rice fields and the pond in 2006 (survival analysis, Mantel-Cox χ 2 = 0.30, P = 0.58; Figure 2 ). The survival rate in both habitats in 2006 was significantly higher than in 2007 (Mantel-Cox χ 2 > 26.8, P < 0.001 for all combinations). The survival rate in the rice fields in 2007 was the lowest, significantly lower than in all other combinations (Mantel-Cox χ 2 > 16.6, P < 0.001). 
Rice fields and the pond as reproductive sites: Site quality comparison Regarding the prothorax width of newly emerged adults, the two-way ANOVA indicated that the effect of sex was significant, but the eclosion site and sex-by-eclosion-site interaction were not (sex: F 1, 325 = 605.71, p < 0.001; eclosion site: F 1, 325 = 0.25, p = 0.62; sex-by-eclosion site: F 1, 325 = 0.25, p = 0.62 for log-transformed data). Differences in the prothorax width of newly emerged adults between eclosion sites were not significant for either sex (male: rice fields ( n = 130) = 7.51 ± 0.03 mm (mean ± SE), pond ( n = 14) = 7.39 ± 0.209 mm, one-way ANOVA, F 1, 142 = 1.49, p = 0.22; female: rice fields ( n = 157) = 8.49 ± 0.03 mm, pond ( n = 28) = 8.42 ± 0.07 mm, one-way ANOVA, F 1, 83 = 0.88, p = 0.35). Overwintering site Adult males and females were present on the bottom of the ditch connecting the rice fields (8 males and 12 females on 10 December 2006, 3 males and 12 females on 20 February 2007; Figures 1 , 3 ). The estimated number of L. japonensis in the rice fields was almost the same between the two surveys ( Figure 1 ). Adults were solitary and quiescent on the mud, with their front legs folded up ( Figure 3b ). However, no adults were found in the pond in either of the two winter surveys. The results of the recapture experiments in spring 2007 differed markedly between the autumn 2006 marking sites (the rice fields and the pond) (logistic regression analysis: marking site in autumn, df = 2, χ 2 = 22.33, p < 0.001; sex, df = 2, χ 2 = 2.58, p = 0.275; marking site in autumn by sex, df = 2, χ 2 = 0.89, p = 0.643). Sex and the sex-by-marking-site interaction were not significant. Inter-habitat migration was confirmed, both from the paddy fields to the pond and vice versa. In the rice fields, of a total of 328 adults marked in autumn 2006, 119 were recaptured in the rice fields in spring 2007 (36.3%), and 4 adults were recaptured in the pond (1.2%). 
On the other hand, out of 47 adults marked in the pond in 2006, 3 adults were recaptured in the pond (6.4%) and 4 in the rice fields (8.5%) in spring 2007. Thus, the proportion of recaptured adults in the rice fields was greater than that in the pond.
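The recapture proportions reported above follow directly from the counts; as a quick check (illustrative arithmetic only):

```python
# Recaptures in spring 2007 of adults marked in autumn 2006.
marked_fields, marked_pond = 328, 47

def pct(recaptured, marked):
    """Recapture rate as a percentage, rounded to one decimal."""
    return round(100.0 * recaptured / marked, 1)

assert pct(119, marked_fields) == 36.3  # recaptured in the rice fields
assert pct(4, marked_fields) == 1.2     # moved to the pond
assert pct(3, marked_pond) == 6.4       # recaptured in the pond
assert pct(4, marked_pond) == 8.5       # moved to the rice fields
```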
Discussion The results show that L. japonensis has a univoltine life cycle at the study site; between mid-May and July, overwintered adults copulate, and the first nymphs appear from June to July. Adults of the new generation appear from late August to October and then overwinter until April of the following year. Reproductive site In 2006, L. japonensis nymphs appeared both in the rice fields and in the pond from June to September, as reported by Iwasaki ( 1999 ) and Saijo ( 2001 ). The occurrence period and survival rate of nymphs were almost the same in the rice fields and the pond ( Figures 1 , 2 ). Moreover, the prothorax width of newly emerged adults did not differ between the rice fields and the pond. Thus, the results show functional equivalency between the rice fields and the pond. The life history pattern is similar to that of Nepa cinerea ( Southwood and Leston 1959 ) and Nepa apiculata ( McPherson and Packauskas 1987 ). However, the reproductive period was short and clearly discrete, contrary to what Papacek ( 1989 ) found. In 2007, although the reasons are unknown at present, there were few nymphs in the rice fields as well as in the pond ( Figure 1 ). The results suggest an annual fluctuation in the population between 2006 and 2007. The survival rates both in the rice fields and in the pond in 2007 were lower than those in 2006 ( Figure 2 ). However, in 2007 the survival rate in the pond was higher than in the rice fields. The pond would thus have played an important role in 2007 as a refuge site. In the present study site, it may therefore be difficult for L. japonensis to subsist relying exclusively on the rice fields. Overwintering site New adults, emerging from late August to October, overwinter in and/or around the rice fields and reproduce the next spring. The recapture rate of overwintered specimens in 2007 was higher in the rice fields than in the pond. Iwasaki ( 1999 ) studied the life cycle of L. 
japonensis at the river margins of the Yamato-gawa River in Nara, central Japan; however, he could not collect adults from November to March of the following year. In the present study, adults were collected in the ditches around the rice fields during winter ( Figure 3 ). This is the first report of overwintering in water in this species. Species of the closely related genus Nepa are also known to overwinter as adults underwater ( Southwood and Leston 1959 ; McPherson and Packauskas 1987 ; Saulich and Musolin 2007 ). According to Nakayama and Yajima ( 1985 ), L. japonensis overwinters under the ground in rice fields. Individuals not detected in the present study probably overwintered under the ground in the rice fields. In contrast, few individuals marked in the pond were recaptured in the same habitat after the winter. The related species Ranatra chinensis and Ranatra unicolor (Nepidae) overwinter in deeper, permanent water such as ponds ( Ban et al. 1988 ; Hibi et al. 1998 ); L. japonensis may not prefer ponds as overwintering sites. Rice fields and irrigation pond joint functioning This study revealed that both the rice fields and the pond have potential as reproductive and overwintering sites. Nevertheless, the overwintering survival rate in 2006, presumably a favorable year, was higher in the rice fields than in the pond, and the reverse was true in 2007. Thus, the pond may play a role as a refuge site relative to the rice fields, especially when an unfavorable annual fluctuation occurs, because of the higher survival rate and the active migration. Migration from the pond to the paddies would be expected, as the Nepidae are considered “passive migrants” ( Kanyukova 2006 ), provided there was a water connection between the two habitats. In this case, however, active migration from the paddies to the pond and vice versa was confirmed. 
The migration method is unknown, but an adult was found walking from one rice field to another during May 2006 (unpublished data). Most adults probably migrate on foot before overwintering. At this study site, poorly drained ditches were sufficient to support the whole life cycle of L. japonensis even during the drainage period. However, this is not the case in many rice paddy systems, where drainage from August onwards would have a large impact on the population dynamics of this species. Land consolidation (the conversion of poorly drained rice fields into well-drained dry rice fields using a below-ground drainage system), winter tillage, and winter cropping will reduce the overwintering survival of this species, as was reported for the belostomatid bug, Appasus major ( Mukai and Ishii 2007 ). In conclusion, the rice fields and irrigation pond reinforced each other as reproductive and overwintering shelter sites of L. japonensis.
A Laccotrephes japonensis (Nepidae: Heteroptera) population was studied based upon mark and recapture censuses in order to elucidate the seasonal pattern of habitat utilization in a rice paddy system including an irrigation pond between April and October, in 2006 and 2007. The seasonal pattern of nymphs and adults did not differ markedly between the rice fields and the pond. Survival rates of L. japonensis of all stages did not differ between the rice fields and the pond in 2006, but were lower in 2007 in both habitats. In 2007, however, the survival rate of L. japonensis nymphs in the pond was higher than in the rice fields. In rice fields, 36.3% of the overwintering adults were recaptured the following year. On the other hand, the recapture rate after overwintering in the pond was only 6.4%. Migration from the pond to the paddies and vice versa was observed. In summary, the rice fields and the pond may reinforce each other as reproductive and overwintering sites of L. japonensis , especially during unfavorable years.
Acknowledgments We are grateful to Mr. Takuya Kojima for providing the Jolly-Seber automatic calculation program and to Dr. D. Musolin (Kyoto University, Graduate School of Agriculture) for vital information on seasonal development from the Russian literature.
CC BY
no
2022-01-12 16:13:46
J Insect Sci. 2010 May 10; 10:45
oa_package/b9/35/PMC3014818.tar.gz
PMC3014819
21209812
2. Discussion This case illustrates an individual who failed conservative therapy for chronic low back pain and developed several complications from a common procedure. The temporal association of events suggests that the epidural injections the individual received for his chronic low back pain created a spinal epidural abscess (SEA), which in turn seeded his prosthetic valve. A literature search found only six past case reports of an infectious complication following a lumbar injection [ 1 – 6 ]. Epidural injections are currently an adjuvant therapy for chronic back pain. Although epidural injections are considered a minor procedure, there are potential complications. The Wessex Epidural Steroids Trial reported headache as the most common side effect at 3.3%, with nausea second at 1.7% [ 1 ]. A central nervous system infection from an epidural injection, which complicated this case's treatment, is a rare occurrence but carries significant potential for morbidity and mortality. In one study, 6.3% of 128 community-acquired bacterial meningitis patients had a history of epidural injections [ 2 ]. An SEA is a rare occurrence, but the incidence is rising, especially with the increase in spinal interventions. Currently 0.2–2 of every 10,000 patients present with an SEA, and several risk factors have been identified [ 3 ]. Intravenous drug use, skin infections, and abscesses can all cause transient bacteremia and subsequent hematogenous spread. Diabetes mellitus changes the integrity of the microvasculature, creating a favorable environment for proliferation in the epidural space. Penetrating trauma and spinal manipulation, such as epidural injections, cause direct inoculation and potential contamination of the epidural space. However, in one third of cases, no identifiable source is appreciated [ 4 ]. The route of infection can shed light on the potential organism. 
Skin infections and spinal manipulation account for the majority of cases, and gram-positive cocci, such as Staphylococcus and Streptococcus , account for more than half of the organisms cultured [ 5 ], with methicillin-resistant Staphylococcus aureus cases steadily rising [ 6 ]. In addition to Staphylococcus , intravenous drug users with an SEA can also be infected with Pseudomonas . In immunosuppressed patients, mycobacteria and fungi have also been isolated [ 5 ]. Clinically, the “classic triad” of pain, neurologic deficits, and fever has low sensitivity, and thus an assessment of risk factors for SEA has been advocated, with particular focus on spinal manipulation, immunosuppression, and any potential source of transient bacteremia [ 7 ]. Back pain is reported by 85% of patients with SEA, while only 35% report paresthesia, and the use of antipyretics makes the diagnosis difficult. On laboratory values, an elevated erythrocyte sedimentation rate has been found to be more sensitive and specific than an elevated white blood cell count and has been suggested as a screening test [ 7 ]. Currently, magnetic resonance imaging of the spine is the preferred modality for diagnosis [ 8 ]. Since neurologic recovery is directly correlated with the duration of the abscess, particular importance is placed on prompt diagnosis. Unfortunately, 75% of patients with SEA experience diagnostic delay (defined as multiple ED visits before diagnosis, admission without a diagnosis, or more than a 24-hour delay before diagnosis) [ 7 ]. Furthermore, despite adequate intervention, the mortality rate still ranges from 6% to 30% [ 5 ]. Spinal epidural abscesses have a pleomorphic, potentially misleading clinical spectrum and presentation. As illustrated in this case, fever in an individual with recent epidural injections should raise suspicion of a spinal epidural abscess.
Academic Editor: T. H. M. Ottenhoff Epidural injections for chronic low back pain are controversial, and their effectiveness is debated. Although epidural injections are considered a minor procedure with low morbidity, catastrophic complications may occur. We describe a case of prosthetic valve endocarditis secondary to an epidural abscess after epidural injection to alert clinicians to this unusual association.
1. Case Report A 65-year-old man presented with fevers, shaking chills, generalized weakness, and back pain for one week. His past medical history was significant for chronic low back pain and a porcine aortic valve replacement one year previously for symptomatic aortic stenosis. The patient reported that he had recently begun epidural steroid injections after failing to improve with conservative therapy, his last injection being one week prior to admission. His only medications were atenolol, aspirin, and irbesartan. He had no recent travels or procedures and denied intravenous drug use. On exam, his temperature was 102.4°F, heart rate 84, and blood pressure 90/55 mmHg sitting, with normal oxygen saturation. He was diaphoretic, with shaking chills. His exam was notable for a harsh systolic III/VI murmur best heard at the left sternal border, 2nd intercostal space, with no significant neurological, ocular, pulmonary, abdominal, or skin/nail findings. The admission electrocardiogram showed normal sinus rhythm, left ventricular hypertrophy, and no ST or T wave changes. Chest X-ray was unremarkable. Laboratory results were significant for a WBC count of 18,900 with 4% bands and an erythrocyte sedimentation rate of 56. MRI of the spine revealed osteomyelitis/discitis at L2-3 and a small epidural abscess at L2 ( Figure 1 ). The abscess was judged too small to be aspirated by neurosurgery, and the patient was started on antibiotics. Given the presence of a new murmur with a prosthetic valve in the setting of an epidural abscess, endocarditis was suspected. Serial blood cultures grew coagulase-negative staphylococcus. On hospital day 2, a transthoracic echocardiogram showed no evidence of endocarditis. Because of continued high clinical suspicion, a trans-esophageal echocardiogram was performed on hospital day 3 and showed aortic valve vegetations with severe aortic valve regurgitation ( Figure 2 ). The patient was continued on antibiotics. 
Since he was hemodynamically stable with no evidence of congestive heart failure, an emergent valve replacement was not indicated. His course was complicated by first-degree heart block on EKG, as well as acute tubular necrosis, likely secondary to antibiotics. Blood cultures became negative after five days of treatment. He was discharged to complete a six-week course of antibiotics. He underwent an aortic valve replacement one month after his initial hospitalization. Surgery confirmed dehiscence of the valve as well as a subaortic root abscess that had eroded the aortic annulus. The site was debrided and reconstructed, and a new porcine valve was implanted. He was discharged on postoperative day six and has returned to work with no sequelae.
Acknowledgments The authors would like to acknowledge Miriam Hospital, Lifespan Corporation, and the Warren Alpert Medical School at Brown University.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 20; 2010:105426
oa_package/bf/62/PMC3014819.tar.gz
PMC3014820
20572790
Introduction Social insects provide a diverse array of model systems to examine the ecological immunology and sociobiology of disease resistance ( Pie et al. 2005 ; Schmid-Hempel 2005 ; Cremer et al. 2007 ; Ugelvig and Cremer 2007 ). The study of comparative immunity is particularly important for understanding the evolution of disease resistance because the induction and maintenance of immunity are costly ( Rolff and Siva-Jothy 2003 ; Schmid-Hempel 2003 ) and because immune function is considered to be an adaptive life-history trait ( Schmid-Hempel 2005 ). Investment in immunity should therefore depend on the risk of contracting disease: species with reduced pathogen pressure should show reduced investment in immunocompetence. However, the role of interspecific variability in pathogen pressure as a selective agent for adaptive variation in disease resistance has received little attention. In termites, immune defense is a particularly important life-history trait. Termite social evolution is associated with life type ( Abe 1987 ); the nesting and feeding biology of soil- and decayed-wood-dwelling species may encourage the proliferation of pathogens relative to that of drywood species. For all termite life types, nestmate density and frequent social interactions among colony members could increase the probability of disease transmission. Termite nests are inhabited by a diverse array of microbes ( Hendee 1933 , 1934 ; Meiklejohn 1965 ; Sands 1969 ; Keya et al. 1982 ; Cruse 1988 ). The drywood termite, Incisitermes schwarzi Banks (Isoptera: Kalotermitidae), and the dampwood termite, Zootermopsis angusticollis Hagen (Termopsidae), are one-piece nesters that colonize dead wood and are similar in colony size and life history ( Castle 1934 ; Luykx 1986 ). However, these species have substantial differences in their nesting ecology that could affect exposure to parasites and pathogens: I. 
schwarzi is found most often in dry, dead, intact branches ( Collins 1969 ; Abe 1987 ; Eggleton 2000 ), whereas Z. angusticollis generally colonizes decayed moist wood in contact with leaf litter and/or soil ( Castle 1934 ; Collins 1969 ; Eggleton 2000 ). In addition, Incisitermes is more tolerant of desiccation than Zootermopsis and requires less moisture ( Collins 1969 ), which likely affects microbial development. The dry wood exploited by I. schwarzi does not appear to favor the growth of bacteria and fungi ( Hendee 1933 , 1934 ; Ignoffo 1992 ). In fact, Rosengaus et al. ( 2003 ) found that I. schwarzi has significantly lower nest and cuticular loads of culturable microbial strains than Z. angusticollis (average nest load = 58 vs. 824 colony forming units; average cuticular load = 4 vs. 190 colony forming units). Contact with soil microbes and the habit of nesting in moist, decayed wood may thus have influenced the diversity and abundance of nest microbes and the nature of pathogen challenges. Whether such differences in the nest environment selected for variation in individual and social mechanisms of disease resistance in these two termite species, however, remains unanswered. Is disease susceptibility in I. schwarzi and Z. angusticollis associated with variation in the microbial loads present in their nests? Is disease susceptibility in the drywood termite I. schwarzi decreased by group living, as in the dampwood Z. angusticollis ? Here, the survival of isolated and grouped I. schwarzi following low- and high-dose exposures to fungal conidia was examined to estimate immune function, assess disease susceptibility, infer investment in immunocompetence, and determine the role of sociality in infection control in a drywood termite. By using body mass-corrected doses of conidia, the results were compared with resistance in Z. 
angusticollis to determine if differences in survivorship following pathogen exposure correlated with variation in nest microbial load.
Materials and Methods Collection and maintenance of termites Colonies of the drywood termite I. schwarzi ( n = 4, approximately 100–250 individuals) were collected on Grass Key and Key West, Florida in March 2003. Wood containing termites was placed in open Fluon®-lined plastic boxes (50 × 30 × 20 cm). Stock colonies were reared in the laboratory at 25° C and lightly sprayed with water once a month. Termites were removed from their colonies and used for experiments during August and September 2003. Colonies of Z. angusticollis ( n = 19, approximately 500–1000 individuals) were collected from Redwood East Bay Regional Park, Oakland, California and the Pebble Beach Resort, Monterey, California during July 1999. Log nests containing termites were sectioned and transferred to plastic tubs (50 × 30 × 20 cm) lined with moist paper towels. Decayed wood was added periodically as a supplementary food source. Stock colonies were reared in the laboratory at 25° C and sprayed liberally with water once a week to ensure a high level of moisture. Termites were removed from their colonies and used for experiments during September and October 1999. Preparation of conidia suspensions The entomopathogenic fungus Metarhizium anisopliae Sorokin (Hypocreales: Clavicipitaceae) (original source: American Type Culture Collection, batch 93–09, media 325, ATCC #90448) was used as a model pathogen. M. anisopliae is an entomopathogenic fungus ( Tanada and Kaya 1993 ) that naturally occurs with a number of soil-dwelling termites ( Zoberi 1995 ; Milner et al. 1998 ) and can induce mortality in drywood species ( Nasr and Moein 1997 ; Siderhurst et al. 2005 ). A stock Tween 80 conidia suspension containing 6.4 × 10 8 conidia/ml was freshly prepared according to Rosengaus et al. ( 1998 ). The average germination rate (± S.D.) of conidia was 97.4 ± 6.0% ( n = 30 fields of vision). Determination of body mass-corrected dosage To compare the susceptibility of I. schwarzi and Z. 
angusticollis , conidia dosage was corrected for body mass according to the following protocol. Z. angusticollis nymphs ( n = 10) were allowed to walk freely for 1 h as a group inside a Petri dish (100 × 15 mm) lined with filter paper (Whatman Qualitative no. 5, particle retention > 2.5 μm) moistened with 1.0 ml of a suspension containing 2 × 10 8 conidia/ml (high dose) or 6 × 10 6 conidia/ml (low dose) ( Rosengaus et al. 1998 ). Immediately after exposure, each termite was placed in a 1.0 ml microcentrifuge tube with 1.0 ml of Tween 80 solution, vortexed, and then centrifuged at 300 × g at 4° C for 20 min. Next, the termite was removed, the pellet was redistributed using the vortex, and a sample of the wash was placed on a hemocytometer to determine the number of conidia washed from the cuticle of each individual sampled. The average mass of Z. angusticollis was approximately three times that of I. schwarzi (average ± S.D. = 0.045 ± 0.012 g, n = 25 nymphs, vs. 0.014 ± 0.005 g, n = 25 instars 6, 7 and nymphs). The resulting average conidia loads recorded after washes for Z. angusticollis exposed to a high (1.5 × 10 5 ± 6.7 × 10 4 , n = 10 termites) or low dose (9.2 × 10 4 ± 1.9 × 10 4 , n = 10 termites) of conidia were divided by three to arrive at the appropriate conidia loads for I. schwarzi . To determine exposure concentrations that would produce the desired conidia loads, I. schwarzi were allowed to walk freely for 1 h in groups of 10 composed of mixed developmental stages (instars 6, 7 and nymphs) in a Petri dish (60 × 15 mm) lined with filter paper (Whatman Qualitative no. 5, particle retention > 2.5 μm) moistened with 0.5 ml of a 6.4 × 10 8 , 6.4 × 10 7 , 5.8 × 10 6 , 6.2 × 10 4 , or 6.0 × 10 3 conidia/ml suspension. 
Conidia loads were determined according to the protocol described above, with 6.4 × 10 7 (high dose) and 6.2 × 10 4 (low dose) producing the mass-corrected conidia loads (5.1 × 10 4 ± 1.5 × 10 4 , n = 10 termites; 3.2 × 10 4 ± 1.3 × 10 4 , n = 10 termites, respectively). Conidia exposure treatments To determine the effect of fungal exposure on survival, I. schwarzi (instars 6, 7 and nymphs) were exposed to a high (6.4 × 10 7 conidia/ml) or low dose (6.2 × 10 4 conidia/ml) of M. anisopliae conidia according to the above-described procedure. Immediately after exposure, individual termites were transferred haphazardly into sterile Petri dishes (60 × 15 mm) lined with filter paper (Whatman Qualitative no. 1) moistened with 150 μl sterile water (low dose, n = 25; high dose, n = 25). To examine the effect of group size on survival, subcolonies containing mixed-instar groups of 10 (low dose, n = 5; high dose, n = 5) and groups of 25 (low dose, n = 5; high dose, n = 5) were similarly established. This experiment used 123 I. schwarzi termites each from three colonies (A, B, and C). Colony D, due to its larger size, provided 231 termites. Control termites from all four stock colonies were treated with a conidia-free 0.1% Tween 80 suspension medium and established in Petri dishes containing an isolated termite ( n = 25) or mixed-instar groups of 10 ( n = 5) or 25 ( n = 5). All Petri dishes were subsequently stacked in covered plastic boxes (30 × 23 × 10 cm) and maintained in the laboratory. Survival All termites were censused daily for 20 days following exposure, providing survival data to estimate immune function ( Boots and Begon 1993 ; Moret and Schmid-Hempel 2000 ; Armitage et al. 2003 ). Dead individuals were removed, surface-sterilized with 5.2% sodium hypochlorite, rinsed twice with sterile water, and plated on potato dextrose agar to confirm that mortality was due to infection by M. anisopliae ( Rosengaus et al. 1998 ). 
Confirmation rates for conidia-exposed termites ranged from 92% to 100%, while the confirmation rate for controls was zero. Statistical analysis To determine the effect of conidia exposure on survivorship, several survival parameters were estimated, including the survival distribution (the time-course of survival), percent survivorship, and median survival time (LT50). A Cox Proportional Regression Analysis was performed to determine the relative hazard ratio of death. The model included the following variables: group size (1, 10, or 25 individuals), exposure (high dose, low dose, or control), and species ( I. schwarzi or Z. angusticollis ). The resulting relative hazard functions characterized the instantaneous rate of death at a particular time, given that the individual survived up to that point, while controlling for the effect of other variables on survival ( SPSS 1990 ; Rosengaus et al. 1998 ). Survival distributions were analyzed with the Breslow Statistic (BS; Kaplan-Meier Survival Test, SPSS 1990 ). When multiple, pairwise comparisons were made, the α-value of significance was adjusted ( Rice 1989 ). Data derived from Rosengaus et al. ( 1998 ) were used to compare the survivorship of I. schwarzi to that of Z. angusticollis following exposure to mass-corrected doses of conidia.
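As a concrete illustration of the survival statistics named above (the analyses themselves were run in SPSS), here is a minimal pure-Python sketch of the Kaplan-Meier product-limit estimator and the LT50. The final line notes that an adjusted threshold of p = 0.008, as reported for the pairwise comparisons, is consistent with a simple Bonferroni correction over six comparisons; the value of k = 6 is our assumption, and Rice (1989) actually describes the sequential version of the procedure.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times  : follow-up time for each individual (e.g. days, capped at 20)
    events : 1 if death was observed at that time, 0 if right-censored
    Returns a list of (time, S(t)) pairs at each observed death time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    for t, grp in groupby(data, key=lambda d: d[0]):
        grp = list(grp)
        deaths = sum(e for _, e in grp)
        if deaths:
            s *= 1.0 - deaths / at_risk   # conditional survival past time t
            curve.append((t, s))
        at_risk -= len(grp)  # deaths and censorings both leave the risk set
    return curve

def lt50(curve):
    """Median survival time: first time at which S(t) falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached within the census window

# Bonferroni-style adjustment for multiple pairwise comparisons (k = 6 assumed)
alpha_adj = 0.05 / 6   # ~0.0083, in line with an adjusted threshold of 0.008
```

For example, `kaplan_meier([1, 1, 2, 2], [1, 1, 1, 1])` returns `[(1, 0.5), (2, 0.0)]`, giving an LT50 of day 1; a termite still alive at the day-20 census would enter as `(20, 0)` and only shrink the risk set.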
Results An overall Cox Proportional Regression Analysis showed that conidia dosage, group size, and species were all significant and independent predictors of termite survival [Wald Statistic (WS) = 311, 216, and 44, respectively; p < 0.001]. After controlling for the effects of all other variables in the model, isolated termites had 5.5 times the hazard ratio of death relative to grouped termites (WS = 214, df = 1, p < 0.0001), while termites in groups of 10 did not differ significantly from groups of 25 individuals (WS = 3.3, df = 1, p = 0.07). Furthermore, Z. angusticollis had a significantly higher hazard ratio of death (1.6 times higher) than I. schwarzi , even after controlling for the influence of group size and conidia exposure on survivorship. The effects of group size and species are discussed in detail below. Susceptibility of I. schwarzi to fungal infection Survival analyses and the various estimated survival parameters provided further support for the role of group living in the control of fungal disease in I. schwarzi . I. schwarzi exhibited dosage-dependent mortality within each group size, but the effect of disease was significantly more pronounced when termites were isolated than when maintained in groups of 10 or 25 ( Figure 1 and Table 1 ). Termites kept in groups of 25 had an 83% reduction in the hazard ratio of death relative to isolated termites. Interestingly, colony of origin and instar (an estimator of age) were not significant predictors of I. schwarzi survival (Wald Statistic = 0.2, 0.4; df = 3, 1; p > 0.05, respectively). Interspecific variation in susceptibility Following exposure to a low or high dose of fungal conidia, isolated I. schwarzi survived significantly better than isolated Z. angusticollis (BS = 53.4, p < 0.001; BS = 7.0, p = 0.008, respectively; Z. angusticollis data from Rosengaus et al.
1998 ), surviving approximately 1 and 4 days longer following low- and high-dose exposures, respectively ( Figure 2A ). Control I. schwarzi and Z. angusticollis had similar survival distributions (BS = 4.3, p = 0.04; Figure 2A ). When I. schwarzi and Z. angusticollis were maintained in groups of 10 individuals following exposure to the low conidia dosage, I. schwarzi survived significantly longer than Z. angusticollis (BS = 39.5, p < 0.001; Figure 2B ). However, no significant differences were recorded between the two species in either the control treatment or the high conidia dose ( Figure 2B ). Finally, for termites maintained in groups of 25 after exposure to a low conidia dose, I. schwarzi also survived significantly longer than Z. angusticollis (BS = 98.8, p < 0.001; Figure 2C ). But following a high conidia exposure, Z. angusticollis survived significantly longer than I. schwarzi (BS = 25.9, p < 0.0001; Figure 2C ). All of the above significance values reflect an α adjusted for multiple comparisons to p = 0.008. Discussion Significant interspecific variation in immunocompetence has been described (reviewed in Fellowes and Godfray 2000 ; Wilson et al. 2000 ), but immune function has typically been assessed without challenging hosts with live pathogens or examining survivorship. Schmid-Hempel and Loosli ( 1998 ) demonstrated interspecific differences in mortality following exposure to a novel pathogen, but the ecological correlates of immunity remain unknown. There are compelling ecological and evolutionary reasons for predicting that Z. angusticollis should be less susceptible to fungal infection than I. schwarzi .
The nesting and feeding habits of the two termite species appear to promote differential growth of microbial communities ( Hendee 1933 , 1934 ) and thus differences in encounter rates with disease. The dampwood termite Z. angusticollis has significantly higher cuticular and nest microbial loads than the drywood termite I. schwarzi ( Rosengaus et al. 2003 ) and, therefore, should be under greater selection pressure to invest more heavily in immune function. Indeed, molecular analyses suggest that antifungal peptides have diversified in response to microbe-related variation in nesting ecology and pathogen pressure in other termite species ( Bulmer and Crozier 2004 ). It is likely that dampwood termites have a longer coevolutionary history with M. anisopliae than drywood termites. M. anisopliae conidia require high humidity to germinate ( Milner et al. 1997 ), and the moist nest and soil conditions surrounding the decayed wood nests of Z. angusticollis are more suitable for the development of this fungus than the dry wood environments of I. schwarzi. Thus, it is conceivable that coevolution between Z. angusticollis and M. anisopliae would have resulted in greater immune adaptation to resist M. anisopliae infection in Z. angusticollis than in I. schwarzi , to which the pathogen may be novel. Yet the fact that the latter species had higher survival across most treatments (the exception being termites maintained in groups of 25 individuals following exposure to the high conidia dosage) does not support the hypothesis that adaptive variation in immune response results from heterogeneity in microbial pressures. Differences in cuticular chemistry may also influence the susceptibility of I. schwarzi to M. anisopliae. It would be expected that Z. angusticollis , with its apparently more heavily melanized cuticle, would be more resistant to fungal infection, although other substances distributed on the cuticle could impact microbes.
Another plausible explanation for the lack of a consistent association between susceptibility to fungal infection and microbial loads associated with the different nesting and feeding habits of Z. angusticollis and I. schwarzi is that the methods for estimating microbial loads in termite colonies may not have a level of resolution sufficient to identify interspecific differences in pathogenic and/or parasitic forms ( Cruse 1998 ; Rosengaus et al. 2003 ). Records of colony forming units isolated from termite and nest washes provide only a one-time snapshot of culturable nest microbes. Ultimately, molecular immunity may be driven by the presence and abundance of pathogenic/parasitic microorganisms that vary temporally throughout colony ontogeny. Unfortunately, comparative quantitative analyses on the abundance of pathogenic/parasitic microorganisms are lacking. These results illustrated the importance of sociality in coping with disease and parasitism ( Rosengaus et al. 1998 , 2000 ; Rosengaus and Traniello 2001 ; Traniello et al. 2002 ; Shimizu and Yamaji 2003 ; Maekawa et al. 2005 ; Calleri et al. 2006 ; Wilson-Rich et al. 2007 ; Yanagawa and Shimizu 2007 ). An emerging literature shows that termites, independent of species, benefit from group living when exposed to a variety of infectious agents including entomopathogenic fungi and nematodes. Interspecific differences in behaviors such as allogrooming, known to be associated with the social control of disease, may be significant in determining resistance to infection. Disease has been proposed as an important selective factor in termite evolution ( Rosengaus and Traniello 1993 ; Thorne and Traniello 2003 ). Selection for individual physiological resistance was perhaps influenced more by group living than by ecological variations in exposure to antigens. Calleri et al. ( 2006 ) demonstrated that low genetic heterozygosity reduced the disease resistance of grouped Z. 
angusticollis , but did not appear to negatively affect the immune response of individual termites maintained in isolation. This suggests that social mechanisms of infection resistance may be more significant in termite disease control than individual physiological immunity and its underlying genetic architecture. In other words, socially mediated immunocompetence ( Traniello et al. 2002 ) may have benefits in disease resistance sufficient to relax selection for individual immune function. Research linking ecological heterogeneity in pathogenic pressure, genetic variation in immunity, and direct measurement of in vivo immune response to both inert and viable disease agents is required to further evaluate this hypothesis.
Associate Editor: Robert Jeanne was editor of this paper Termites live in nests that can differ in microbial load and thus vary in degree of disease risk. It was hypothesized that termite investment in immune response would differ among species living in nest environments that vary in the richness and abundance of microbes. Using the drywood termite, Incisitermes schwarzi Banks (Isoptera: Kalotermitidae), as a model for species having low nest and cuticular microbial loads, the susceptibility of individuals and groups to conidia of the entomopathogenic fungus, Metarhizium anisopliae Sorokin (Hypocreales: Clavicipitaceae), was examined. The survivorship of I. schwarzi was compared to that of the dampwood termite, Zootermopsis angusticollis Hagen (Termopsidae), a species with comparatively high microbial loads. The results indicated that I. schwarzi derives benefits from group living similar to those of Z. angusticollis : isolated termites had 5.5 times the hazard ratio of death relative to termites nesting in groups of 25, while termites in groups of 10 did not differ significantly from the groups of 25. The results also indicated, after controlling for the influence of group size and conidia exposure on survivorship, that Z. angusticollis was significantly more susceptible to fungal infection than I. schwarzi , the former having 1.6 times the hazard ratio of death relative to drywood termites. Thus, disease susceptibility and individual investment in immunocompetence may not be dependent on interspecific variation in microbial pressures. The data validate prior studies indicating that sociality has benefits in infection control and suggest that social mechanisms of disease resistance, rather than individual physiological and immunological adaptations, may have been the principal target of selection related to variation in infection risk from microbes in the nest environment of different termite species. Keywords
Acknowledgements We thank the administrators of the Redwood East Bay Regional Park, Pebble Beach Resort and the State of Florida for allowing us to collect termite colonies and two anonymous referees for their helpful comments. We also thank Dr. Rudy Scheffrahn for sharing termite identification resources and Dr. Marc Seid for helping with termite collection. This research was supported by National Science Foundation Grant IBN-0116857 (J. Traniello and R. Rosengaus, PIs).
CC BY
no
2022-01-12 16:13:46
J Insect Sci. 2010 May 8; 10:44
oa_package/09/6f/PMC3014820.tar.gz
PMC3014821
21209813
1. Introduction Nonoperative management (NOM) of splenic trauma is well established in paediatric practice [ 1 ]. Patients who are haemodynamically stable can be safely treated with NOM [ 2 ]; it is associated with a decreased incidence of postsplenectomy sepsis and of complications associated with nontherapeutic laparotomies. A careful history, thorough clinical examination, and radiological investigations guide overall management. However, management of children presenting with multiple injuries can often be challenging, and both clinical and radiological features may be subtle initially. The aim of our report is to highlight some of the subtle radiological features associated with splenic avulsion injury, which may be difficult to interpret, especially in the presence of multiple injuries.
4. Discussion NOM of splenic trauma is well established in paediatric practice [ 1 ]. An avulsion injury to the spleen is rare and can occur following impact at great speed, where the spleen is torn off its pedicle. This results in severe (case I) or uncontrollable (case II) haemorrhage and shock. Splenic injury following blunt abdominal trauma is best diagnosed with CT; intravenous contrast improves the sensitivity of this imaging modality [ 3 ]. In a nontrauma setting, splenic enhancement (56–65 Hounsfield units, HU) is slightly greater than that of the liver (40–60 HU). Berland and VanDyke [ 4 ] scanned ten random patients and found that splenic enhancement was on average 23 HU greater than hepatic enhancement. The splenic parenchyma reaches homogeneous enhancement approximately 1 minute after contrast agent administration, with density values ranging between 75 and 97 HU. Shock can lead to hypoperfusion of the spleen and result in poor enhancement on contrast-enhanced CT scans ( Figure 2 ). However, following trauma, splenic enhancement may be decreased without any parenchymal or vascular injury [ 5 ], making interpretation of this subtle sign difficult. Furthermore, in an avulsion injury the solid organ may appear to be intact radiologically. In both cases above, the spleen appeared intact on both CT scans, and the poorer enhancement of the spleen (30–45 HU) compared to the liver was subtle (Figures 1 and 2 ). In a state of hypovolemia, Detweiler [ 6 ] suggests the development of a hypoperfusion complex that affects the spleen more than the liver. Studies in canines demonstrate that stimulation of splenic nerve fibres caused decreased arterial inflow and increased venous outflow. This was associated with a decrease in splenic weight as blood was expelled from the spleen into the circulation. Anderson [ 5 ] postulated that a similar response might be observed in humans following trauma and hypotension.
Splenic perfusion is likely affected more than hepatic perfusion because hepatic arterial inflow is autoregulated and because the liver has a dual blood supply. Splenic enhancement by less than 30 HU in children (20 HU in adults) is a recognised feature of this hypoperfusion complex [ 4 , 7 ]. Other features of vascular rupture or infarction of the spleen may be suspected from findings such as perisplenic haematoma, splenic enlargement, or rim enhancement. Radiologists may also be able to comment on the patency of the hilar vessels and whether they are intact. The absence of adequate enhancement of the spleen in the portal phase suggests a splenic vascular pedicle injury; the presence of extravasated contrast around the splenic hilum suggests active ongoing bleeding.
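The enhancement thresholds discussed above lend themselves to a tiny worked check. The function below is a hypothetical illustration of that criterion only (the names and structure are ours, and the thresholds are taken from the cited text); it is not a clinical tool:

```python
def poor_splenic_enhancement(enhancement_hu, paediatric=True):
    """Flag the hypoperfusion-complex criterion described in the text:
    splenic enhancement (post-contrast minus pre-contrast attenuation,
    in Hounsfield units) below 30 HU in children or 20 HU in adults.
    Illustrative sketch only -- not a diagnostic tool."""
    threshold = 30 if paediatric else 20
    return enhancement_hu < threshold
```

For instance, an enhancement increment of 25 HU would be flagged in a child but not in an adult, which is the kind of subtle, age-dependent finding that is easy to miss in a multiply injured patient.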
5. Conclusion NOM remains the mainstay of managing splenic injury following blunt abdominal trauma. Radiological features of splenic avulsion can be subtle and difficult to interpret against a background of multiple injuries and persistent hypotension. In the context of blunt abdominal trauma, the density of the spleen on CT, as judged by Hounsfield units (HU), is an indicator of poor splenic perfusion. In the majority of cases, however, the decision to proceed to laparotomy will continue to be based on clinical features of persistent shock despite resuscitation rather than on specific identification of the source of bleeding. Nevertheless, an awareness of the features highlighted above can help refine the decision-making process, inform the surgeon regarding the operative approach, and focus the search for the source of bleeding during a difficult emergency laparotomy.
Academic Editor: Aaron S. Dumont Splenic trauma in children following blunt abdominal injury is usually treated by nonoperative management (NOM). Splenectomy following abdominal trauma is rare in children. NOM is successful as in the majority of instances the injury to the spleen is contained within its capsule or a localised haematoma. Rarely, the spleen may suffer from an avulsion injury that causes severe uncontrollable bleeding and necessitates an emergency laparotomy and splenectomy. We report two cases of children requiring splenectomy following severe blunt abdominal injury. In both instances emergency laparotomy was undertaken for uncontrollable bleeding despite resuscitation. The operating team was unaware of the precise source of bleeding preoperatively. Retrospective review of the computed tomography (CT) scans revealed subtle radiological features that indicate splenic avulsion. We wish to highlight these radiological features of splenic avulsion as they can help to focus management decisions regarding the need/timing for a laparotomy following blunt abdominal trauma in children.
2. Case I An 11-year-old boy was admitted following a head-on collision with a stationary vehicle, after riding downhill on a bicycle without any headgear, travelling at a speed of around 25 miles/hr at the moment of impact. On arrival, his airway was patent and his cervical spine was immobilised in a collar. His pulse rate was 160/min, blood pressure 105/60 mmHg, respiratory rate 28/min, and oxygen saturation 99% on 15 L/min of oxygen. Glasgow Coma Scale at the scene was 4/15, improving to 8/15. Clinically evident injuries included a large degloving injury of the scalp, multiple facial and lip lacerations, comminuted fracture of the right humerus, and fracture of the left femur. He also had decreased air entry on the left side of the chest with some mild bruising on the left chest wall. He was resuscitated with intravenous fluids, and a urinary catheter was inserted, which did not reveal any haematuria. Following elective intubation and ventilation, he underwent trauma series X-rays and contrast-enhanced computed tomography (CT) scans. CT thorax revealed a substantial left-sided pneumothorax, which was relieved by a left chest drain. X-rays of his cervical spine and pelvis were normal. X-rays of his right humerus and femur revealed comminuted fractures of both long bones. CT head revealed normal appearances of the ventricles and basal cisterns with no intracranial haemorrhage; there was a small linear fracture of the left maxillary antrum. Abdominal CT revealed a small amount of free fluid/blood around the liver, with multiple lacerations of the left kidney and a left perinephric and retroperitoneal haematoma. The left renal vein and artery appeared intact. There was no evidence of free intraperitoneal air or bowel wall thickening to suggest bowel trauma.
The liver, spleen, pancreas, gall-bladder, and right kidney appeared normal; however, there was some image degradation artefact secondary to the arms being by the side of the abdomen (Figures 1(a) and 1(b) ). During CT scanning he became haemodynamically unstable and required further resuscitation with boluses of colloid and two units of blood. At this stage an emergency laparotomy was considered but not performed, as he stabilised and was transferred to the intensive care unit (ICU) for observation. His haemoglobin was 11.1 g/dL and platelet count 176. Urea and electrolytes were within the normal range. Within the next 4 hours, following review by orthopaedic surgeons and neurosurgeons, he was taken to theatre for the necessary operative interventions. These included suturing of lacerations, stabilisation of fractures, and insertion of an intracranial pressure (ICP) monitoring bolt. Over the subsequent 24 hours he remained haemodynamically unstable on the ICU, with his haemoglobin dropping to 5.9 g/dL and a platelet count of 21. He received a further two units of blood plus cryoprecipitate. Examination revealed a soft abdomen with no evidence of bruising. A further 24 hours later he was hypotensive with a decreased urine output, and he developed abdominal distension. Following a further blood transfusion, he underwent a repeat contrast-enhanced CT scan of his abdomen. The CT scan revealed intraperitoneal blood, which had increased since the previous scan, mainly in the right and left paracolic gutters, lower abdomen, and pelvis. There was decreased enhancement of the spleen, suggesting a splenic injury (Figures 2(a) and 2(b) ). He underwent an emergency laparotomy, and the spleen was found to be completely avulsed and in two pieces ( Figure 3 ). The spleen was removed and the pedicle transfixed. He made an uneventful recovery and was discharged home on postoperative day 12. 3.
Case II A three-year-old boy was sitting on the handlebars of a motorcycle that was involved in a head-on collision with another motorcycle and suffered blunt trauma to his abdomen. He was admitted to the local hospital's A&E Department. Upon admission his pulse rate was 144/min, blood pressure 105/52 mmHg, and GCS 5/15. He was intubated and ventilated, and subsequently trauma series X-rays and CT scans were performed. Clinical findings of note were an abrasion to his lower anterior abdominal wall and bluish discoloration of his abdomen. The only positive radiological finding of note was the presence of free fluid in his abdomen on the abdominal CT scan. He underwent an emergency laparotomy because of persistent hypotension despite resuscitation. At laparotomy, his spleen was in three pieces and completely avulsed off its pedicle. There was a 30 cm long tear in the small bowel mesentery, with the related gut showing signs of ischaemia. The rest of his bowel and solid organs were normal. He underwent splenectomy, resection of small bowel, and end-to-end anastomosis. He made an uneventful recovery except for an incisional hernia that has since been repaired successfully. Subsequent review of his CT scans revealed subtle features of nonenhancement of the spleen compared to the liver in the portal venous phase, suggestive of splenic pedicle injury. There was active extravasation of contrast around the splenic hilum, suggestive of ongoing active bleeding (Figures 4(a) and 4(b) ). Conflict of Interests There is no conflict of interests declared.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 1; 2010:762493
oa_package/23/6e/PMC3014821.tar.gz
PMC3014822
21209814
2. Discussion PMR and giant cell arteritis (GCA) are closely related vasculitic conditions. GCA involves large- and medium-sized arteries, most commonly the temporal arteries. PMR is characterised by aching and morning stiffness in the shoulder and pelvic girdles and the neck. The two disorders can occur separately or together, and it is postulated that they are different manifestations of the same disease process [ 1 ]. About 50 percent of patients with GCA also have PMR, and about 10 percent of those with PMR also have GCA [ 2 ]. Diagnosis is based on clinical symptoms, raised acute-phase markers, response to glucocorticoids, and exclusion of other disease. Different clinical criteria exist to aid diagnosis: the majority (89 percent) include an elevated ESR [ 3 ]. The gold standard for diagnosis remains temporal artery biopsy. Unusual presentations of GCA include cough, pyrexia of unknown origin, and lower limb claudication [ 4 ]. Neurological manifestations such as mononeuropathy or peripheral neuropathy occur in approximately 30 percent of patients [ 5 ]. Stroke is less common (3-4 percent) [ 6 ]. Where stroke occurs, it may follow a fluctuant course corresponding to the severity of vasculitis [ 7 ]: this may have been the case with this patient. In this instance, diagnosis was complicated further by the patient's normal ESR and CRP. Low levels of ESR (<40) have been reported in up to 5.4% of patients with GCA [ 8 ]; this patient's disease seemed unusually aggressive given the low levels of inflammatory markers. Anticardiolipin antibodies have been associated with GCA and with more severe disease [ 9 ]; however, this patient's thrombophilia screen was negative. Patients with GCA-associated stroke tend to recover gradually with prompt administration of corticosteroids [ 7 ]. A combination of antiplatelet agents and corticosteroids may be advisable for preventing stroke occurrence [ 10 ].
3. Conclusion Stroke is an uncommon but serious complication of GCA. Normal levels of ESR and CRP do not preclude the diagnosis. Temporal artery biopsy should be considered for patients with stroke and symptoms suggestive of PMR.
Academic Editor: J. A. Elefteriades We describe an unusual complication of a common disease: stroke presenting in a man recently diagnosed with polymyalgia rheumatica. Initial inflammatory markers were misleading. We discuss pitfalls in diagnosis, and approach to management.
1. Case History A 63-year-old businessman presented to the emergency department (ED) with transient left arm weakness, expressive dysphasia, left facial droop, and muscle aches. He had attended his general practitioner (GP) one month previously with aches and pains; his GP diagnosed polymyalgia rheumatica (PMR) and commenced steroid treatment, with significant improvement. Before starting steroids, the erythrocyte sedimentation rate (ESR) was mildly elevated at 22 mm/hr; C-reactive protein (CRP) had been modestly elevated (46 mg/L). The patient had stopped steroids two days prior to admission, with relapse of symptoms. His medical history was significant for ocular migraines, angina, an upper limb deep venous thrombosis 25 years ago, and a pulmonary embolism post-angiogram 9 months ago. On examination, reduced power was noted in his left hand, with normal speech, no visual field abnormalities, and no facial droop. Investigations included a normal ESR and CRP. Computed tomography of the brain was normal, but magnetic resonance imaging (MRI) of the brain and C-spine ( Figure 1 ) showed a subacute right posterior parietal infarct with a C3-C4 posterior disc bulge. His left upper limb weakness was not attributed to cervical myelopathy. Ultrasound of the carotids showed no significant stenosis. Antinuclear antibody assay was negative, as were antineutrophil cytoplasmic antibody and rheumatoid factor. A thrombophilia screen was negative. His symptoms were felt to be consistent with PMR and giant cell arteritis with concomitant stroke, and he was treated with high-dose aspirin and oral steroids. A temporal artery biopsy was scheduled. This was delayed secondary to worsening of left arm weakness, a new left upper quadrantanopia, and left-sided neglect. A repeat MRI brain ( Figure 2 ) showed extension of the right middle cerebral infarct. CRP was raised at 41 mg/L with a normal ESR (6 mm/hr).
His steroid treatment was changed to high-dose intravenous methylprednisolone 1 g daily for three days; he subsequently continued on high-dose oral steroids (prednisolone 60 mg daily). A temporal artery biopsy performed two weeks post-admission disclosed a small and fibrotic vessel. Histological examination confirmed giant cell arteritis. The patient was discharged with mild weakness in his left upper limb and minimal functional limitations. He continued on low-dose aspirin and oral steroids. Conflict of Interests None declared. The patient has given his consent for his story and his images to be used in this way. Author Contribution All authors were members of the team who treated the patient. N. Casey prepared the clinical vignette. S. McDermott conducted the literature review and prepared the first draft which was critically reviewed by D. J. Robinson and K. M. Tan. All authors approved the final draft.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 15; 2010:549258
oa_package/2f/f7/PMC3014822.tar.gz
PMC3014823
21209815
1. Introduction External hydrocephalus is a well-established entity in infants which is benign and usually resolves without shunting [ 1 , 2 ]. The term “External Hydrocephalus” has also been used to describe the presence of extraventricular cerebrospinal fluid (CSF) collections accompanied by hydrocephalus, particularly in adults suffering from aneurysmal subarachnoid hemorrhage and severe head injuries [ 3 – 6 ]. Several other terms have been used to describe this entity [ 7 ], which has led to confusion about this disease. However, the fact that this form of hydrocephalus does not have a benign course and in many cases needs surgical management [ 3 , 6 – 9 ] demonstrates the need for a term other than “external hydrocephalus.” The term subdural effusion with hydrocephalus (SDEH) has been used in the literature previously [ 6 , 8 ] and describes more accurately the nature and the severity of this condition, thereby differentiating it from the benign subdural collections of infancy and subdural hygromas. A subdural peritoneal (S-P) shunt or single burr hole drainage is the preferred method of treating subdural hygromas [ 10 ]. This opinion has been challenged: in a retrospective study of 1,601 patients with brain injury, conservative management was proposed for the delayed evolution of posttraumatic subdural hygroma [ 11 ] because of only modest improvement after operation. However, if an SDEH is treated as a simple subdural hygroma, ventricular dilatation supervenes after the S-P shunt placement and the patient will need a ventriculoperitoneal (V-P) shunt thereafter. It is extremely important to differentiate SDEH from other subdural effusions such as hygromas and chronic subdural hematomas, because V-P shunt placement in cases of subdural collections without hydrocephalus will increase the collection and may lead to neurological deterioration.
On the other hand, if an SDEH is regarded as a subdural hygroma, the treatment of the hydrocephalus is delayed, which may lead to a permanent neurological deficit. In addition, the management of subdural effusions may require multiple unsuccessful surgical procedures (burr hole drainage of the subdural collection or S-P shunts). These procedures have only a temporary effect in cases of SDEH, since the real cause of this condition is the hydrocephalus and the communication between the ventricles and the subdural space, which allows the CSF to be diverted outside the ventricles. Any attempt to treat the subdural collection directly, in cases of SDEH, before the permanent management of the hydrocephalus significantly increases the risk of developing a central nervous system (CNS) infection, with subsequent further delay in V-P shunt implantation. Three illustrative cases follow, demonstrating patients who successfully underwent V-P shunt placement (OSV II Smart Valve System, Integra Neurosciences Implants S.A.) for the treatment of a subdural collection with hydrocephalus following a head injury.
3. Discussion Subdural effusions with hydrocephalus (SDEH) in adults have been described after aneurysm rupture and subarachnoid hemorrhage [ 3 , 4 , 6 , 7 ], after neurosurgical procedures [ 3 , 4 , 6 , 8 ], and after severe head injuries [ 3 , 4 ]. 3.1. Pathophysiology The pathophysiological mechanisms of this disorder include free communication between the ventricles and the subdural space due to the rupture of some part of the arachnoid membrane, particularly a basal cistern or lamina terminalis tear, which then allows fluid to flow into this compartment. SDEH occurs when abnormal CSF circulation is combined with communication between the subdural space and the ventricles. The CSF is diverted to the subdural space because the convexity of the brain has less resistance compared to the ependyma of the ventricles, and the formation of the subdural CSF collection requires less pressure than ventricular enlargement [ 8 ]. In addition to the free communication between the ventricles and the subdural space, dysfunction in CSF absorption at the level of the arachnoid granulations is necessary for the development of a CSF subdural effusion with hydrocephalus. Severe head injuries are associated with hydrocephalus because of abnormal CSF circulation due to posttraumatic subarachnoid haemorrhage, arachnoid tear, cranial surgery, and particularly craniectomy [ 12 – 17 ]. According to Kilincer and Hamamcioglu [ 13 ], head trauma itself can cause subdural effusion due to subarachnoid haemorrhage, a ruptured arachnoid tear, and gradual shrinkage of the swollen brain. These authors experienced difficulty similar to ours in treating persistent subdural effusions, and they hypothesized that the effusions might be the result of a “resistance gradient” between the two hemispheres caused by a unilateral large craniectomy. In a large series of 108 consecutive decompressive craniectomies [ 15 ], the incidence of posttraumatic hydrocephalus was 9.3%.
In the same series, 21.3% of the patients had posttraumatic subdural effusion. In this study, the coexistence of the two pathologies in the form of SDEH was not addressed. The primary problem in SDEH is the hydrocephalus, and we agree with the opinion of Yang et al. for surgical intervention “as soon as possible after the diagnosis of hydrocephalus and the exclusion of contraindications”. Aarabi et al. [ 12 ] studied the dynamics of subdural hygromas following decompressive craniectomy (DC). In their series of 68 patients who underwent DC, there were 39 patients who developed hygromas and 29 who did not. The authors concluded that although hygromas commonly (57%) develop after craniectomies, they rarely require surgical intervention since they gradually disappear. However, the hydrocephalus that developed in patients with or without hygroma was treated with CSF diversion. Yang et al. [ 16 ] proposed another interesting explanation of the subdural effusions after treating this complication in a patient with a decompressive craniectomy. They noticed that the patient was still being treated with dehydration despite the fact that the oedema had subsided. Simply rehydrating the patient resolved the collection and the symptoms. 3.2. Difference between Subdural Effusions with Hydrocephalus (SDEH) and a Subdural Hygroma The pathophysiological mechanisms that have been proposed for the formation of the traumatic subdural hygroma involve arachnoid tearing which acts as a one-way valve between the subarachnoid and the subdural space and is usually caused by mild or moderate trauma [ 8 ]. There is also a theory that serum fluid leaks from fenestrations of small vessels on subdural neomembranes with concomitant enlargement of the subdural hygromas [ 10 ]. 3.3.
Methods of Diagnosis Obviously, it is very important to differentiate SDEH from other subdural collections, for example, chronic subdural hematomas (CSDHs) and subdural hygromas, because a V-P shunt is the treatment of choice in SDEH, but in the other cases it will cause an enlargement of the subdural collection and a deterioration of the mass effect. The CT scan reveals, in most cases, dilatation of the lateral ventricles and periventricular lucency when the CSF accumulates in the subdural space [ 6 ]. However, a subdural hygroma cannot be differentiated radiographically from an SDEH before the stage of ventriculomegaly [ 8 ]. Another point is that the composition of the subdural collection can be evaluated from the signal intensity [ 3 ], especially with brain magnetic resonance imaging (MRI). This is very helpful for the diagnosis of a CSDH. The subdural hygroma contains xanthochromic fluid, and its protein content is often higher than that of CSF [ 10 ]. In addition, the subdural hygroma shows meningeal enhancement on gadolinium-diethylenetriamine pentaacetic acid- (Gd-DTPA-) enhanced MRI. Radiological evaluation of SDEH reveals preservation of the ipsilateral sulci and basal cisterns. On the other hand, CSDH and subdural hygromas produce compression of the subarachnoid spaces on the same side as the fluid collection [ 10 ]. Another radiological examination proposed as helpful to differentiate SDEH is CT cisternography [ 6 ], which may show whether the subdural space communicates with the ventricles; however, it has proved to be an inadequate test in the diagnosis of normal pressure hydrocephalus (NPH) [ 18 , 19 ], and there is no indication that it will be useful in investigating complex cases of SDEH. Simple lumbar punctures with removal of 20 cc of CSF may be useful but carry an obvious risk, particularly with regard to subdural hygromas or CSDH.
There might be scope for using acetazolamide in the diagnosis of SDEH if the clinical condition of the patient allows for a delay in the V-P shunt implantation [ 20 ]. It has already been used for the treatment of external hydrocephalus in infants [ 21 ]. This center has used acetazolamide in some cases of extra-axial collections of CSF after craniectomies, with significant but temporary results. The presence of an extra-axial collection with mass effect makes the decision to treat an SDEH with a V-P shunt difficult, and many surgeons would argue that it is better to wait until the subdural collection has been absorbed and the hydrocephalus is established [ 6 ]. However, this practice is not without risk, because shunting at a later stage might not reverse a neurological deficit. Also, dealing with the subdural collection first with a simple burr hole evacuation of the subdural effusion is not without risks; the cause of the SDEH remains untreated, and the patient might develop a CSF leak and subsequently infection, which will further delay the implantation of the V-P shunt. After the diagnosis of SDEH has been made, we would advise treating this condition with a V-P or an L-P shunt [ 6 ]. Another approach to diagnosing these difficult cases of ventriculomegaly is to use CSF dynamics [ 22 ], calculating the resistance to CSF absorption. Both of these tests require CSF removal with a lumbar puncture, which carries a potential risk in the presence of a subdural hematoma or hygroma. Also, a negative tap test does not exclude the diagnosis of hydrocephalus [ 23 , 24 ]. In posttraumatic patients, the ventriculomegaly may be associated with altered CSF dynamics [ 25 ], which might produce dubious results in infusion studies. An alternative to the tests requiring a lumbar puncture is continuous monitoring of intracranial pressure (ICP). This has been proposed by Poca et al.
as a helpful diagnostic method, particularly in complex cases, and it seems that high mean ICP and plateau waves are good prognostic factors for a satisfactory outcome after shunt insertion [ 23 , 25 ]. Recently, Huh et al. [ 7 ] suggested measuring the subdural pressure intraoperatively using a manometer, before opening the dura mater, in patients with subdural collections and ventriculomegaly. They found that four patients with subdural pressures above 15 cm H 2 O and a pediatric patient (2 years old) with a subdural pressure of 12 cm H 2 O eventually required a shunt operation. Both methods will be extremely helpful in managing patients with SDEH because there is no need for a lumbar puncture, which carries the risk of increasing the size of the subdural collection in cases of a misdiagnosed subdural hygroma. 3.4. Treatment In patients with SDEH after a severe head injury or intracranial aneurysm clipping, the placement of a V-P shunt may be sufficient to treat both the subdural effusion and the hydrocephalus and subsequently improve the clinical symptoms. The V-P shunt drains the hydrocephalus, which is the cause of this entity, and as a result it prevents the diversion of CSF to the subdural space. Yang et al. [ 15 ] have suggested that duraplasty could prevent the alteration in CSF dynamics after a craniectomy. In their series, duraplasty decreased the incidence of subdural effusion. Although the authors did not mention any difference in the incidence of hydrocephalus, duraplasty is a measure to avoid disturbance of the CSF circulation and to facilitate the future cranioplasty by protecting the brain tissue during dissection. According to Kilincer and Hamamcioglu [ 13 ], acknowledging the “resistance gradient” caused by a large unilateral craniectomy is important in order to decrease complication rates by applying modifications to the surgical technique.
Duraplasty is a simple technique which might prevent subdural effusions and decrease complication rates after craniectomies. Another simple measure to correct the pressure gradient is bandaging the head after the peak time of cerebral swelling to avoid brain herniation [ 16 ]. Early cranioplasty has also been proposed [ 15 – 17 , 26 – 29 ] for correction of CSF hydrodynamics after decompressive craniectomy, particularly in cases of the “syndrome of the trephine”. Delayed cranioplasty was correlated with persistent hydrocephalus in a retrospective study of ten patients who underwent decompressive hemicraniectomy for ischemic or hemorrhagic stroke [ 17 ], and based on this observation, the authors suggested that early cranioplasty might promote spontaneous improvement of hydrocephalus. Although this is a different patient group from our cases (there were no trauma patients in the Waziri et al. study), this observation is important, and we would agree that early cranioplasty is appropriate provided that there are no contraindications. The two patients in our study who had a craniectomy before developing SDEH (Cases 2 and 3 ) were not medically fit for an early cranioplasty. When treating a patient with a subdural effusion, it is important to consider whether there is accompanying hydrocephalus. Apart from the radiological evaluation, clinical tests including measurement of the subdural pressure are recommended whenever there is a suspicion of an SDEH. Although our patients responded to V-P shunt placement, in persistent cases there might be an indication to proceed to an S-P shunt, either in parallel or connected to the valve of the V-P shunt.
Academic Editor: Aaron S. Dumont Background . Subdural collections of cerebrospinal fluid (CSF) with associated hydrocephalus have been described by several different and sometimes inaccurate terms. It has been proposed that a subdural effusion with hydrocephalus (SDEH) can be treated effectively with a ventriculoperitoneal shunt (V-P shunt). In this study, we present our experience treating patients with SDEH without directly treating the subdural collection. Methods . We treated three patients with subdural effusions and hydrocephalus as a result of a head injury. All the patients were treated with a V-P shunt despite the fact that there was an extra-axial CSF collection with midline shift. Results . In all of the patients, the subdural effusions subsided and the ventricular dilatation improved in the postoperative period. The final clinical outcome remains difficult to predict and depends not only on the successful CSF diversion but also on the primary and secondary brain insult. Conclusion . Subdural effusions with hydrocephalus can be safely and effectively treated with V-P shunting, without directly treating the subdural effusion which subsides along with the treatment of hydrocephalus. However, it is extremely important to make an accurate diagnosis of an SDEH and differentiate this condition from other subdural collections which require different management.
2. Case Presentations Case 1 The first patient was a 67-year-old man who was transferred to the Accident and Emergency Department disorientated, with a Glasgow Coma Score (GCS) of 10/15 (global aphasia, not obeying commands) following a moderate head injury, which he sustained in a road traffic accident. A computerized tomography (CT) scan revealed posttraumatic subarachnoid hemorrhage and right frontal petechial contusions which did not require evacuation. The patient was completely aphasic for the first 72 hours after the injury. A subsequent CT scan revealed bilateral subdural hypointense collections, although the patient remained neurologically stable. However, on day 4 after the injury he deteriorated gradually, and a new CT scan revealed enlargement of the left subdural collection with dilatation of the ventricles (Figures 1(a) and 1(b) ). A lumbar puncture was performed, after which the patient initially improved, but unfortunately he deteriorated again. The decision was made to treat this condition as an SDEH, and as a result, a V-P shunt was inserted. On the day that the operation was performed, the patient had deteriorated to a GCS of 8/15. In the immediate postoperative period, he improved gradually, and 7 days after the operation, he regained speech and began to mobilise. A followup CT scan (Figures 1(c) and 1(d) ) three months after the operation showed normalization of the ventricles, and the subdural collection was almost absent. Furthermore, the patient's symptoms had completely resolved, and the outcome was classified as “good recovery” using the Glasgow Outcome Scale (GOS = 5). Case 2 The second case was a 47-year-old man who sustained a severe head injury following a road traffic accident. He was intubated and ventilated before being transferred to the Accident and Emergency Department. His initial GCS was 5/15.
A head CT scan demonstrated a subdural hematoma in the left frontotemporoparietal region, hemorrhagic contusions in the same area with mass effect, and evidence of a posttraumatic subarachnoid hemorrhage. He was immediately transferred to theatre, where he underwent a left frontotemporoparietal craniectomy, because of the cerebral edema, and removal of the subdural hematoma. Before the operation, the left pupil was fixed and dilated, he had right hemiplegia, and he was extending to pain. A followup CT scan showed complete removal of the hematoma, but subsequent scans revealed a subdural CSF collection with ventricular dilatation ( Figure 2(a) ). Due to brain herniation, he developed a left posterior cerebral artery (PCA) infarct. Initially, attempts were made to remove the subdural collection with a burr hole, but this procedure was unsuccessful as the collection recurred. The condition was treated as an SDEH, and a V-P shunt was inserted. The subdural effusion disappeared, and the hydrocephalus was successfully treated ( Figure 2(b) ). He underwent a cranioplasty 6 months after the V-P shunt was inserted. However, he remains severely disabled, obeying simple commands and opening his eyes spontaneously (GOS: 3). Case 3 The last case was a 69-year-old man who was intubated and ventilated before being transferred to the Accident and Emergency Department from a district hospital, following a road traffic accident. His initial GCS at the scene was 5/15. A CT scan revealed a subdural hematoma in the left frontotemporoparietal region, a large hemorrhagic contusion in the left temporal lobe with mass effect, and a posttraumatic subarachnoid hemorrhage. He underwent a left frontotemporoparietal craniectomy and removal of the subdural hematoma. The patient remained intubated, and the first followup CT scan showed that the hematoma had been successfully removed ( Figure 3(a) ).
Twenty-four days later, a CT scan revealed ventricular dilatation with a right subdural effusion and widening of the interhemispheric fissure (Figures 3(b) and 3(c) ). Serial lumbar punctures were performed because a V-P shunt could not be inserted due to serious infections. After a long period of antibiotic treatment, the patient underwent V-P shunt placement. The postoperative CT scan revealed normalization of the ventricular size and absence of the right frontal and interhemispheric subdural effusion ( Figure 3(d) ). The patient still remains severely disabled (GOS: 3).
Acknowledgment Mr. N. Tzerakis would like to acknowledge Professor Philip Tsitsopoulos, Dr. Parmenion Tsitsopoulos, and Mr. Julian Cahill for their useful remarks and suggestions regarding the content of the paper.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 12; 2010:743784
oa_package/63/fe/PMC3014823.tar.gz
PMC3014824
21209816
1. Introduction The use of cervical spine radiographs in the investigation of suspected foreign body ingestion remains a contested issue amongst ENT surgeons, radiologists, and accident and emergency doctors alike. No clear consensus has been reached, with many physicians and surgeons still advocating discharge from hospital without cervical spine radiographs despite a positive history of foreign body ingestion but negative findings on flexible endoscopy and absent clinical signs. We report an unusual case of a sewing needle lodged in the posterior pharynx following an atypical preceding history. To our knowledge, this is the first reported English-language case involving the posterior pharyngeal wall. This case is relevant to doctors across different medical and surgical specialities. It illustrates the potential consequences of a failure to correctly identify foreign body ingestion and the importance of imaging when there is a history of foreign body ingestion, despite the absence of specific clinical signs.
3. Discussion Foreign bodies lodged in the pharynx are not uncommon findings in the accident and emergency setting. Frequently, these can be attributed to fish bone ingestion [ 1 ]. Patients with a suspected swallowed foreign body tend to present with mild throat discomfort and dysphagia, progressing to odynophagia, dyspnoea, and surgical emphysema in more severe cases. Our particular case warrants analysis for a number of reasons. The first is the unusual preceding history. To date, it remains unclear how the needle became lodged in the posterior pharyngeal wall. Given the history, one might ascribe the symptoms of foreign body retention to the chicken bones from the meal consumed. However, it is imperative that assumptions about the nature of the foreign body are not made. Bones are commonly found by flexible endoscopy and, depending on their location, can often be removed without the need for general anaesthesia [ 1 , 2 ]. In this case, flexible endoscopy merely revealed mild erythema but no obvious foreign body. The needle was only identified on posteroanterior and lateral soft tissue cervical spine radiographs. Chicken and fish bones can often be missed on plain films, especially when the bone is lodged in an area of high soft tissue overlap [ 3 ], and sometimes they are radiolucent. Hence, if clinical findings are negative, there is a temptation to discharge the patient, especially if their symptoms are mild and nonspecific, but the consequences of missing a foreign body are potentially life-threatening. (Of note, the effective radiation doses of anteroposterior and lateral cervical spine radiographs are 0.12 and 0.02 mSv, respectively. This compares favourably to a routine chest radiograph, with a radiation dose of between 0.06 and 0.25 mSv [ 4 ].) Complications of foreign body retention are numerous and depend upon the nature of the foreign body involved, its location, and ultimately the duration of impaction [ 5 , 6 ].
If swift action is not taken, one risks oesophageal perforation, which can lead to fistula and abscess formation. According to one study, the sharper the foreign body, the earlier the risk of perforation [ 7 ]. Rare case reports have even described swallowed metal pins migrating to the superior mediastinum, ultimately requiring a median sternotomy for retrieval. Various ENT and emergency department studies have investigated the necessity for radiographic evaluation of suspected foreign body ingestion in the absence of obvious clinical signs. One such study by Marais et al. (1995) noted that radiography only correctly identified 38.3% of all foreign bodies, with over one quarter of the patient population having a false positive diagnosis [ 8 ]. This was further backed up by Evans et al. (1992), who stated that plain radiography had a sensitivity of just 25.3% and that routine radiography for suspected fish bone impaction, as was the case in our patient, ought to be abandoned [ 9 ]. Neither study, though, takes into consideration variability between interpreting clinicians. Interestingly, Karnwal et al. (2008) looked at just this point and found that emergency department and ENT doctors missed almost 80% and 67%, respectively, of all positive findings on radiography, with lateral neck X-rays helping in over 50% of all patients with foreign body ingestion [ 10 ]. They advocate greater radiology training for all junior ENT and emergency department doctors in recognising foreign bodies on lateral neck radiographs. In conclusion, a high index of suspicion must always remain in any patient presenting acutely with a history of foreign body ingestion, even in the absence of specific clinical signs. The minimal radiation exposure from cervical spine radiographs is an acceptable risk, but the consequences of incorrectly discharging patients are potentially life-threatening.
If there is no obvious foreign body visible on flexible endoscopy, we recommend imaging initially with radiographs, subsequently with CT, and if necessary, endoscopy under a general anaesthetic.
Academic Editor: Ingo W. Husstedt Foreign body ingestion is a frequent presenting complaint to most emergency departments, but a sewing needle lodged in the posterior pharynx is a particularly rare finding. We report a case of a male patient with a sewing needle lodged in the posterior pharynx despite a history suggestive of chicken bone ingestion, absent clinical features, and a negative flexible endoscopic examination. The needle was only identified through cervical spine radiographs. Even subsequent pharyngoscopy, laryngoscopy, and upper oesophagoscopy all proved to be unremarkable, with the patient eventually requiring a left neck exploration to remove the needle. The case outlines the importance of simple radiography in suspected foreign body ingestion, even though clinical and endoscopic findings may be unremarkable.
2. Case A 49-year-old Nigerian male presented to the emergency department of St Mary's Hospital, London, after experiencing sudden left-sided throat pain while eating chicken. There was mild dysphagia and odynophagia but no dyspnoea. His past medical history was unremarkable. On examination, the patient was apyrexial but distinctly hypertensive (190/112 mmHg). Observations were otherwise within normal limits. He had a full range of neck movements, and there was no obvious external neck swelling palpable. However, some tenderness was elicited just lateral to the thyroid cartilage in the left anterior triangle. Flexible nasendoscopy and laryngoscopy revealed only mild erythema over the posterior wall but no foreign body was seen. Specifically, there was no pooling of saliva in the piriform fossae, and the patient was still able to eat and drink. Despite the relatively unremarkable examination, postero-anterior (PA) and lateral soft tissue cervical spine radiographs were requested for completeness. To our surprise, these revealed a sewing needle, measuring 34.5 mm, lodged in the soft tissues of the posterior pharyngeal wall between C4–C6. The eye of the needle was clearly visible on magnification of the images. A CT scan was ordered to further delineate the needle's location, in view of its apparent proximity to large vessels. The patient was placed nil by mouth and given intravenous fluids. He was additionally reviewed by a cardiologist for his hypertension, with an echocardiogram and ECG revealing left ventricular hypertrophy. The patient was thus commenced on nifedipine. The patient was taken to theatre and underwent direct pharyngoscopy, laryngoscopy, and upper oesophagoscopy. These were all unremarkable. Consequently, left neck exploration and foreign body removal were undertaken. A left-sided skin crease neck incision was made, and dissection continued identifying the left internal jugular vein and carotid artery in the process. 
With the pharynx exposed, the needle was located penetrating the lateral part of the posterior pharyngeal wall and was easily extracted. There was no evidence of a residual perforation once it was removed. The patient's postoperative course was uneventful: he commenced on sterile water the following day and was discharged on the second postoperative day with a course of prophylactic antibiotics, eating and drinking normally.
Disclosure Dr. S. N. Unadkat wrote the manuscript, Mr. R. Talwar performed the acquisition and analysis of data, and Mr. N. Tolley was responsible for the revision and approval of the final draft of the paper. Please note that the case was presented at the XIXth World Congress of Otorhinolaryngology (IFOS), São Paulo, Brazil, 1–5 June 2009.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 8; 2010:608343
oa_package/5e/fe/PMC3014824.tar.gz
PMC3014830
21209817
1. Introduction Delayed posthypoxic encephalopathy (DPHE) is a rare clinical entity characterized by delayed neurological deficits seen after an initial hypoxic-ischemic insult. It is commonly described in relation to carbon monoxide (CO) poisoning, with a prevalence of 0.06%–2.8% and equal sex predilection [ 1 ]. Prognosis can be variable, ranging from complete recovery to death. Methadone-induced DPHE is very rare and has been reported in only 5 cases so far: 4 children and 1 adult [ 2 – 6 ]. We report the clinical, radiological, and pathological findings of methadone-induced DPHE in a 38-year-old man who was successfully treated with methylprednisolone.
3. Discussion DPHE is characterized by a delayed onset of neurological deterioration after hypoxic-ischemic brain injury [ 1 ]. The diagnosis of DPHE is based on a typical clinical and radiological presentation, after excluding other conditions that may mimic DPHE [ 7 ]. MRI or CT brain studies in DPHE show diffuse periventricular white matter changes [ 8 ]. Classic neuropathological findings in DPHE associated with CO poisoning are characterized by diffuse symmetrical demyelination in white matter with preserved axons, U fibers, and perivascular white matter [ 9 ]. Although brain MRI in our patient showed diffuse white matter lesions, histopathology surprisingly showed axonal injury with intact myelin structure. The pathological findings in our patient are similar to those seen in toxic leukoencephalopathy, including small vacuoles in the white matter, axonal injury, diffuse reactive astrogliosis, and microglial proliferation [ 10 ]. The difference in pathological changes in our patient indicates that methadone-induced DPHE may differ in underlying pathophysiology from CO-induced DPHE, and that DPHE may represent a spectrum of disorders rather than a single clinical entity. Even though impaired oligodendroglial function [ 11 ], reduced arylsulphatase A activity [ 12 ], altered immune response [ 13 ], and mitochondrial dysfunction [ 14 ] have been postulated in the etiology of DPHE, the underlying pathophysiology is unknown. In our patient, CSF MBP increased with time (2.8 ng/mL at initial presentation to 4.4 ng/mL at second admission), which may represent slow progressive axonal destruction. The delayed neurological symptoms after the initial insult may be explained by this slow axonal destruction. Methadone-induced DPHE may be related to mitochondrial dysfunction as described in heroin-induced leukoencephalopathy. 
Mitochondrial dysfunction causing opioid-related leukoencephalopathy has been suggested by electron microscopy, an elevated lactate peak on magnetic resonance spectroscopy, and the clinical improvement following antioxidant therapy in a few patients [ 14 , 15 ]. There are no available controlled treatment trials for DPHE. Steroids have occasionally been used, with some improvement, in the treatment of DPHE in children [ 2 , 4 ]. In an animal study, dexamethasone was found to be effective in preventing histological brain damage and learning and memory impairment caused by hypoxic-ischemic insult [ 16 ]. An autoimmune response against MBP induced by aldehydes from lipid peroxidation [ 13 ] may play an important role in the pathophysiology of DPHE, which may explain the success of methylprednisolone in our patient. Amantadine 100 mg twice a day may be tried if the patient does not respond to intravenous steroids or has prominent features of apathy and abulia [ 3 ]. High-dose vitamin C, vitamin E, and coenzyme Q10 may be tried for their possible antioxidant effects and role in mitochondrial disorders [ 12 , 17 ]. Early and continued vigorous rehabilitation plays a vital role in the management of these patients, as seen in our patient [ 14 ]. The rates of recovery and permanent neurological impairment from DPHE vary from series to series. Choi reported 75% full recovery in 1 year, and Shillito and Drinker reported only 50% full recovery within 2 years [ 1 , 18 ]. Follow-up MRI at 2 months was unchanged in our patient despite clinical improvement, in keeping with a previous report of clinical improvement preceding radiological improvement, which may take up to 9 months [ 8 ]. As our patient was doing extremely well at 2-year follow-up, we did not repeat his brain MRI, although it would have been interesting to follow the progression of the white matter lesions. 
The usage of methadone has increased significantly during the last decade, with methadone-related deaths increasing 390% from 1999 to 2004 [ 19 ]. However, despite the widespread use of methadone and unintentional overdoses on methadone, only 5 cases of methadone-induced DPHE have been reported to date [ 2 – 6 ], suggesting that it is either uncommon or often goes unrecognized or unreported. Of the 5 cases, 4 were reported in children and only 1 in a 24-year-old male. We expect that more DPHE cases will be seen with the widespread usage of methadone in pain clinics. Neurobehavioral changes may be the initial manifestation of DPHE. Clinicians, especially psychiatrists, ER clinicians, and neurologists, should be fully aware of this entity to avoid exposing the patient to extensive invasive diagnostic procedures. The diagnosis of DPHE can be made based on the typical clinical presentation and neuroimaging findings without brain biopsy. Patients with DPHE should be considered for a trial of steroids with high doses of antioxidants in order to hasten clinical recovery.
Academic Editor: Marie-Cécile Nassogne Objective . To describe the clinical, radiological, and pathological findings in a patient with methadone-induced delayed posthypoxic encephalopathy (DPHE). Case Report . A 38-year-old man was found unconscious for an unknown duration after methadone and diazepam ingestion. His initial vitals were temperature 104 degrees Fahrenheit, heart rate 148/minute, respiratory rate 50/minute, and blood pressure 107/72 mmHg. He developed renal failure, rhabdomyolysis, and elevated liver enzymes, which resolved completely in 6 days. Two weeks after discharge he had progressive deterioration of his cognitive, behavioral, and neurological function. Brain MRI showed diffuse abnormal T2 signal in the corona radiata, centrum semiovale, and subcortical white matter throughout all lobes. Extensive workup was negative for any metabolic, infectious, or autoimmune disorder. Brain biopsy showed significant axonal injury in the white matter. He was treated successfully with a combination of steroids and antioxidants. Follow-up at 2 years showed no residual deficits. Conclusion . Our observation suggests that patients on methadone therapy should be monitored for any neurological or psychiatric symptoms; in suspected cases brain MRI may help to make the diagnosis of DPHE. A trial of steroids and antioxidants may be considered in these patients.
2. Case Report A 38-year-old right-handed Caucasian male computer engineer presented to the Kansas University Medical Center emergency department (ED) after being unconscious for an unknown duration. His history was notable for prior back and neck pain, for which he had taken methadone and diazepam prescribed for his father. He had a long history of alcohol abuse, smoking, and hypertension. He had no drug allergies. Family history was notable for a father with schizophrenia, depression, and stimulant abuse. He was unresponsive and comatose on initial examination. His pupils were small (2 mm) and reactive. All four extremities moved in response to painful stimuli, and plantar responses were bilaterally downgoing. He had right basal crackles. His initial vital signs showed a temperature of 104 degrees Fahrenheit, heart rate of 148 beats/minute, respiratory rate of 50/minute, and blood pressure of 107/72 mmHg. The patient did not undergo any cardiopulmonary resuscitation and was intubated for airway protection. Postintubation ABG showed pH 7.4 (reference range (RR): 7.35–7.45), pCO2 37 (RR: 35–45 mmHg), pO2 110 (RR: 85–100 mmHg), and bicarbonate 23 (RR: 22–26 mEq/L). His blood pressure dropped (78/50 mmHg), requiring intravenous dopamine. Initial blood tests revealed a white blood cell count of 13.1 (RR: 4.5–11.0 K/µL), aspartate transaminase of 233 (RR: 7–40 IU/L), alanine transaminase of 88 (RR: 7–56 IU/L), troponin of 3.98 (RR: 0.0–0.05 ng/mL), and a remarkably elevated CPK of 3339 (RR: 22–198 IU/L). The patient was in acute renal failure with BUN 13 (RR: 8–20 mg/dL), creatinine of 3.2 (RR: 0.4–1.24 mg/dL), and potassium of 6.6 (RR: 3.5–5.1 mmol/L). Urine toxicological screen was positive for methadone and benzodiazepines, and negative for amphetamines, barbiturates, cannabinoids, cocaine, and phencyclidine. 
His initial CSF exam showed protein 37.2 (RR: 15–40 mg/dL), glucose 81 (RR: 40–75 mg/dL), white cells 3 (RR: 0–5/µL), red cells 1 (RR: <1/µL), myelin basic protein (MBP) 2.8 (RR: <1.5 ng/mL), negative cultures, and no oligoclonal bands. A noncontrast CT scan of the head showed loss of distinction between grey and white matter. The patient did not have an initial MRI. Coronary angiogram revealed no abnormalities. Despite no growth on cultures, he was empirically treated for possible aspiration pneumonia with intravenous antibiotics. He received kayexalate and intravascular hydration along with sodium bicarbonate for acute renal failure. His condition stabilized, and he was extubated after 2 days. He gradually recovered over the next 4 days and was subsequently discharged home after 6 days of hospitalization. At discharge, he had no neurological deficits, and he was able to return to his previous profession as a computer engineer. However, 2 weeks after discharge, his cognitive and behavioral function began to deteriorate. He was brought to the ED 34 days from initial presentation for subacute onset of difficulties with short-term memory, confusion, impairment of executive function, and lack of motivation. He was admitted to the psychiatry ward. At the time of the second admission, he was alert and oriented to person only. He was able to follow one-step commands. Cranial nerves were intact. He was moving all his extremities spontaneously. He had positive grasp and palmomental reflexes and was hyperreflexic with bilateral extensor plantar responses. He continued to deteriorate over the next few days to the point that he was totally dependent on others for all activities of daily living. CSF cytology, protein, cell counts, and glucose were normal. Oligoclonal bands were not detected. MBP was 4.4 (RR: <1.5 ng/mL). CSF 14-3-3 protein was negative. 
Urine sulfatide, peripheral leukocyte arylsulfatase A assay, serum long-chain fatty acids, ELISA for HIV, and serum levels of vitamin B12, methylmalonic acid, folate, TSH, vitamin E, and thiamine were all within normal range. The first EEG, done 46 days from initial presentation, showed symmetric bihemispheric dysfunction; a second EEG done on day 60 showed a well-developed dominant rhythm of 10 hertz present mainly in the left hemisphere, marked depression of alpha, and medium-amplitude 2-3 hertz waves in the right frontotemporal regions, suggesting possible dysfunction of the right hemisphere. There was no indication of epileptic discharges. MRI of the brain revealed diffuse abnormal T2 signal in the corona radiata, centrum semiovale, and subcortical white matter throughout all lobes. There was no abnormal contrast enhancement and no foci of restricted diffusion ( Figure 1 ). An initial brain biopsy was done 50 days from initial presentation to evaluate for hypoxic encephalopathy, toxic leukoencephalopathy, Creutzfeldt-Jakob disease, and metachromatic leukodystrophy. It was nondiagnostic, as there was no white matter in the biopsy sample. A repeat biopsy of the right frontal lobe 12 days later revealed white matter changes including small vacuoles, axonal injury, diffuse reactive astrogliosis, and microglial proliferation. Luxol fast blue myelin stains revealed well-stained and preserved myelin. Immunohistochemical stains revealed immunoreactivity of axons in the white matter to β-APP and NF. White matter also showed strong immunoreactivity with CD68 and GFAP ( Figure 2 ). A working diagnosis of DPHE was based on the clinical history and neuroimaging findings. Fifty-six days from initial presentation, he was treated with intravenous methylprednisolone 500 mg twice daily for 5 days, amantadine 100 mg twice a day, vitamin C 500 mg twice a day, and vitamin E 800 mg three times a day. 
Following 5 days of therapy, the patient was more interactive and followed simple commands such as "open your mouth" and "touch the examiner's finger." He recognized his wife and regained some long-term memories. He could again feed himself and used more words to interact and speak. After 1 week, methylphenidate was added to increase his participation in rehabilitation. Within 3 days he began having visual hallucinations consisting of spider webs and spaceships around him. Both amantadine and methylphenidate were stopped, and the visual hallucinations disappeared. He was able to perform his activities of daily living with verbal and procedural cueing. He was started on a slow taper of prednisone from 80 mg, with decrements of 20 mg every 2 weeks. He continued to show significant improvement during two weeks of inpatient rehabilitation therapy. At the time of discharge, he was independent in performing all activities of daily living and returned to his job as a computer engineer. A repeat MRI after 2 months was unchanged from his initial MRI ( Figure 1 ). At 6-month follow-up after hospital discharge, neuropsychiatric testing revealed no deficits. He was doing well at his 2-year follow-up clinic visit.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 9; 2010:716494
oa_package/5e/7d/PMC3014830.tar.gz
PMC3014831
21209818
1. Introduction Primary sclerosing cholangitis is a rare cause of chronic cholestasis in adults, with a prevalence of about 1–5 per 100,000 in Caucasian populations. There is a close association with inflammatory bowel disease. The clinical course, though variable and unpredictable, is slowly progressive and develops into biliary cirrhosis and its corresponding complications. Therapeutic measures aim at improving bile flow to prevent the progression of biliary obstruction, and liver transplantation is the treatment of choice in the advanced stage of the disease.
3. Discussion The diagnosis of primary sclerosing cholangitis (PSC) in this patient was established by the biochemical profile of chronic cholestasis, typical strictures and pruning of the biliary tree on cholangiography, and ring fibrosis around the bile ducts on liver biopsy. The coexisting iron deficiency anemia should raise suspicion of coexisting inflammatory bowel disease, which was confirmed by colonoscopy in our case. First reported by Dr. K. Delbet in 1924 as a constellation of symptoms including pruritus and a cholestatic liver pattern, PSC is characterized pathologically by progressive, fibrous-stenosing and obliterating, predominantly segmental inflammation of the intra- and extrahepatic bile ducts and preferentially affects males, with a peak age of around 25–45 [ 1 ]. About 80% of patients show both intrahepatic and extrahepatic involvement; 20% show only extrahepatic involvement [ 2 ]. It is likely an immune-mediated disease with a wide range of autoantibodies detected, of which p-ANCA, the most prevalent and the one detected in our patient, occurs in 80% of patients. Although it does not correlate with the activity of PSC, it may draw attention to colon involvement [ 3 ]. For diagnostic imaging, the typical findings of ERCP and MRCP in PSC are pearl-string-like changes of the bile ducts with intermittent, diffusely distributed, multiple, and irregular strictures of different lengths. Nowadays MRCP has replaced ERCP in the diagnosis of PSC because it is noninvasive with high sensitivity and specificity (both greater than 80%), whereas ERCP can lead to potentially serious complications such as pancreatitis and bacterial cholangitis [ 4 ]. As MRCP is not easily available in our centre, ERCP was performed instead; the typical strictures and pruning of the biliary tree were demonstrated, and two stones were found concurrently and removed without complication in the same session. 
The liver biopsy in our case revealed the typical features of PSC. In fact, a liver biopsy is not required in the presence of typical cholangiographic features of PSC unless small-duct PSC is suspected, because in that variant the disease localizes to the intrahepatic ducts [ 5 ]. It was performed in our case because the patient was young with quite advanced laboratory parameters and had elevated serum aminotransferases; thus accurate staging of the disease and exclusion of a PSC-autoimmune hepatitis (PSC-AIH) overlap were warranted. A clinical entity called autoimmune pancreatitis-associated sclerosing cholangitis (AIP-SC), characterized by a lymphoplasmacytic infiltrate around the pancreatic duct and elevated serum IgG4, can cause stricturing of the intrahepatic and extrahepatic bile ducts similar to that present in PSC. In contrast to PSC, both PSC-AIH and AIP-SC are responsive to corticosteroids [ 6 ]. PSC is strongly associated with inflammatory bowel disease (IBD), with up to 80% associated with ulcerative colitis (UC) and only 10% with Crohn's disease, the latter usually diagnosed before PSC [ 7 ]. As IBD in PSC may be asymptomatic, any newly diagnosed PSC patient should have a full colonoscopy with biopsies. Although IBD may be diagnosed at any time during the course of PSC, a concomitant presentation like that in our case is not typical [ 8 ]. There are several clinical and endoscopic features of IBD in PSC distinctive from ordinary IBD: the former usually shows more pancolitis, rectal sparing, and backwash ileitis. These features were present in our patient. In addition, IBD in PSC more frequently has a quiescent and prolonged subclinical course [ 9 , 10 ]. The medical therapy does not differ from that of IBD without PSC. The indications for urgent surgery are acute severe colitis not responding to medical treatment, toxic dilatation, perforation, or hemorrhage, while the indications for elective surgery fall into two groups: failure of medical therapy and dysplastic/malignant change in the colon. 
Proctocolectomy with ileal pouch-anal anastomosis is now the procedure of choice because it has the advantage of both removing the diseased colon and avoiding a permanent ileostomy. Patients with UC and PSC are at higher risk of colorectal neoplasia compared with those with UC alone, with an odds ratio of 4.79 and a predilection for right-sided distribution [ 11 ]. Thus, surveillance colonoscopy with biopsies at up to two-year intervals is recommended. There is no effective medical treatment of PSC; the routine use of ursodeoxycholic acid (UDCA) is not recommended because of unclear benefit and possible serious adverse effects at high dose (28–30 mg/kg/day) [ 12 ]. Treatment with corticosteroids and other immunosuppressant agents does not show any beneficial effect. However, our patient received UDCA at a dose of around 15 mg/kg/day, and his serum liver ductal enzymes seemed to be improving, though the long-term effect needs to be observed. The management of our patient should also include management of potential complications, including portal hypertension, dominant strictures of the bile ducts, gallbladder diseases, cholangiocarcinoma, and colorectal neoplasia. Concerning portal hypertension, about 40% of newly diagnosed cases have esophageal varices, and their management does not differ from that of non-PSC patients [ 13 ]. A dominant stricture in PSC is defined as a stenosis with a diameter of less than 1.5 mm in the common bile duct or of 1 mm in the intrahepatic ducts [ 14 ]. It occurs in about 50% of patients during follow-up, and the common presentations are jaundice, pruritus, right upper quadrant pain, and elevated serum bilirubin. The management is to relieve biliary obstruction by endoscopic balloon dilatation with or without stent placement, preceded by brush cytology and biopsy at the stricture site to exclude a superimposed malignancy [ 15 ]. 
In an early case series, gallbladder abnormalities were frequently observed, including gallstones (26%), PSC involving the gallbladder (15%), and neoplasm of the gallbladder (4%) [ 16 ]. Therefore, an annual ultrasound of the biliary system is recommended to detect mass lesions in the gallbladder. In our case, the coexisting common bile duct stones might have formed in situ or migrated from the gallbladder. Lastly, PSC is a risk factor for cholangiocarcinoma, with a 10% ten-year cumulative incidence [ 17 ]. It is a difficult task to distinguish it from a benign stricture. In the absence of a single diagnostic test, the diagnosis of cholangiocarcinoma relies on the following features: a contrast-enhancing mass lesion on imaging, positive biopsy/cytology, or highly elevated CA 19-9 in cases with borderline imaging and histology results. For early-stage cholangiocarcinoma, surgical resection is indicated in patients with good liver function, while liver transplantation following neoadjuvant therapy is an option in cases of poor liver reserve [ 18 ]. The natural course of PSC depends on the stage at the time of diagnosis, and the survival rate is around 60% after 6 years [ 19 ]. In the absence of effective treatment, several prognostic models exist to predict the clinical outcome, such as the Mayo score, which has been shown to be useful in predicting the clinical course, particularly in early-stage PSC; this score includes age, bilirubin, serum aminotransferase, albumin, and history of variceal bleeding [ 20 ]. Liver transplant indications for patients with PSC include liver failure and several unique indications such as intractable pruritus, recurrent bacterial cholangitis, and cholangiocarcinoma. The appropriate time for referral for liver transplantation includes one of the following: a Child-Pugh score of seven, a model for end-stage liver disease (MELD) score of 10, or any complication of portal hypertension. 
Overall, the results of liver transplantation are good, with 10-year survival rates of 70% [ 21 , 22 ]. The risk of developing colonic neoplasia in ulcerative colitis persists after transplantation, and therefore annual colonoscopic surveillance is still warranted. Our patient has been regularly followed up in stable condition and is planned for regular ultrasound imaging of the hepatobiliary system and colonoscopic surveillance.
Academic Editor: Yolanda T. Becker Primary sclerosing cholangitis is a rare cause of cholestasis caused by progressive inflammation and fibrosis of both intrahepatic and extrahepatic bile ducts leading to multifocal ductal strictures. Herein, we report a case of primary sclerosing cholangitis and inflammatory bowel disease. The concomitant diagnosis of these two diseases is not typical. The management includes the treatment of inflammatory bowel disease and potential complications of primary sclerosing cholangitis, including dominant strictures of bile duct, portal hypertension, gallbladder diseases, cholangiocarcinoma, and colonoscopic surveillance.
2. Case Report A 31-year-old man was admitted to the hospital because of hypochromic microcytic anemia. He had had chronic nonspecific epigastric pain for the past six months, accompanied by a bloating sensation, without radiation or any relationship to meals. He consulted a private practitioner. The complete blood picture showed that the hemoglobin was only 6 g/dL, and so he was referred to our unit for further management. His appetite was reduced, with subjective weight loss over the past three months. His bowel frequency had increased to two times per day, with looser stools. All along, there was no per-rectal bleeding. His past health was unremarkable except for taking herbal medicine for acne for the past seven months. On examination he was pale, with an absence of stigmata of chronic liver disease. The abdominal examination showed hepatomegaly. Laboratory data were as follows: hemoglobin, 4.3 g/dL (normal: 13.4–17.2); mean cell volume, 49.6 fL (normal: 83–98); white blood cell count, 9/mm³ (normal: 3.9–10.7); platelet count, 508/mm³ (normal: 152–358); total bilirubin, 17 µmol/L (normal: 5–20); alkaline phosphatase, 1541 IU/L (normal: 46–127); γ-glutamyl transpeptidase, 366 IU/L (normal: 12–57); alanine aminotransferase, 102 IU/L (normal: 10–57); albumin, 34 g/L (normal: 35–50); globulin, 40 g/L (no reference); iron saturation, 1% (normal: 20–55); hemoglobin A2, 4.8% (normal: 1.6–3.5). The preliminary investigations revealed that he had severe iron deficiency anemia coexisting with β thalassaemia trait and cholestatic liver derangement. Esophagogastroduodenoscopy (OGD) showed no abnormality down to the third part of the duodenum. Early colonoscopy performed one week later showed that the colonic mucosa was erythematous with loss of vascular pattern and multiple small superficial ulcerations, with the proximal parts including the ascending and transverse colon more severely affected. The mucosa of the terminal ileum, sigmoid, and rectum was endoscopically normal looking. 
The histology revealed inflammatory cell infiltration in the lamina propria of the terminal ileum and colon, the latter also showing distorted crypt architecture. Abdominal ultrasonography showed that the liver was enlarged, with a span length of 16.7 cm, and that the intrahepatic ducts and common bile duct were dilated. Further relevant blood tests showed that perinuclear antineutrophil cytoplasmic antibodies (p-ANCAs) were present, while other autoimmune antibodies (including antinuclear, antimitochondrial, and anti-smooth muscle antibody), HBsAg, anti-HCV, and HIV antibody were absent. During the next 3 days, his hemoglobin was topped up to 8.7 g/dL with two units of packed cells. Endoscopic retrograde cholangiopancreatography (ERCP) was then performed and found multiple irregularities over the bilateral intrahepatic bile ducts; the common bile duct was not dilated but contained two small stones distally ( Figure 1 ). These stones were extracted after papillotomy. A liver biopsy was also performed and revealed that the portal tracts had a mixed inflammatory infiltrate, with some interlobular bile ducts showing concentric, laminated (onion-skin) fibrosis around them, and focal bile ductular proliferation. These findings were consistent with primary sclerosing cholangitis, Stage III (Ludwig) (Figures 2 , 3 and 4 ). Therefore, this gentleman was diagnosed with primary sclerosing cholangitis coexisting with ulcerative colitis. He was put on medications including ursodeoxycholic acid 500 mg bd, enteric-coated mesalazine 2000 mg bd, and iron supplementation. He was regularly followed up for the past 4 months and his condition was stable: his hemoglobin remained static at around 9 g/dL, and the alkaline phosphatase improved to 204 IU/L.
CC BY
no
2022-01-13 01:48:12
Case Rep Med. 2010 Dec 8; 2010:536207
oa_package/92/cd/PMC3014831.tar.gz
PMC3014832
21209819
1. Introduction Angioedema of the face and oral pharynx is a well-recognized complication of ACE inhibitor therapy. These medications can also cause angioedema of the bowel and may present a diagnostic dilemma to the emergency physician. Patients typically present with a complaint of abdominal pain with or without vomiting and diarrhea. The workup is usually nondiagnostic, showing leukocytosis and nonspecific bowel wall thickening on CT scan. Therapy consists of withdrawal of the medication. This is a diagnosis of exclusion, and physicians must have a high index of suspicion. Making the diagnosis can spare patients exposure to costly and invasive procedures.
3. Discussion After a review of the English literature, we were able to find 21 documented cases of ACEI-induced angioedema of the bowel [ 1 – 10 ]. Patients typically present with unexplained abdominal pain despite extensive evaluation [ 1 , 2 ]. The patient in this case initially presented with abdominal pain due to angioedema of the bowel. While in the emergency department, she progressed to angioedema of the face and oral pharynx, which is a rapid and atypical onset of angioedema from ACEI. Pancreatitis, obstruction, mesenteric ischemia, infection, cholecystitis, other abdominal emergencies, and C1 esterase inhibitor deficiency all need to be considered in the differential. Although the CT of the abdomen and pelvis was done without IV and oral contrast, which are recommended to fully appreciate pathology of the pancreas, it did not reveal any signs of pancreatitis. The patient's labs and CT scan were not consistent with a diagnosis of pancreatitis, obstruction, or acute infection. C1 esterase inhibitor deficiency was ruled out during her admission. As her facial angioedema resolved, so did her abdominal pain. The abdominal pain did not return after discontinuing the ACE inhibitor, leading to this diagnosis of exclusion. At 6-month follow-up, the patient remained free of abdominal pain. Approximately 30% of all ED visits for angioedema are from ACEI, while the annual rate of ED visits for ACEI-induced angioedema is 0.7 per 10,000 [ 11 , 12 ]. Angioedema is asymmetrical nonpitting edema of the skin or mucous membranes and a well-documented side effect of ACEI [ 13 ]. ACEI-induced angioedema typically affects the face, eyelids, lips, tongue, neck, and pharynx, while urticaria or pruritus is seen only rarely [ 11 , 13 – 17 ]. These adverse effects commonly present within the first 4 weeks after initiation of therapy and have not been shown to be dose related or caused by one particular ACEI [ 13 – 16 ]. 
No definitive predisposing factors have been identified, although the current literature suggests that patients with a history of either hereditary or idiopathic angioedema are at an increased risk for ACEI-induced angioedema [ 13 , 15 , 16 ]. Japanese patients have a lower incidence of angioedema from ACEI, while several case reports have shown that patients of African origin have a significantly increased relative risk [ 13 , 14 , 18 , 19 ]. The mechanism by which ACEI cause angioedema is not fully understood, but it is theorized to be a biochemical rather than an immunological reaction [ 16 ]. ACE converts angiotensin I to angiotensin II while also inactivating bradykinin. Increased levels of bradykinin, along with other mediators, are responsible for the angioedema reaction [ 13 , 15 , 16 , 20 ]. Bradykinin causes vasodilatation and increased vascular permeability, thereby leading to angioedema. Some studies suggest that patients with a deficiency of aminopeptidase-P, another enzyme that catabolizes bradykinin, are at an increased risk of developing angioedema from ACEI [ 13 , 17 , 20 ]. ACE inhibitor-induced angioedema of the intestines is a diagnosis that should be considered in any patient presenting with unexplained abdominal pain while on an ACE inhibitor. The incidence of ACE inhibitor-induced angioedema is low (0.1%–0.2%), with a small fraction of those cases representing angioedema of the bowel [ 1 – 4 ]. However, the exact incidence is unknown and it is likely underdiagnosed [ 1 , 4 ]. ACE inhibitor angioedema of the intestine is more common in females, with an average age of 48 years, suggesting a possible sex-linked or hormonal etiology [ 4 , 9 ]. Common symptoms include abdominal pain, vomiting, diarrhea, and ascites [ 3 – 5 ]. 
Symptoms typically present within 24–48 hours of initiation of an ACE inhibitor, but there are case reports of facial angioedema 7 years after initiation of therapy [ 6 ] and bowel angioedema 9 months after initiation of therapy [ 3 ]. The management of ACEI-induced angioedema should be prompt and aggressive, with careful attention to airway management. Treatment can range from simple discontinuation of ACEI therapy to intubation and vasopressors, depending on the severity of the reaction. Angioedema of the intestine is reversible with cessation of the medication. Many of the patients discussed in the literature underwent invasive procedures, including endoscopy, intestinal biopsy, exploratory laparotomy, and bowel resection, before a diagnosis of ACE inhibitor-induced angioedema was made [ 1 , 3 , 4 , 9 ]. Swift recognition is necessary to prevent unwarranted procedures, surgical intervention, or even death. Symptoms typically resolve within 24–48 hours after discontinuing the ACEI and continue to improve over the next 1 to 2 months [ 2 , 10 ].
4. Conclusion ACE inhibitor-induced angioedema is a rare occurrence, and intestinal involvement is even less common. Unfortunately, there is no specific test that can be used to diagnose the condition. Failure to make the correct diagnosis can place patients at increased risk of adverse outcomes due to invasive testing and procedures. While the risk of occurrence does appear to be greater early in therapy, angioedema can present at any time; thus, ACEI therapy should be considered in the differential diagnosis of any patient presenting with abdominal pain.
Academic Editor: Bettina Wedi Angiotensin-converting enzyme inhibitor (ACEI)-induced angioedema of the intestine is a rare and often unrecognized complication of ACEI therapy. We present a case of a 45-year-old Hispanic female with angioedema of the small bowel progressing to facial and oropharyngeal angioedema. Patients are typically middle-aged females on ACEI therapy who present to the emergency department with abdominal pain, nausea, vomiting, and diarrhea. This is a diagnosis of exclusion, and physicians must have a high index of suspicion to make it. Symptoms typically resolve within 24–48 hours after ACE inhibitor withdrawal. Recognizing these signs and symptoms, and discontinuing the medication, can save a patient from unnecessary, costly, and invasive procedures.
2. Case Report A 45-year-old Hispanic female presented to the emergency department with a chief complaint of severe abdominal pain for several days that had worsened over the last 24 hours. Approximately one week earlier, the patient had been evaluated at another facility for abdominal pain. Lab tests showed an elevated lipase, and the patient was diagnosed with pancreatitis. She was discharged home and instructed to follow a clear liquid diet and advance as tolerated. The patient's past medical history was significant for hypertension, type 2 diabetes, chronic renal failure requiring dialysis, and the recent diagnosis of pancreatitis. Her medications included enalapril/hydrochlorothiazide, hydralazine, clonidine, metoprolol, metoclopramide, promethazine, mirtazapine, pantoprazole, insulin NPH/regular 70/30, alprazolam, and zolpidem. On initial evaluation, the patient complained of several days of sharp, crampy abdominal pain worsening over the 24 hours prior to arrival. She complained of nausea, vomiting, and watery diarrhea for the past day. She denied fever, chills, dizziness, weakness, headache, chest pain, or shortness of breath. She denied blood in her emesis, stool, or urine. The remainder of the review of systems was negative. On initial physical exam, vital signs were significant for hypertension (175/86) and mild tachycardia (107). The patient appeared to be in acute distress secondary to pain. She was alert and oriented; the oropharynx was clear with no edema, erythema, or exudates; and the neck was supple with full range of motion. Her lungs were clear to auscultation bilaterally. Cardiac exam was significant for tachycardia with a regular rhythm and no murmurs, rubs, or gallops. The patient's abdomen was diffusely tender, worse in the mid-epigastric and periumbilical regions. She had no peritoneal signs, and rectal exam was hemoccult negative. The remainder of her physical exam was unremarkable. 
Initial labs were significant for an elevated white blood cell count of 19.8 K/UL with a left shift; the rest of the hemogram was normal. Her metabolic panel revealed a BUN of 29 mg/dL and a creatinine of 5.8 mg/dL (consistent with her baseline). Serum glucose was 182 mg/dL, amylase was slightly elevated at 203 U/L (normal 36–128 U/L), lipase was 43 U/L (normal 10–51 U/L), and liver function tests were normal. CT scan of the abdomen and pelvis was limited by the lack of IV contrast and showed no significant findings other than mild edema of the small bowel, with no pancreatic swelling, fat stranding, fluid collection, free fluid in the abdomen, lymphadenopathy, or masses. The patient was given multiple doses of hydromorphone during her stay in the emergency department, with only minimal improvement of her pain. The patient was reevaluated on several occasions with no change in her physical exam. Shortly after returning from CT, the patient began complaining of difficulty swallowing and mild shortness of breath. Upon reevaluation at that time, the patient was found to have diffuse swelling of her face, neck, lips, oropharynx, and tongue. The patient required emergent fiberoptic intubation and was admitted to the intensive care unit. During her hospitalization, C1 esterase inhibitor and complement levels were all within normal limits. The patient was diagnosed with ACEI-induced angioedema of the oropharynx and small intestine. The ACEI was discontinued at the time of admission. The swelling improved, and she was extubated after 48 hours. The patient's abdominal pain resolved, and she was discharged home with instructions to avoid ACEI in the future. At followup visits over the next six months, the patient's abdominal pain had not returned (see Figures 1 and 2 ).
Case Rep Med. 2010 Dec 1; 2010:690695
1. Introduction We present a case of acute pancreatitis caused by a duodenal diverticular abscess occluding the ampulla of Vater. This is the first documented case in the literature of acute pancreatitis caused by a perivaterian duodenal diverticular abscess.
3. Discussion This is the first documented report of a perivaterian diverticular abscess causing acute pancreatitis due to compression of the ampulla of Vater. Duodenal diverticula occur with a frequency of between 5% and 25% [ 1 ]. The large variance in these figures reflects the fact that diverticula tend to be asymptomatic and are mostly diagnosed when they cause complications or at autopsy. They are mostly located along the posterior border of the second part of the duodenum. Those located around the ampulla of Vater, as in this patient, are known as perivaterian diverticula [ 2 ]. Recognised complications of these diverticula include mechanical obstruction and perforation leading to peritonitis, requiring urgent surgical intervention. Ulceration giving rise to upper gastrointestinal bleeding has also been reported and can be fatal if the erosion involves the aorta or a mesenteric vessel [ 3 , 4 ]. Radiological diagnosis of these abscesses can be difficult. Since the commonest site of formation is the second part of the duodenum, a large fluid-filled cystic lesion could easily be diagnosed as a neoplasm arising from the head of the pancreas, which is the common site of pancreatic neoplasm formation. Computed tomography and magnetic resonance imaging scans are useful to distinguish between these two widely differing diagnoses by demonstrating characteristic air-fluid levels within these lesions [ 5 , 6 ]. Upper gastrointestinal endoscopy has been shown in various studies to be a useful diagnostic tool; however, if the diverticula are located in the third or fourth part of the duodenum, its sensitivity decreases [ 7 ]. It has been shown that there is an association between periampullary diverticula, which can lead to abscess formation, and biliary duct stones. However, a large study showed that there is no association between periampullary diverticula and pancreatitis [ 8 ]. 
Thus, in a patient suffering from pancreatitis with dilated bile ducts but no gallstones, the diagnosis of a perivaterian abscess should be considered. Our patient was successfully treated with endoscopic drainage of the abscess and made a full, uncomplicated recovery. She was seen four months postoperatively in clinic with no reported problems and is now being routinely followed up at our local tertiary hepatobiliary centre. Successful endoscopic drainage has been reported in several other cases [ 8 , 9 ] and is an alternative to more invasive procedures, with the advantage of faster recovery times for patients.
4. Conclusion This is the first documented case of a perivaterian duodenal abscess causing compression of the common bile and pancreatic duct leading to pancreatitis. Duodenal diverticula are more common than previously thought, but case reports are scarce as the majority are silent and cause no significant clinical manifestations. Care must be taken to diagnose the condition correctly by using appropriate imaging modalities. Successful endoscopic treatment is possible and should be attempted in appropriate patients.
Academic Editor: Indraneel Bhattacharyya A 46-year-old previously fit woman was admitted with acute pancreatitis. She had no history of gallstones, was not on any medications, and consumed minimal amounts of alcohol. On subsequent investigation of the causative factor, ultrasound showed an air-fluid-filled cystic structure posterior to the head of the pancreas which was compressing the common bile duct. Further magnetic resonance imaging and computed tomography scans showed that this cystic lesion was located around the ampulla of Vater. A diagnosis of a perivaterian abscess was made. At endoscopy, a large contained abscess was seen and successfully drained. She made a full and uneventful recovery.
2. Case Presentation A thirty-eight-year-old female presented to our emergency department with a one-day history of acute-onset epigastric pain. The pain was constant and sharp in nature and associated with several episodes of vomiting. There was no history of fevers, rigors, dysuria, or change in bowel habit. She gave a two-week history of preceding mild abdominal pain, particularly after eating. She was previously fit and well with no history of gallstones and no significant past medical history, had no known drug allergies, and took no regular medications. She was a nonsmoker and consumed around four units of alcohol per week. On inspection, she looked unwell. She was apyrexial and tachycardic. On palpation of her abdomen, she had marked epigastric tenderness and guarding. Her blood tests revealed a raised white cell count (16.5), bilirubin (26), ALT (165), AST (222), LDH (494), and an amylase level of 2245. She was diagnosed with acute pancreatitis, scoring 2 on the Glasgow severity score. An abdominal ultrasound demonstrated a dilated common bile duct (11 mm) and a distended gallbladder with no gallstones or other pathology ( Figure 1 ). An MRCP performed the following day confirmed a 12.4 mm dilated bile duct with dilation of the intrahepatic biliary tract. In addition, it revealed a “curious” cystic lesion, with an air-fluid level, lying posterior to the head of the pancreas at the level of the distal common bile and pancreatic ducts, which appeared to be causing some extrinsic compression ( Figure 2 ). She began to improve clinically and haematologically (reduced leukocytosis) with oral antibiotics. However, her bilirubin continued to rise, peaking at 58. She underwent a CT scan of her abdomen on the fourth day of admission. This showed a cystic or necrotic mass behind the head of the pancreas, representing an inflammatory lesion, phlegmon, or duodenal diverticular abscess ( Figure 3 ). 
An ERCP on her sixth day post-admission revealed a 10 cm duodenal diverticular abscess draining pus and bile at the level of the major papilla ( Figure 4 ). It appeared that the cause of her pancreatitis was extrinsic compression of the bile duct by a perivaterian duodenal diverticular abscess.
Conflict of Interests None of the authors involved in this submission declares any conflict of interests.
Case Rep Med. 2010 Dec 16; 2010:527141
1. Introduction Mastocytosis is a rare hematopoietic disorder characterized by abnormal proliferation and accumulation of mast cells in one or more organs [ 1 ]. Mast cell disorders were recently included under the category of myeloproliferative neoplasms by the 2008 World Health Organization classification of myeloid neoplasms [ 2 ]. Mastocytosis limited to the skin is called cutaneous mastocytosis; when extracutaneous organs such as the bone marrow, liver, spleen, or gastrointestinal tract are involved, it is called systemic mastocytosis (SM) [ 3 ]. Cutaneous mastocytosis is largely a disease of infancy and childhood, while SM is usually seen in adults. Systemic mastocytosis, which accounts for about 10% of all cases of mastocytosis, is a persistent disease that can follow a benign or indolent course or may be associated with hematological disorders. Major clinical manifestations of SM are episodic flushing, dyspepsia, diarrhea, abdominal pain, tachycardia, and pruritus [ 4 ]. These are related directly to tissue infiltration or to the release of mast cell mediators such as leukotrienes and histamine. Here, we report an SM patient presenting with flushing, hypotension, fever, and syncope attacks. Her symptoms were successfully controlled, and mast cell infiltration in the bone marrow decreased significantly after short-term corticosteroid and cyclosporine treatment.
3. Discussion Diagnosis of SM is confirmed with evidence of involvement of a tissue other than skin, most commonly the bone marrow, spleen, liver, lymph nodes, and gastrointestinal tract. Main symptoms related to mast cell degranulation are episodic flushing caused by vasodilatation, dyspepsia, nausea, vomiting, diarrhea, and abdominal pain. There may be associated hypotension and syncope due to cerebral hypoperfusion. Such an acute attack typically lasts from 15 to 30 minutes and may be precipitated by a variety of triggers such as physical exertion, emotional upset, heat, cold, ethanol, intravenous contrast exposure, and certain medications (nonsteroidal anti-inflammatory drugs, opioids, and general anesthetics). In our patient, attacks of flushing, hypotension, and syncope with nonspecific symptoms such as fatigue, diarrhea, nausea, vomiting, and dyspepsia were all suggestive of SM. Although fever is not a known classical symptom of SM, a limited number of cases presenting with fever have been reported in the literature [ 5 , 6 ]. Our patient also had a body temperature reaching 39°C in her last two severe attacks. Skin lesions may or may not accompany SM. Although urticaria pigmentosa, the characteristic lesion of mastocytosis, was previously described in 80% of SM patients [ 7 ], Lim et al. recently showed that 41% of patients with SM had urticaria pigmentosa and 53% had cutaneous symptoms including pruritus, flushing, urticaria, or angioedema [ 8 ]. It is considered that the more aggressive forms of SM, which have an unfavorable prognosis, are more likely to present without cutaneous lesions [ 9 ]. There was no skin lesion except intermittent and temporary flushing and urticarial rash in our patient, and Darier's sign was negative. The diffuse erythema during attacks was presumed to result from histamine release rather than skin infiltration by mast cells. 
The major criterion for the diagnosis of SM is the finding of multifocal dense infiltrates of mast cells in the bone marrow or other extracutaneous tissues. There are also 4 minor criteria: (a) atypical mast cell morphology, (b) aberrant mast cell surface immunophenotype, (c) serum total tryptase >20 ng/mL, and (d) c-kit mutation at codon 816 in extracutaneous organs [ 10 ]. If at least one major and one minor criterion or at least three minor criteria are fulfilled, the final diagnosis of SM can be confirmed. Tryptase, which is stored almost exclusively within the secretory granules of mast cells, is the most widely used marker of mastocytosis. In healthy individuals, serum tryptase levels range between <1 and 15 ng/mL; however, mast cell activation causes increased tryptase levels [ 11 ]. Additionally, tryptase levels in SM are assumed to correlate closely with the cumulative mast cell burden and multiorgan involvement [ 5 ]. Mast cell infiltration of the bone marrow was shown in our patient. Additionally, high serum tryptase levels (>200 ng/mL) and positive c-kit staining in the bone marrow biopsy confirmed our diagnosis. However, we could not perform analysis for the D816V c-kit mutation for technical reasons. The World Health Organization classifies mastocytosis into 7 types: cutaneous mastocytosis, indolent SM, SM with an associated clonal hematological nonmast cell lineage disease, aggressive SM, mast cell leukemia, mast cell sarcoma, and extracutaneous mastocytoma [ 12 ]. Cutaneous mastocytosis is an indolent disease which can only be diagnosed when SM is excluded by appropriate investigations. The most common variant of SM is indolent SM, which is differentiated from more advanced categories of SM by the lack of end organ dysfunction and a relatively low infiltration grade. Aggressive SM is characterized by evidence of end organ dysfunction such as significant cytopenia, ascites, malabsorption, splenomegaly, or pathologic fractures due to osteolysis. 
If mast cells comprise >20% of all nucleated cells in the bone marrow aspirate and are increased in circulation with ≥10% mast cells in peripheral blood, the disease is called mast cell leukemia. SM in our patient was subtyped as mast cell leukemia, as 80% of the bone marrow aspirate was composed of mast cells. Modalities used in the treatment of SM are directed at two targets: symptomatic control and a decrease in mast cell burden. Commonly used medications for symptomatic relief are H1 and H2 antihistamines, oral disodium cromoglycate, and epinephrine for hypotensive episodes. However, there is currently no cure for the more serious types of SM. Interferon alpha is the drug for which the most experience has been reported, but results with it are conflicting [ 13 – 15 ]. The combination of interferon alpha with a corticosteroid has also been shown to have a beneficial effect in controlling symptoms of SM [ 16 ]. Cladribine and imatinib mesylate are two other cytoreductive agents recently used in SM patients, with promising results [ 17 , 18 ]. It was found that patients with a gene translocation resulting in fusion of the FIP1L1 gene and the platelet-derived growth factor (PDGF) receptor alpha gene respond well to imatinib mesylate [ 19 ]. However, the major problem with that drug is that patients who are positive for the D816V c-kit mutation are usually resistant to its effects [ 20 ]. In the literature, we found one patient with aggressive SM showing a good response to cyclosporine and corticosteroid treatment [ 21 ]. We also treated our patient with a corticosteroid and cyclosporine in the early period. Bone marrow biopsy after three weeks of this therapy showed a significant reduction in mast cell burden. Moreover, the patient was nearly asymptomatic after a few days of therapy. In followup, based on negative FIP1L1-PDGFR mutation analysis, she was started on interferon alpha treatment.
4. Conclusion Systemic mastocytosis without skin involvement may present with attacks of flushing, hypotension, syncope, and fever mimicking septic shock or cardiovascular collapse. Combination therapy with a corticosteroid and cyclosporine seems to help control symptoms and decrease mast cell burden in a short time.
Academic Editor: Hermann E. Wasmuth Mast cell disorders are defined by an abnormal accumulation of tissue mast cells in one or more organ systems. In systemic mastocytosis, at least one extracutaneous organ is involved by definition. Although systemic mastocytosis usually presents with a skin lesion called urticaria pigmentosa, in a small proportion of cases there is extracutaneous involvement without skin infiltration. Other manifestations are flushing, tachycardia, dyspepsia, diarrhea, hypotension, syncope, and rarely fever. Various medications have been used, but there is no definitive cure for systemic mastocytosis. The principles of treatment include control of symptoms with measures aimed at decreasing mast cell activation. We describe a case of systemic mastocytosis presenting with hypotension, syncope attacks, fever, and local flushing. Bone marrow biopsy demonstrated increased mast cell infiltration. She had no skin infiltration. A good clinicopathological response was obtained acutely with combination therapy of a glucocorticoid and cyclosporine.
2. Case Report A 52-year-old woman presented with fatigue, flushing, dyspepsia, hypotension, and syncope attacks of about 18 months' duration. She described headache and fatigue at the beginning of an attack, followed by flushing that occurred primarily in her face and neck and descended to the body but not the extremities. During attacks, she had hypotension with a systolic blood pressure between 50–70 mmHg and a diastolic blood pressure between 20–40 mmHg, followed by 5–10 minutes of syncope. While the flushing resolved in about an hour, she sometimes had diarrhea, dyspepsia, nausea, and vomiting before or just after the attacks. After she experienced fever reaching 39°C in the last attack, she was referred to our hospital. On the 11th day of hospitalization, she developed nausea, vomiting, and a fever of 40°C with severe hypotension (systolic and diastolic blood pressures of 50 mmHg and 25 mmHg, resp.). She had conjunctival hyperemia and flushing with sharp margins in her face, neck, and upper chest ( Figure 1 ). She was transferred to the critical care unit for close monitoring and management of hemodynamic instability. With intravenous fluid replacement therapy, her blood pressure increased to 120/70 mmHg and hemodynamic stability was achieved within an hour. There was no pneumonic infiltration, and blood and urine cultures were negative for any microorganism. Thoracoabdominal computed tomography was normal. Transesophageal echocardiography revealed no vegetation. Carcinoid syndrome and pheochromocytoma were excluded based on normal urine catecholamines and 5-hydroxyindoleacetic acid checked before and during the first 4 hours of the attack. Bone marrow aspiration and biopsy were performed because of anemia (hemoglobin = 6.9 g/dl) and thrombocytopenia (110.000/ μ L) ( Table 1 ). Bone marrow aspiration showed that 80% of nucleated cells were mast cells ( Figure 2 ), and biopsy revealed a hypercellular bone marrow infiltrated with mast cells. 
In addition, c-kit staining was immunohistochemically positive in the bone marrow biopsy. Serum tryptase and histamine levels at the time of attack were >200 ng/mL (normal <13.5 ng/mL) and >100 nmol/L (normal <10 nmol/L), respectively. Her serum hemoglobin and platelet concentrations returned to acceptable levels without any transfusion in a few days (hemoglobin = 8.9 g/dl and platelets = 176.000/ μ L). She was started on prednisolone (1 mg/kg/day) and cyclosporine (300 mg/d) with an H1 receptor blocking agent (desloratadine 5 mg/day). After esophagogastroduodenoscopy, an H2 receptor blocking agent (famotidine 80 mg/day) was also initiated, since she had a duodenal ulcer. After three weeks of corticosteroid and cyclosporine treatment, repeat bone marrow biopsy showed normal morphology, and in the aspirate mast cells constituted 15% of nucleated cells. The serum tryptase level was still >200 ng/mL; however, the histamine level decreased to 29.7 nmol/L. About 10 weeks after she was discharged from the hospital, Fip1-like1 (FIP1L1) gene mutation analysis was found to be negative, and she was started on interferon alpha treatment. She was free of any attacks or other symptoms for about 5 months, and at the last visit her serum hemoglobin and platelet levels were 10.9 g/dl and 238.000/ μ L, respectively. Conflict of Interests The authors declare that there is no conflict of interests that could be perceived as prejudicing the impartiality of this paper.
Case Rep Med. 2010 Dec 20; 2010:782595
1. Introduction Acute coronary syndrome is a common cause of presentation to hospital emergency departments. In patients who present with chest pain, ST segment elevation is likely to be cardiac in origin and prompt recognition and treatment improves outcomes. However, unnecessary treatment with thrombolytic therapy or anticoagulation can be harmful, and in patients who are at low risk of cardiac disease other causes must be ruled out. Hypercalcaemia may be caused by a variety of illnesses and can present acutely with a range of symptoms. Hypercalcaemia is usually reversible with intravenous fluid and bisphosphonate whilst the cause is simultaneously investigated.
3. Discussion Severe hypercalcaemia provoking ECG changes mimicking acute myocardial infarction is infrequently reported. It is important for physicians to recognise severe hypercalcaemia as a differential diagnosis for ST segment elevation on the ECG. Wesson et al. described this association in a patient with a past medical history of ischaemic heart disease, coronary angioplasty, hypertension, and left ventricular failure [ 1 ]. Subsequent comments about this case suggested that the ST changes might have been due to a left ventricular aneurysm [ 2 , 3 ]. Shawn et al. described a case of a patient with ST segment elevation induced by hypercalcaemia [ 4 ]; resolution of the ST segment elevation occurred on correction of the hypercalcaemia. A transthoracic echocardiogram demonstrated underlying moderate left ventricular hypertrophy, and the patient had an ejection fraction of 50%. Our patient had no history of cardiac disease or hypertension. He was a smoker, but there were no other risk factors for cardiac disease. His acute ST changes on ECG resolved on reversal of his hypercalcaemia. Together with previous reports [ 1 , 5 – 8 ], our case demonstrates a clear link between hypercalcaemia and ECG changes mimicking acute myocardial infarction. It also underlines the importance of awareness of the overuse of over-the-counter supplements.
Academic Editor: Dierk Thomas Acute coronary syndrome is a common cause of presentation to hospital. ST segment elevation on an electrocardiogram (ECG) is likely to be cardiac in origin, but in low-risk patients other causes must be ruled out. We describe a case of a man with hypercalcaemia, no evidence of cardiac disease, and ECG changes mimicking acute myocardial infarction. These ECG changes resolved after treatment of the hypercalcaemia.
2. Case Report A 39-year-old man presented to the acute medical team with generalised weakness, vomiting, constipation, and abdominal pain. He had no chest pain or shortness of breath. The patient had no prior cardiac history. He had a 40-pack-year smoking history, but he was not known to be hypertensive and there were no other risk factors for coronary artery disease. Blood pressure on admission was 105/55. On clinical examination the patient appeared dehydrated. He had a Glasgow Coma Score (GCS) of 13 and was unable to give a clear history at the time of presentation. There was mild epigastric tenderness but no rigidity or guarding. Heart sounds were normal, and there was no evidence of cardiac failure. There were no other significant findings. Blood testing revealed acute renal failure; urea was 21.7 mmol/L and creatinine was 338 μ mol/L. His plasma adjusted calcium was 5.75 mmol/L, his albumin was 38 g/L, and parathyroid hormone was suppressed at 9 ng/L (normal range: 15–65 ng/L). Chest radiography revealed no features of malignancy or left ventricular failure, and a myeloma screen was negative. Thyroid function tests were also normal. The ECG at presentation ( Figure 1 ) revealed abnormal ST morphology in leads II, aVF, and V2-V3. These changes were minimal, and thrombolysis was not indicated. The patient underwent initial resuscitation with intravenous fluids, and intravenous pamidronate was subsequently administered to correct the hypercalcaemia. His condition improved rapidly, and he was then able to provide a detailed history. This revealed that he had been taking an over-the-counter calcium carbonate supplement, Tums. He had been ingesting extremely large quantities, up to 112 g of calcium carbonate daily, for six months. This medication had initially been taken for indigestion. The patient was unaware of the detrimental effects these supplements could have on his health. 
Repeating the review of systems did not elicit any other significant symptoms, and there were no features suggestive of malignancy. Blood pressure recorded on the ward at discharge was 128/70. An ECG following reversal of the hypercalcaemia ( Figure 2 ) showed resolution of the ST segment elevation. Conflict of Interests The authors declare no conflict of interests. Consent Patient consent was obtained.
Case Rep Med. 2010 Dec 16; 2010:563572
1. Introduction Gardner's syndrome (GS) is a variant of familial adenomatous polyposis (FAP), which affects one in 8300 individuals and one in 7500 births in the United States [ 1 ]. The disease is characterized by colonic polyps and extracolonic manifestations. The polyps typically develop in adolescence and undergo malignant change between the third and fifth decades of life. The extracolonic manifestations include osteomas, desmoid tumors, epidermoid cysts, and malignancies [ 2 ]. It is believed that GS and familial adenomatous polyposis are variants of the same disorder, since they share the same genetic alterations [ 3 ]. The fact that GS is associated with extracolonic manifestations may be explained by a variable penetrance of a common mutation. The disorder is linked to band 5q21-q22, the adenomatous polyposis coli locus (APC gene) [ 4 ]. More than 1400 different mutations of this gene have been reported. These mutations have a nearly complete penetrance of the colonic phenotype, but a variable penetrance of the extracolonic manifestations of the disease [ 4 ]. Among the rare extracolonic manifestations of FAP, endocrine neoplasms of the adrenal cortex have occasionally been reported. We describe a pre-Cushing's syndrome in a 37-year-old male patient with GS who presented with an incidental adrenal mass.
3. Discussion Gardner's syndrome, a variant of familial adenomatous polyposis, represents a multisystemic disease and disorder of growth [ 5 ]. The primary risk for patients with FAP and its variants is the development of colorectal cancer; however, there is also an increased incidence of other tumors, including adrenal masses. Polyp formation starts at puberty, but the diagnosis is usually made in the third decade, while malignant transformation reaches 100% by the fourth decade of life [ 3 ]. Although not common, endocrine neoplasms such as parathyroid, pituitary, pancreatic islet cell, and adrenal neoplasms have all been described in patients with GS [ 2 ]. The first case of a FAP patient with an adrenal adenoma was published almost a century ago [ 4 ]. As a result of technological advances in imaging techniques such as CT and MRI during the last decades, new data have become available regarding the prevalence of adrenal masses in both the general population and patients with FAP. Seven percent of patients with FAP or its variants have adrenal masses, compared with only 3% of the general public [ 4 , 6 ]. Although the prevalence of adrenal masses in FAP patients is two to four times as high as in the general population, the clinical presentation and biological behavior do not seem to differ [ 4 , 7 ]. Most adrenal lesions are not functional; functional lesions typically secrete cortisol. Because most endocrine-associated tumors in patients with GS occur without symptoms, most cases are discovered incidentally or at autopsy. A review of the literature identified 17 cases of adrenal neoplasms (13 adenomas, 4 carcinomas) in patients with GS [ 8 ]. Only 4 patients, however, were symptomatic. They developed weight gain, hypertension, and headaches but did not have electrolyte abnormalities. Two of the patients had adrenal cortical carcinomas, and 2 had adrenal adenomas [ 8 ]. 
In all 4 cases, the symptoms were consistent with cortisol hypersecretion or adrenal Cushing's syndrome [ 8 ]. While the overwhelming majority of these masses are benign and nonfunctional, there are reports of more aggressive and functional tumors in patients with FAP or its variants [ 9 , 10 ]. Such cases are rare and highlight the relative risk of adrenal tumors versus other risks associated with treatment for FAP: one study of 132 FAP patients found that only one patient (0.9%) died of adrenal carcinoma, while 4.5% died from perioperative complications as a result of various abdominal operations [ 11 ]. Although the natural history is similar to that of lesions occurring sporadically, familial adenomatous polyposis-associated adrenal incidentaloma should warrant long-term followup. In this rare condition, the development of a rigorous regimen will require evidence from worldwide patient cohorts. However, a tailored schema is suggested as a consistent basis for future modification [ 7 ]. Data on genetic analysis are limited, and only three mutations have been described (codons 1061, 1542, and 1981); the latter was associated with multiple and bilateral adenomas [ 4 ].
4. Conclusion In conclusion, we presented a case of GS with an unusual clinical presentation: an incidentally discovered adrenal tumor. The hormonal findings confirmed pre-Cushing's syndrome; computed tomography showed bilateral adrenal masses whose features favored a benign lesion. Surgery was indicated for our patient, but taking into account the risk of perioperative complications, we opted for regular observation. However, our patient remains likely to develop complications of Cushing's syndrome. Adrenal tumors are more common in FAP than in the general population but require the same followup.
Academic Editor: Christian C. Apfel Gardner's syndrome (GS) is a dysplasia characterized by neoformations of the intestine, soft tissue, and osseous tissue. Endocrine neoplasms have occasionally been reported in association with GS. Adrenal masses in GS are rare, and few have displayed clinical manifestations. In the current paper, the authors report a 37-year-old male patient with GS, including familial adenomatous polyposis (FAP) and a mandible osteoma, who presented with an incidental adrenal mass. Computerized tomography adrenal scan identified bilateral masses. Functional analyses showed a hormonal secretion pattern consistent with pre-Cushing's syndrome. Other extraintestinal manifestations were hypertrophy of the pigmented layer of the retina and a histiocytofibroma in the right leg. This paper describes a rare association of an adrenocortical secreting mass in a male patient with Gardner's syndrome.
2. Case Report A 37-year-old male with a 6-year history of FAP syndrome was referred to our endocrinology outpatient clinic for evaluation of an adrenal mass. In 2003, the patient was found to have multiple adenomatous polyps in the colon as well as the rectum. A total proctocolectomy with ileal-anal pouch anastomosis was performed. The pathology of the specimen showed more than 100 colorectal adenomatous polyps, several of them showing carcinoma in situ, and confirmed the diagnosis of FAP. The patient's mother died of colon cancer at the age of 39 years, as did his father and his mother's paternal uncle. The patient had one sister and one brother who died at ages 19 and 27 years, respectively. He also has one sister and one daughter who carry the gene mutation but are asymptomatic ( Figure 1 ). Mutation screening showed a heterozygous mutation of the adenomatous polyposis coli (APC) gene in exon 15, at codons 2016-2017 (Del AT, p.Ser672fsX5), at the extreme 5′ end of the gene. After being operated on, he began regular followup endoscopic examinations; intestinal and gastroduodenal polyps were destroyed, albeit incompletely, as they arose. The pathological evaluation revealed a tubulovillous adenoma with high-grade dysplasia. A routine abdominal computed tomography (CT) scan incidentally identified a 2.5 × 4.0 cm right adrenal mass ( Figure 2 ). The patient was referred to our endocrinology outpatient clinic for exploration. Physical examination was unremarkable except for a tumefaction of the left mandible. His blood pressure was 130/75 mm Hg. His weight was 92 kg, his height 1.77 m, and his body mass index 30.8 kg/m². The neurologic examination and the remainder of the systemic examination were normal. Signs of Cushing's syndrome were not noted. The chemistry profile was unremarkable ( Table 1 ). Endocrinological data showed alterations of the normal diurnal variation of cortisol ( Table 2 ).
Cortisolemia was not suppressed by 0.5 mg of dexamethasone given every 6 hours for 48 hours (the plasma cortisol level was 14.4 μ g/dL after suppression). Computerized tomography adrenal scan identified bilateral masses, with a spontaneous density of 6 HU. These masses had homogeneous density and measured 3.5 cm in the right adrenal and 1 cm in the left adrenal. These criteria were in favor of benign masses. Therefore, this patient was found to have bilateral adrenocortical masses. The functional analyses showed a hormonal secretion pattern consistent with pre-Cushing's syndrome. Despite the secreting character of the masses, which would indicate surgery, we opted for observation and recommended CT at 1-year intervals. At three months, the masses had not changed in size or character, and given the risk of perioperative complications, observation was again recommended. The patient continues to fare well. Our patient had other extraintestinal manifestations, including an osteoma in the mandible without impacted or unerupted teeth, objectified on the panoramic radiograph ( Figure 3 ); a typical hypertrophy of the pigmented layer of the retina; and a histiocytofibroma in the right leg, which was surgically removed.
Conflict of Interests All the authors declare that there is no conflict of interest.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 27; 2010:682081
oa_package/4d/49/PMC3014836.tar.gz
PMC3014837
21209733
1. Introduction Hyperlipidemia is an increasingly prevalent risk factor in children, concomitant with the worldwide epidemic of obesity [ 1 ]. Lipid disorders can occur either as a primary event or secondary to an underlying disease. The primary dyslipidemias are associated with overproduction and/or impaired removal of lipoproteins. The latter defect can be induced by an abnormality in either the lipoprotein itself or in the lipoprotein receptor [ 2 ]. Monogenic disorders that cause abnormal levels of plasma cholesterol and triglycerides have received much attention due to their role in metabolic dysfunction and cardiovascular disease. While these disorders often present clinically during adulthood, some present in the pediatric population and can have serious consequences if misdiagnosed or untreated [ 3 ]. Hypertriglyceridemia is defined as a plasma triglyceride level above the 95th percentile for age and sex [ 2 ]. It is a rare disorder in childhood. According to the National Cholesterol Education Program (NCEP), a normal triglyceride level is <150 mg/dL (<1.7 mmol/L) [ 4 ]. Primary hypertriglyceridemia is the result of various genetic defects leading to disordered triglyceride metabolism. Secondary causes are acquired and may include a high-fat diet, obesity, diabetes, hypothyroidism, and certain medications (e.g., estrogen and tamoxifen) [ 4 ]. Familial chylomicronemia syndrome (FCS) is a disorder of lipoprotein metabolism due to familial lipoprotein lipase (LPL) deficiency, apolipoprotein C-II (Apo C-II) deficiency, or the presence of inhibitors to lipoprotein lipase [ 5 ]. It is a very rare syndrome, with a prevalence of approximately 1 in 1 million for homozygotes; heterozygosity is relatively common, approximately 1 in 500 [ 6 ]. The disease has been described in all races. To date, several hundred patients with LPL deficiency have been described [ 7 – 9 ]. FCS is the most dramatic example of severe hypertriglyceridemia.
Almost all patients with fasting triglyceride levels in excess of 1000 mg/dL (11.36 mmol/L) have FCS [ 4 ]. It manifests as eruptive xanthomas, acute pancreatitis, hepatomegaly, splenomegaly, foam cell infiltration of bone marrow, and lipemia retinalis. These patients usually have lipemic plasma due to marked elevation of triglyceride and chylomicron levels [ 10 ]. Several mutations in the LPL gene, located on chromosome 8p22, have been identified in familial LPL deficiency [ 11 ]. More than 50 missense and nonsense mutations have been identified. The majority of mutations are located on exons 3, 5, and 6, which encode the catalytic region of the protein [ 6 ]. Apo C-II gene mutations have also been identified [ 12 ]. Other extremely rare genetic disorders can present with chylomicronemia and severe hypertriglyceridemia; examples are familial apoAV deficiency, familial lipase maturation factor 1 (LMF1) deficiency, and familial GPIHBP1 deficiency [ 13 ]. We report a family with two siblings affected by FCS. We describe the clinical features, the course of the disease and its management, and review the literature.
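The paired mg/dL and mmol/L values quoted throughout this report follow the conventional triglyceride conversion factor of about 88.57 mg/dL per mmol/L. A minimal sketch of that conversion; the function names and the rounding are our own, not the authors':

```python
# Conventional triglyceride conversion factor: mg/dL = mmol/L * ~88.57.
TG_MGDL_PER_MMOLL = 88.57

def tg_mmol_to_mgdl(mmol_l: float) -> float:
    """Convert a triglyceride concentration from mmol/L to mg/dL."""
    return mmol_l * TG_MGDL_PER_MMOLL

def tg_mgdl_to_mmol(mg_dl: float) -> float:
    """Convert a triglyceride concentration from mg/dL to mmol/L."""
    return mg_dl / TG_MGDL_PER_MMOLL

# The paired values quoted in the text agree with this factor:
#   NCEP normal cutoff: 150 mg/dL  ~ 1.7 mmol/L
#   FCS threshold:      1000 mg/dL ~ 11.36 mmol/L
print(round(tg_mmol_to_mgdl(1.7), 1))   # ~150.6
print(round(tg_mgdl_to_mmol(1000), 2))  # ~11.29
```

The small residuals (150.6 versus 150, 11.29 versus 11.36) reflect the rounded cutoffs used in clinical guidelines rather than any inconsistency in the factor.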
4. Discussion FCS usually manifests in childhood; about 25% of cases manifest during infancy [ 14 ], and it rarely manifests in the newborn period, as in case 1 (AA), who was diagnosed on day two of life. In India, several cases have been reported in very young children aged between 20 and 60 days. Some presented with features of sepsis with systemic complications and acute renal failure, with complete recovery [ 15 ]. As mentioned above, genetic diagnosis of FCS is available but only in a limited number of laboratories. For our cases, the diagnosis was clinical, and genetic testing was not available. FCS is characterized by severe hypertriglyceridemia with episodes of abdominal pain, recurrent acute pancreatitis, eruptive cutaneous xanthomata, hepatosplenomegaly, and lipemia retinalis. However, evidence suggests that presentation during infancy can be heterogeneous and may include other signs such as pallor, anemia, jaundice, irritability, and diarrhea. These manifestations are variable in the time and severity of presentation [ 3 ]. One study conducted in Quebec, Canada, demonstrated LPL deficiency in 16 infants who presented with heterogeneous features including irritability, pallor, anemia, and gastrointestinal bleeding, while others presented with splenomegaly and a positive family history. This was also demonstrated in case one, who was incidentally found to have severe hypertriglyceridemia when he was evaluated for pallor and jaundice [ 16 ]. This syndrome is autosomal recessive, and a positive family history (e.g., an affected child in a family) necessitates screening of other family members (parents and siblings). Even if the lipid profile is normal, close follow up with lipid profiles is indicated. In our study, the second case was screened at age one month and kept under close follow up; when his triglyceride level was significantly raised, a lipid-lowering agent was commenced.
The family has a 2-year-old girl who is on regular follow up for lipid profile, and her laboratory results are still within normal limits. The most dramatic manifestation of FCS is acute pancreatitis. It is responsible for up to 7% of all cases of pancreatitis. Failure to consider and investigate chylomicronemia as a cause of pancreatitis may lead to an underestimation of incidence. Hyperchylomicronemia-induced pancreatitis rarely occurs unless triglyceride levels exceed 20 mmol/L (1760 mg/dL). Acute pancreatitis, due to any cause, is an emergency and necessitates urgent intervention. However, in patients with hyperchylomicronemia, further management of hyperlipidemia to prevent future attacks is recommended [ 17 – 19 ]. Early diagnosis is important to prevent complications such as acute and chronic pancreatitis and pancreatic necrosis, although pancreatic function often deteriorates very slowly [ 3 , 20 ]. Cardiovascular risk may also be increased in these patients, though evidence has been inconclusive [ 3 ]. Unfortunately, FCS resulting from deficiency of LPL or apo C-II is very difficult to treat with existing pharmacologic agents. The most effective treatment modality is severe dietary triglyceride restriction. The recommended targets vary from less than 50 g of fat per day, or under 25% of total daily caloric intake, to less than 20 g per day, or under 15% [ 2 , 21 , 22 ]. A significantly and persistently high triglyceride level, however, necessitates pharmacological intervention. There has been a general reluctance to use drug therapy to treat lipid abnormalities in children; however, increasing evidence suggests effectiveness and short-term safety similar to those in adults [ 23 , 24 ]. Recently, the American Heart Association provided general recommendations for pharmacological management of high-risk lipid abnormalities in children and adolescents.
They defined high-risk lipid abnormalities as primary and secondary conditions associated with extreme lipid abnormalities, or conditions underlying high risk of cardiovascular disease whereby the presence and severity of lipid abnormalities may further exacerbate that risk [ 23 ]. The drugs studied and recommended for treating hypertriglyceridemia are fibric acid derivatives (e.g., Gemfibrozil, Fenofibrate). These have the effect of both raising HDL and lowering triglycerides. The main adverse effects observed were gastrointestinal upset together with an increased predisposition to cholelithiasis. Elevations of liver transaminases and creatine kinase are transient. There is a risk of myopathy and rhabdomyolysis, especially if used with other agents, particularly statins. Wheeler and colleagues performed a 6-month randomized cross-over trial of Bezafibrate in 14 children with familial hypercholesterolemia [ 23 , 25 ]. One patient had a transient elevation in liver transaminases and one patient had an elevation of alkaline phosphatase. The medication was well tolerated, with no impact on growth or development. Other drugs, such as statins, were also studied and found to be effective in treating familial hypercholesterolemia but did not have much effect in lowering triglyceride levels [ 23 ]. Niacin is not recommended because of poor tolerance, serious adverse effects, and limited available data [ 1 , 23 , 26 ]. Both siblings in our study were started on Gemfibrozil 300 mg twice a day, at birth and at age 6 months, respectively. AA is now 7 years old and has had no attacks of acute pancreatitis, although he was admitted twice with acute gastroenteritis symptoms with suspicion of pancreatitis. He has also had lipemia retinalis since birth due to very high levels of triglyceride. Retinal evaluation after starting dietary and drug therapy revealed improvement. SA is now 4 years old and has never been admitted. There were no recorded attacks of abdominal pain.
His retinal examination remained normal. Both children continue to tolerate Gemfibrozil very well. Triglyceride levels ranged between 11 and 25 mmol/L for AA and more than 16 mmol/L for SA. Both children have transiently elevated liver transaminases, while alkaline phosphatase remains within normal limits. Our study noted that the hemoglobin (Hb) was low when the triglyceride level was high. This was initially noticed when AA was first diagnosed: his Hb was very low at 6.7 g/dL (normal 13.6–19.6 g/dL), with no evidence of hemolysis. The Hb level improved as his triglyceride level improved. This anomaly was also seen in SA, whose Hb, when initially seen, was 18.7 g/dL while the lipid profile was mildly elevated. Later, when he started to have significantly raised triglyceride levels, the Hb continuously dropped. Recent tests showed Hb 10.9 g/dL (normal range 11.0–14.5 g/dL) with a triglyceride level >16 mmol/L (normal <1.70 mmol/L). We have no explanation for this observation. We believe that this study is the first to report FCS in Saudi Arabia. A study conducted in Saudi Arabia in 2001 on the prevalence of plasma lipid abnormalities in Saudi children concluded that hypertriglyceridemia was not a major problem, with only 1.96% of the children felt to be in the high-risk group [ 27 ].
5. Conclusion Familial chylomicronemia syndrome (FCS) is a disease of late childhood and adolescence; however, cases have been reported in infants and neonates. The syndrome's presentation is heterogeneous in the very young age group. Early diagnosis and medical intervention with lipid-lowering agents and dietary modification, at the time of diagnosis, can improve the prognosis and maintain a near-normal lifestyle for affected children, as the risk of pancreatitis and the frequency of hospital admissions are significantly reduced. Children tolerate these agents well and show no serious side effects. Long-term studies are still needed to ensure the safety and effectiveness of these agents in children.
Academic Editor: P. E. Schwarz There are no adequate data that evaluate the safety and effectiveness of lowering triglyceride levels in very young children. The authors report a family with two male siblings, 7 and 4 years old, affected by familial hyperchylomicronemia. The oldest was diagnosed at birth during evaluation of jaundice, and the youngest showed asymptomatic hypertriglyceridemia by 6 months of age. Due to high triglyceride levels, Gemfibrozil (a fibric acid derivative) was started at diagnosis. Close clinical followup and laboratory monitoring of these children showed no side effects from the drug, and the risk of acute pancreatitis was significantly reduced.
2. Case One AA is a 7-year-old boy delivered by spontaneous vaginal delivery in a primary health care center after an uneventful pregnancy. While being investigated for jaundice on the 2nd day of life, he was discovered to have high cholesterol >500 mg/dL (>5.68 mmol/L, normal <4.40 mmol/L), low hemoglobin (Hb) 6.7 g/dL (normal range 13.6–19.6 g/dL), and normal serum bilirubin and platelet count. The lipid profile was repeated. The laboratory work showed a very thick, hyperlipidemic blood sample ( Figure 1 ). The repeated lipid profile showed (laboratory method used: AEROSET system and ARCHITECT c8000 system) serum cholesterol 7.4 mmol/L (normal <4.40 mmol/L), high-density lipoprotein (HDL) 1.10 mmol/L (normal >1.55 mmol/L), and triglyceride (TG) 80 mmol/L (normal <1.70 mmol/L). Based on this very abnormal lipid profile for his age, the primary healthcare facility started him on lipid-lowering agents: gemfibrozil (Lopid) 300 mg twice a day and pravastatin 10 mg once a day. AA was referred to a tertiary hospital at age 60 days. Further history revealed that AA is the first child born to first-degree consanguineous parents, with a positive family history of hyperlipidemia on the maternal side in old age (father and aunt). There is no history of sudden death, premature cardiovascular disease, or recurrent pancreatitis in the family. Both parents had no history of hyperlipidemia. Examination revealed an active child with no dysmorphic features or skin lesions. Abdominal examination revealed hepatomegaly. Cardiovascular examination was normal, as was blood pressure. He was referred to the ophthalmologist for retinal examination, which showed lipemia retinalis. Laboratory investigation showed normal low-density lipoprotein (LDL), high-density lipoprotein (HDL), liver enzymes, baseline echocardiogram (ECHO), electrocardiogram (ECG), and ultrasound of the spleen. Ultrasound of the abdomen confirmed hepatomegaly. Blood investigations are summarized in Table 1 .
Based on this incidentally discovered hyperlipidemia with very high TG and low-to-normal VLDL at such a young age, laboratory investigations indicated that this child probably had FCS. He was continued on Gemfibrozil (Lopid) 300 mg twice a day. Pravastatin was stopped, as no evidence supports its use in treating hypertriglyceridemia. The parents were referred to a dietitian and a low-fat diet was recommended, including mixing food with olive oil and giving skimmed dairy products as he is growing. Followup is ongoing. AA was admitted twice, at the age of 6 months and again at the age of 2 years. Both times he had vomiting and loose stool. He was diagnosed with acute gastroenteritis and was managed accordingly. Acute pancreatitis was suspected but could not be proved. He had no further attacks of abdominal pain or admissions and was regularly followed up. Eruptive xanthomata developed on the elbows and ear lobes at the age of four years; these were transient and resolved spontaneously. Lipid profile, liver function, and complete blood count were closely monitored ( Table 1 ). 3. Case Two SA is a 4-year-old boy. He was referred at age 16 days because of the positive family history of hypertriglyceridemia. He was delivered at full term by spontaneous vaginal delivery after an uneventful pregnancy. Examination revealed an active, nondysmorphic baby with no skin manifestations. The abdomen showed no hepatosplenomegaly. Examination of the cardiovascular system was normal, including blood pressure. Eye examination was normal. Baseline ultrasound of the abdomen, ECG, and ECHO were normal. Laboratory investigations revealed TG 5.92 mmol/L, high (normal <1.70 mmol/L); cholesterol 2.32 mmol/L, normal (normal <4.40 mmol/L); HDL 0.47 mmol/L (normal >1.55 mmol/L); LDL <1 mmol/L (normal <2.26 mmol/L); and Hb 18.7 g/dL (13.6–19.6 g/dL). He was given dietary advice, but no lipid-lowering drug was started. He was observed with close follow up.
Six months later, TG was significantly raised: TG 10.56 mmol/L (929 mg/dL), cholesterol 7.09 mmol/L, HDL 0.34 mmol/L, and LDL 1.03 mmol/L. Due to the positive family history and abnormal laboratory findings, he was started on Gemfibrozil (Lopid) 300 mg twice a day. Liver enzymes were normal prior to commencing medications. SA was also referred to a dietitian for low-fat milk and dietary advice. On follow up he was noticed to have hepatomegaly and eruptive xanthomas on the elbows, which were transient and resolved spontaneously. There was no history of abdominal pain or hospital admission for suspected pancreatitis. The eye examination showed no evidence of lipemia retinalis. Lipid profile, complete blood count, and liver enzymes were continuously monitored ( Table 2 ).
Abbreviations CHOL: Cholesterol; TG: Triglyceride; HDL: High-density lipoprotein; LDL: Low-density lipoprotein; VLDL: Very-low-density lipoprotein; ALP: Alkaline phosphatase; AST: Aspartate transaminase; ALT: Alanine transaminase; GGT: Gamma-glutamyl transferase; Hb: Hemoglobin; HCT: Hematocrit; LFT: Liver function test; CBC: Complete blood count.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 27; 2010:807434
oa_package/1f/de/PMC3014837.tar.gz
PMC3014838
21209734
1. Introduction 2009 H1N1 influenza, also called swine flu, is caused by a new strain of the influenza virus, and it has spread through many countries. Vaccines are available to protect against 2009 H1N1 influenza. A vaccine, like any medicine, could cause serious problems such as a severe allergic reaction; however, the risk of any vaccine causing serious harm, or death, is extremely small [ 1 ]. Approximately 44% of people who were administered the swine flu vaccine of CSL Biotherapies reported mild side effects within 7 days of receiving the first dose. 2.5% of the vaccine recipients reported moderate local side effects, and there were no severe adverse events reported following immunization [ 2 ]. Lymphadenitis is the inflammation and/or enlargement of a lymph node. The most common symptom of lymphadenitis is swelling of one or more lymph nodes, which may feel slightly hardened and may be painful when touched.
3. Discussion 86% of the volunteers who received Novartis's H1N1 (7.5 μ g, MF59-adjuvanted) vaccine reported adverse reactions after one or both doses; the most common local side effect experienced was pain at the injection site. The reactions were generally mild or moderate and resolved themselves after 72 hours [ 4 ]. The majority of reported local adverse events, or side effects occurring at the location where either vaccine had been administered, included tenderness, pain, redness, hardening of skin, swelling, and bruising [ 2 , 4 ]. The boy presented in this report started complaining about pain and swelling on the night of the same day he had received the H1N1 vaccination. Systemic effects were also reported by CSL Biotherapies and Novartis vaccine recipients. Approximately 36% of volunteers who received the swine flu vaccine manufactured by CSL experienced mild systemic side effects. 8% of the vaccine recipients reported moderate systemic side effects, and less than 1% experienced a severe adverse reaction to immunization. Severe side effects reported include malaise, muscle pain, and nausea. Muscle aches were the most common systemic side effect reported by participants receiving the H1N1 vaccine produced by Novartis, and no severe systemic side effects were reported. The following are common whole-body side effects occurring in response to either H1N1 vaccination: headache, malaise, muscle pain, chills, nausea, fever, and vomiting. In addition, researchers evaluated the occurrence of select neurological adverse events [ 2 , 4 ]. When a lymph node rapidly increases in size, its capsule stretches and causes pain. Pain is usually the result of an inflammatory process [ 3 ]. Most inflammatory diseases involve lymph nodes diffusely and homogeneously, generally preserving their oval shape.
If the threshold value of the long-to-short-axis ratio (L/S ratio) employed is low, then the accuracy of US in differentiating normal/reactive nodes (oval shape) from pathologic nodes (rounded shape) is also relatively low (sensitivity 71%, specificity 65%) [ 5 ]. If the ratio used is 2.0, sensitivity increases to 81–95% and specificity to 67–96%. The second parameter to be assessed is the hyperechoic central line of the lymph node (the hilum). The sonographic detection of a hyperechoic hilum has always been related to the probable benign nature of the lymph node [ 5 ]. Postvaccinal lymphadenitis is a reactive response of the lymph node to vaccination. Some vaccines, such as BCG, varicella zoster, and pneumococcal vaccines, can cause reactive lymphadenitis, and it is a rather common complication of BCG vaccination [ 6 , 7 ]. Local reactions (generally erythema and induration with or without tenderness) are common after the administration of vaccines containing diphtheria, tetanus, or pertussis antigens. Occasionally, a nodule may be palpable at the injection site of adsorbed products for several weeks [ 8 ]. We followed this patient for nearly twenty days, but we did not encounter any systemic reaction. This was not a suppurative lymphadenitis, which is characterized by the appearance of fluctuation with erythema and edema of the overlying skin. The lymphadenitis decreased in size and disappeared completely after 15 days. We are of the opinion that further study is required for this type of side effect and that a detailed histopathological examination could be of use. The parents turned down our request to take a biopsy for histopathological examination. However, our results have been conclusive enough that this case was a reactive lymphadenitis.
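The L/S-ratio criterion discussed above reduces to simple arithmetic on the node's two axis measurements. A minimal sketch of that rule; the function names, labels, and threshold handling are our own illustration, not diagnostic software, and shape is only one of several sonographic criteria (the echogenic hilum matters as well):

```python
def ls_ratio(long_mm: float, short_mm: float) -> float:
    """Long-to-short-axis ratio of a lymph node (dimensions in mm)."""
    if short_mm <= 0:
        raise ValueError("short axis must be positive")
    return long_mm / short_mm

def shape_class(long_mm: float, short_mm: float, threshold: float = 2.0) -> str:
    """Classify nodal shape by the L/S ratio.

    Per the rule of thumb in the text, a ratio at or above the
    threshold indicates an oval node (more likely normal/reactive),
    while a lower ratio indicates a rounder, more suspicious shape.
    """
    return "oval" if ls_ratio(long_mm, short_mm) >= threshold else "rounded"

# Axis measurements of the three nodules reported in this case (mm):
for long_mm, short_mm in [(15, 8), (9, 6), (7, 6)]:
    print(f"{long_mm} x {short_mm} mm -> L/S = {ls_ratio(long_mm, short_mm):.2f}")
```

Note that the nodules in this case sit below the 2.0 threshold on shape alone, which is exactly why the authors lean on the echogenic hilum and the clinical course to call them reactive.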
In conclusion, reactive lymphadenitis is an unusual side effect of swine flu vaccination requiring further study, and to the best of our knowledge no cases in children have previously been reported in the literature by either CDC or WHO sources, apart from one case reported in an adult [ 9 ].
Academic Editor: Robert S. Dawe We present a 5-year-old boy who had complaints of swelling and pain in the right vaccine-shot area and the right axillary area. The right axillary finding was diagnosed as reactive lymphadenitis, which we believe is a rare local side effect of the swine flu vaccine. The key message to take away from this case is that lymphadenitis should be reported as a possible local side effect of the swine flu vaccine.
2. The Case Study On December 9, 2009, a previously healthy 5-year-old boy with no history of illness was brought into the paediatrics clinic with complaints of pain in the upper part of the right arm (the vaccine shot area), accompanied by swelling, bruising, and pain in the right axilla, without any sign of fever. His past medical history, family history, and social history were unremarkable. He had the swine flu vaccine administered intramuscularly in a local health authority clinic on the 8th of December 2009. The Novartis H1N1 vaccine was administered to the child with a signed parental consent form, as requested by the Turkish Ministry of Health. The boy's parents noticed a swelling in the right arm where the vaccine had been administered, as well as a swelling in the right axillary area, on the night of the same day, when the boy started to complain about pain. When he was brought into the paediatrics clinic on December 9, 2009, a day after the vaccination, examination revealed a hard and painful mass of nearly 2 cm in diameter in the right upper arm shot area, with two other painful but small right axillary swellings. The examination of the other systems revealed no pathology. For the differential diagnosis, axillary ultrasonography (USG) examination was requested alongside tests for blood count (CBC) and C-reactive protein (CRP). Laboratory investigations of CBC and CRP were normal. The patient was not given any anti-inflammatory, antiallergic, or antibiotic treatment. The boy was called into the clinic three days later, on the 12th of December 2009, for a physical examination; it was found that the swelling of the vaccine shot area still existed, although the pain had decreased. The parents turned down our request to take a biopsy for histopathological examination.
Ultrasound is a useful imaging modality in the assessment of lymph nodes, and its features can help in identifying abnormal nodes, including size, shape, echogenic hilus, hypoechogenicity or isoechogenicity, echogeneity, coagulation necrosis, and a sharp nodal border [ 3 ]. Ultrasound features can help identify whether lymphadenitis is reactive or not. In this case, three USG results showed that this was a postvaccine lymphadenitis on the basis of the long-to-short-axis ratio (L/S ratio), oval shape, and hyperechoic hilum. A right axillary USG showed a reactive lymph nodule in the superior area with a size of 15 × 8 mm ( Figure 1(a) ) and two in the inferior area with dimensions of 9 × 6 mm and 7 × 6 mm ( Figure 1(b) ). They were of fusiform shape with a hypoechogenic cortex and a clear echogenic hilus indicator. In Doppler mode, only hilar vascular structures were observed. These findings are meaningful for acute reactive lymphadenitis. A fusiform shape and an echogenic hilus indicate a benign lymphadenitis, as a hypoechogenic cortex is usually observed in acute cases. The boy was called into the outpatients' clinic a week later, on the 16th of December 2009, when a second USG revealed that the 7 × 6 mm nodule had disappeared and there were no significant changes in the structure and size of the other two nodules. The bruising had completely disappeared. However, the third USG scan, carried out two weeks later on the 22nd of December 2009, revealed a decrease in nodule dimensions from 15 × 8 mm to 12 × 6 mm ( Figure 2(a) ), and the second nodule changed from 9 × 6 to 10 × 4 mm, as shown in Figure 2(b) . Furthermore, a decrease in the cortex echo and increased hilus echogeneity were also observed, indicating reactive acute lymphadenitis. The physical examination carried out on the fifteenth day revealed that the patient's arm was back to normal.
Acknowledgment Both authors would like to thank Associate Professor O. Gundogdu of Kocaeli University, Turkey, for his help.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 6; 2010:459543
oa_package/e1/38/PMC3014838.tar.gz
PMC3014839
21209735
1. Introduction Primary hyperparathyroidism is a rare disease, with hyperparathyroid crisis being one of its unusual manifestations. Large rises in PTH levels in benign parathyroid disease are unusual and have been associated with more sinister diseases [ 1 ]. We discuss the case of a patient with a benign cystic parathyroid adenoma presenting in sinister fashion, with, in particular, a massively raised serum PTH level previously seen only in malignant disease [ 2 ].
3. Discussion Primary hyperparathyroidism is a rare disease, and the majority of patients now present asymptomatically after biochemical testing showing a mildly raised serum CCa 2+ and a marginally elevated PTH level [ 3 ]. However, our case highlights two important aspects of parathyroid disease that the authors would like to explore further: firstly, the difficulty in diagnosing a patient with a markedly raised PTH, and secondly, the difficulty faced preoperatively in managing a patient presenting with hyperparathyroid crisis. Preoperative, and even histological, differentiation between benign and malignant pathology is often difficult, and appropriate management requires judgement at the time of surgery, as malignant lesions need more radical procedures. Features suggestive of malignancy at the time of surgery include adherence due to fibrosis, local infiltration, and lymph node involvement. There has been recent work into preoperative diagnosis on biochemical grounds, with a raised CCa 2+ and PTH increasing the index of suspicion for malignant disease: serum PTH levels are typically only mildly elevated in benign disease, whereas carcinoma causes levels up to 10 times normal [ 1 ]. Further stressing the value of PTH in preoperative diagnosis, Robert et al. suggested that a PTH level <4 times the upper limit of normal excludes a malignancy [ 4 ]. Other studies support this trend: a recent case series of solitary adenomas causing asymptomatic disease reported mean preoperative PTH values of 186 [ 5 ] and 165 ng/L [ 6 ], while studies of carcinoma report higher PTH levels of 714 [ 2 ] and 1220 ng/L [ 7 ]. This underlines the unusual nature of a PTH level of 3957 ng/L in an adenoma with no radiological, intraoperative, or histological evidence of malignancy.
CCa 2+ is usually no more than 1 mmol/L above the normal range in benign disease, compared with over 4 times the upper limit of normal in malignant lesions [ 1 ]. Tumour mass has also been shown to correlate with diagnosis. The average adenoma weighs 1 g, whereas the average carcinoma weighs approximately 4 g [ 8 , 9 ], correlating well with Robert's study (1.3 g versus 4.9 g) [ 4 ]. Our case is unusual in that we report a benign adenoma with PTH, CCa 2+ , and weight all suggestive of parathyroid carcinoma. There are several published cases in which benign disease presents suspiciously and PTH in particular reaches levels associated with carcinoma. These include parathyroid cysts, cases presenting in developing countries, patients with oxyphil adenomas, and those presenting, as in our case, in hyperparathyroid crisis. Parathyroid cysts are rare neck lesions [ 10 ]. Case reports and series have documented a wide range of PTH levels, from normal up to 2250 ng/L [ 11 ], with the clinical and biochemical presentations of parathyroid cysts overlapping those of carcinoma [ 12 ], making preoperative differentiation difficult but important, as surgical management will differ. In developing countries, benign adenomas are reported to present with a biochemical profile similar to that of carcinoma. Agarwal et al. reported PTH levels in benign disease similar to those in cancerous disease [ 13 ], attributing this to late presentation and nutritional deficiencies. Oxyphil adenomas have been suggested to account for 3% of all adenomas. Recent case reports describe markedly elevated PTH levels of up to 1291 ng/L [ 14 ], although often without systemic features of crisis. Our case of parathyroid crisis is a further example of suspiciously elevated PTH levels suggestive of parathyroid carcinoma.
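The fold-of-normal reasoning discussed above can be applied mechanically. A minimal sketch, using the 12–75 ng/L PTH reference range stated in this report and the Robert et al. rule (<4 times the upper limit of normal excludes malignancy); the function names are chosen here for illustration:

```python
PTH_UPPER_LIMIT_NG_L = 75  # upper limit of the reference range used in this report

def pth_fold_elevation(pth_ng_l, upper_limit=PTH_UPPER_LIMIT_NG_L):
    """Express a serum PTH level as a multiple of the upper limit of normal."""
    return pth_ng_l / upper_limit

def robert_rule_excludes_malignancy(pth_ng_l):
    """Robert et al.: PTH < 4x the upper limit of normal excludes malignancy;
    at or above that threshold, carcinoma stays on the differential."""
    return pth_fold_elevation(pth_ng_l) < 4

# This patient's presenting PTH of 3957 ng/L is roughly 53x the upper limit
# of normal, so malignancy could not be excluded on biochemical grounds.
print(round(pth_fold_elevation(3957), 1))     # ~52.8
print(robert_rule_excludes_malignancy(3957))  # False
```

The point of the sketch is that biochemistry alone placed this benign adenoma far outside the range the rule tolerates, which is precisely why the presentation was so suspicious preoperatively.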
A study published in 2008 of just under 300 patients undergoing procedures for hyperparathyroidism showed that 2.8% presented in crisis [ 15 ]. These patients presented with PTH levels 3-4 times higher than those with asymptomatic disease and over 10 times normal (up to 1770 ng/L), consistent with PTH levels in malignant disease; 88% of crises were caused by an underlying adenoma. Bleeding into adenomas that have undergone cystic degeneration is a well-recognised cause of crisis, with histopathological findings revealing a large adenoma with cystic spaces filled with haemorrhagic fluid [ 16 ]. There is a suggested trend for tumours excised during hyperparathyroid crisis to be heavier than those in asymptomatic disease (means 7.5 g versus 1.6 g) [ 15 ]. Our case is an example of this, the patient in crisis having an adenoma much heavier than is typical of benign disease. Management of hyperparathyroid crisis has traditionally involved an emergency parathyroidectomy within 72 hours of presentation, which carries a mortality of up to 14% [ 15 , 17 ]. However, recent evidence published by Phitayakorn and McHenry [ 15 ] supports early excision after a period of optimisation rather than emergency surgery. Once volume depletion is corrected, loop diuretic therapy and intravenous bisphosphonate therapy can be initiated [ 18 ]. This relatively rapid reduction in CCa 2+ acts as a “bridge to surgery,” with a mean interval between presentation and operative intervention of 8 days [ 15 ]. This is not dissimilar to our interval of 10 days, which allowed for appropriate operative workup and management of concurrent medical problems. Surgical management of patients presenting in hyperparathyroid crisis secondary to adenomatous disease is effective, with a reported success rate of 92% [ 19 ].
While preoperative medical optimisation is largely successful [ 15 ], there are reports in which medical management has failed and the disease has responded only to definitive surgical management [ 20 ]. Dialysis has also been used as a successful adjunct to medical therapy in the interval between presentation and surgery [ 21 ]. With appropriate medical management prior to surgical excision, mortality rates have fallen, with rates for patients presenting in hyperparathyroid crisis reported as 2.8% [ 18 ]. In summary, our case involves a gentleman presenting in hyperparathyroid crisis with a massively raised serum PTH level that the authors believe to be among the highest reported in benign disease to date. It represents a benign cystic adenoma mimicking malignant disease. It also reinforces the importance of prompt initial medical management, preoperative diagnostic and localising studies, and sound operative judgement, highlighting the difficulties facing the endocrine surgeon when dealing with lesions of the parathyroid gland.
Academic Editor: Chung Yau Lo Hyperparathyroid crisis is a rare manifestation of parathyroid disease. We present the case of a 53-year-old gentleman with a review of the current literature. He presented in acute renal failure with epigastric pain and vomiting. His serum-corrected calcium (CCa 2+ ) was raised at 5.2 mmol/L, in addition to a massively raised parathyroid hormone (PTH) level (3957 ng/L). Ultrasound studies of the neck revealed a 2 cm well-defined mass inferoposterior to the right thyroid lobe. CT scans of the neck showed a normal mediastinum and confirmed no associated lymphadenopathy. After 9 days of medical resuscitation, a neck exploration revealed a cystic mass, which was excised. Histological investigations revealed a 9.25 g cystic parathyroid adenoma with no features of malignancy. His PTH and CCa 2+ returned to normal postoperatively. This suspicious presentation of benign disease, including a marked elevation in PTH, highlights the challenges facing the endocrine surgeon in dealing with parathyroid disease.
2. Case Report A 53-year-old gentleman presented with a 2-week history of worsening epigastric pain, vomiting, and constipation. He reported mild confusion but no loss of consciousness. There was no history of polyuria or polydipsia. He reported longstanding gastroesophageal reflux symptoms but no other abdominal history. His medical history was negative for depression and renal calculi. There was no history of carcinoma or radiotherapy treatment of any kind. He reported no significant family history. On examination he was pale with dry mucous membranes. His chest and abdominal examinations were unremarkable. There was no evidence of bony pain and no palpable lumps in the neck. At admission, he was tachycardic and hypotensive with an increased respiratory rate. He was in acute renal failure, with an elevated urea (19.5 mmol/L) and creatinine (272 μ mol/L). Liver function tests were normal. Laboratory tests revealed a markedly elevated serum-corrected calcium level (CCa 2+ ) of 5.20 mmol/L (normal range 2.12–2.65) and a parathyroid hormone (PTH) level of 3957 ng/L (normal range 12–75). Vitamin D levels were normal. A myeloma screen was negative. An ECG showed normal sinus rhythm, and a chest radiograph was unremarkable. Given the acute presentation and massively elevated PTH levels, the major concern was a possible malignant parathyroid lesion. An ultrasound scan of the neck revealed a 2 cm well-defined oval hypoechoic mass posterior and inferior to the right thyroid lobe ( Figure 1 ). A CT scan of the neck confirmed the 2 cm nodule, which abutted the right tracheal margin, with no associated lymphadenopathy and a normal mediastinum ( Figure 2 ). The patient underwent initial conservative management with aggressive intravenous fluid resuscitation, vitamin D replacement, intravenous loop diuretic treatment, and intravenous bisphosphonate therapy.
On day 9, biochemical markers had improved (urea 5.3 mmol/L, creatinine 179 μ mol/L), and CCa 2+ had fallen to 2.99 mmol/L. The patient underwent a neck exploration on day 10. During the exploration, the right inferior gland was found to be large, cystic, soft, and of a brown colour. There was no evidence of local invasion and no lymphadenopathy. As such, a diagnosis of adenoma was made. Intraoperative PTH assay is not routinely used at our unit and as such was not performed. The right inferior parathyroid gland was excised. The right superior parathyroid gland was normal, and the contralateral neck was not explored. Histological investigations revealed a 3.5 cm × 2 cm × 1.5 cm encapsulated parathyroid adenoma weighing 9.25 g, composed of chief and oxyphil cells with cystic change. The Ki-67 proliferative index was low (1-2%). There was no local, capsular, or vascular invasion and no other features suggestive of malignancy. Given the suspicious clinical picture, the report was further confirmed at another centre. The patient recovered well postoperatively, with PTH levels falling to 45 ng/L within a day. Subsequent recovery included a period of hypocalcaemia with a raised alkaline phosphatase (ALP), which resolved with calcium replacement within a month.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 20; 2010:596185
oa_package/94/4c/PMC3014839.tar.gz
PMC3014844
21209736
1. Introduction Hot water epilepsy (HWE) is a rarely seen, benign form of reflex epilepsy precipitated by the stimulus of bathing with hot water poured over the head. It is considered a geographically specific epileptic syndrome, since it occurs mainly in India. Almost all cases of HWE are seen in healthy children, more frequently in males than in females [ 1 ]. Unusually, we report a 32-year-old pregnant woman with the onset of reflex seizures triggered by pouring hot water over the head while bathing.
3. Discussion HWE is a reflex epilepsy in which seizures are provoked by contact of hot water with the head [ 2 ]. A large number of patients with HWE have been reported from India. There have also been case reports from around the world, including Turkey. Traditionally, Turkish people bathe sitting down, pouring hot water over the head with a bowl. The water temperature varies between 40 and 50°C, and water poured over the head can trigger a seizure. Similarly, the main precipitating factors for seizures in our case were bathing with hot water and pouring water over the head. Traditional bathing habits thus appear to be very important in this type of epilepsy [ 3 ]. To date, the pathophysiologic mechanism of HWE is not clearly known, but the thermoregulatory system, which is extremely sensitive to a rapid rise in temperature, appears to be implicated [ 4 ]. HWE is mostly seen in the first decade of life, with cases more frequent among males than females (70%). However, several features of our patient, such as the age of onset, gender, and coexisting pregnancy, differed from the literature; our case is therefore an unusual presentation of HWE. The seizure pattern seen in HWE consists of complex partial seizures in 67% and generalized tonic-clonic seizures in 33% of cases. Interictal EEG studies are usually normal, as in our case, whereas ictal EEG usually shows focal epileptic activity and paroxysmal discharges with secondary generalization [ 5 ]. Ictal recording was not performed for our patient because of the difficulty of provoking such a reflex seizure. HWE is known as a benign and self-limited reflex epilepsy; avoiding hot water or long showers alone may be sufficient to remain seizure-free. However, approximately one-third of patients with HWE continue to have seizures even during regular baths.
In these patients, carbamazepine, valproic acid, or intermittent oral prophylaxis with benzodiazepines before bathing may be preferred. In our case, we used carbamazepine, one of the conventional antiepileptic drugs, and achieved sufficient seizure control. Since seizures tend to decrease over time, withdrawal of medication should be carefully undertaken only after several years [ 6 ]. Finally, HWE usually has a favorable prognosis, achieved firstly by avoiding hot water and secondly by using intermittent benzodiazepines or antiepileptic drugs.
Academic Editor: Michael S. Firstenberg Hot water epilepsy is a unique form of reflex epilepsy precipitated by the stimulus of bathing with hot water poured over the head. It is mostly seen in infants and children, with a predominance in males. Unusually, we present a 32-year-old pregnant woman with the onset of reflex seizures triggered by pouring hot water over the head while bathing during pregnancy, treated successfully with carbamazepine 400 mg/day. Hot water epilepsy is known as a benign and self-limited reflex epilepsy; firstly avoiding hot water or long showers and secondly using intermittent benzodiazepines or conventional antiepileptic drugs may be sufficient to remain seizure-free.
2. Case Report A 32-year-old, three-month pregnant woman came to our outpatient clinic with a two-month history of seizures occurring while bathing by pouring hot water over the head. She had auras preceding her seizures, consisting of an epigastric sensation, staring, and oral automatisms, followed by loss of consciousness. The postictal state was characterised by a severe throbbing headache and drowsiness. Seizures occurred twice a month and always during bathing. By the time of admission, she had had four similar seizures. She had no spontaneous seizures before the onset of her reflex seizures. There was no family history of epilepsy and no past history of febrile convulsions, mental retardation, birth anoxia, or head trauma. Physical and neurological examinations were normal. Complete blood count, blood biochemistry, electrocardiography, interictal electroencephalography (EEG), and magnetic resonance imaging also revealed normal findings. To avoid the seizures, short baths with lukewarm water instead of hot water were recommended. At one-month followup, her seizures had not stopped during regular baths. She was therefore put on carbamazepine 400 mg/day and remained completely seizure-free.
Acknowledgment The authors report that no significant financial support was received from any organization for this study.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 13; 2010:134578
oa_package/5b/6c/PMC3014844.tar.gz
PMC3014845
21209737
1. Background HIV infection is a global pandemic which in 2007 affected 77,000 people in the UK and 33 million people around the world, according to the World Health Organisation [ 1 ]. The infection has widespread systemic effects, which include the musculoskeletal system. It is thus essential that orthopaedic surgeons are aware of the condition and its sequelae. Antiretroviral therapies, first licensed in 1995, have altered the course of HIV and its manifestations. However, these drugs are also known to have a multitude of side effects, including osteonecrosis and metabolic abnormalities. We report the case of an HIV patient presenting with spontaneous non-traumatic hip pain who, following radiological, microbiological, and serological tests, was diagnosed with pseudogout. This is the first reported case of an HIV-infected patient with hip pseudogout. The case is discussed in the context of HIV and other causes of acute joint pain.
3. Discussion HIV has been linked with a variety of orthopaedic and rheumatological conditions. The first documented association in 1987 was between AIDS and Reiter's Syndrome, followed by gout, osteoporosis, avascular necrosis, septic arthritis, osteomyelitis, and tuberculosis [ 2 ]. 30%–40% of HIV/AIDS patients suffer from arthralgia. Though this can affect any joint, the knees, shoulders, and elbows are most commonly affected. When presented with such a patient, the orthopaedic surgeon must be aware of the possible differential diagnoses. 3.1. Gout Gout is estimated to affect around 0.5% of HIV/AIDS patients per year [ 3 , 4 ]. These patients often have urate abnormalities, 41% with hyperuricaemia and 5% with hypouricaemia, unlike HIV-negative patients whose urate levels are usually normal [ 5 ]. The elevated urate is likely to be the result of the HAART treatment itself [ 6 – 8 ] rather than of the HIV infection, and drugs such as the protease inhibitor ritonavir are known to have this association [ 9 ]. There are several hypotheses for why HAART might cause hyperuricaemia. The drugs may cause mitochondrial toxicity, which can increase the formation of lactate, which then competes with urate for tubular secretion in the kidneys. HAART drugs may also cause a respiratory chain failure that results in ATP depletion, which then increases urate production in the purine nucleotide cycle [ 10 , 11 ]. Thirdly, HAART drugs cause hyperlipidaemia, insulin resistance, and central adiposity, which in turn can lead to gout. 3.2. Pseudogout Despite the association between gout and HIV, there have been no documented cases of pseudogout-associated arthralgia. Whilst most cases of pseudogout are idiopathic, there are relationships with trauma, aging, and metabolic diseases, including hyperparathyroidism and haemochromatosis. 
Calcium pyrophosphate crystals in pseudogout are thought to develop from increased adenosine triphosphate breakdown, which can lead to increased pyrophosphate levels in the joint [ 12 ]. It is difficult to know whether the HIV infection or HAART therapy was directly responsible for the pseudogout or whether it was secondary to another condition. Studies have shown a link between HIV or HAART and hypophosphatemia [ 13 ], hyperparathyroidism [ 14 ], hypothyroidism [ 15 ], and hypercalcaemia [ 16 ]. All of these conditions are known risk factors for pseudogout. 3.3. Septic Arthritis Septic arthritis usually presents with a short history of joint pain and fever, with a raised ESR and an absence of peripheral leukocytosis. It is more prevalent in HIV/AIDS patients who are intravenous drug users or haemophiliacs [ 17 , 18 ], and the majority of cases resolve with appropriate intravenous antibiotics [ 5 ]. However, if the patient fails to improve on antibiotics, open drainage of the joint should be performed in theatre. The most common pathogens are Staphylococcus aureus (60%) and Candida albicans (20%), with rarer pathogens including Stenotrophomonas maltophilia and Prototheca wickerhamii [ 19 ]. 3.4. Tuberculosis There has been a dual global epidemic of tuberculosis (TB) and HIV/AIDS. The World Health Organisation estimated that there were 9.27 million new cases of TB in 2007 (139 per 100,000 population); of these, 1.37 million (14.8%) were HIV positive. The two conditions are interlinked, as the HIV virus specifically eliminates macrophages and CD4 lymphocytes, cells essential for the prevention of active tuberculosis. There has been an increased incidence of extrapulmonary TB, including bone and joint TB, in HIV/AIDS patients [ 20 ]. The diagnosis of skeletal TB is often delayed in developed countries, as it is not commonly encountered. The most common site of musculoskeletal TB among HIV/AIDS patients is the spine (Pott's spine) [ 5 ].
The infection can also affect the weight-bearing joints, in particular the knee and hip. It presents with chronic pain with minimal inflammation. TB osteomyelitis classically presents with pain and swelling of the bone and surrounding soft tissues. There may also be enlarged regional lymphadenopathy or the presence of an abscess or sinus. 3.5. Osteopenia/Osteoporosis HAART causes metabolic changes in the body and has been shown to cause osteopenia and osteoporosis, putting patients at risk of low-energy fractures. A study of 600 HIV-infected individuals on antiretroviral therapy demonstrated a significantly higher prevalence of osteopenia compared with the national average in the US [ 21 ]. The underlying mechanism of bone loss in HIV-infected patients is not fully understood. Studies using markers of bone formation and resorption have shown an uncoupling of these events in individuals with HIV infection [ 22 ]. Other studies have found increased levels of the proinflammatory cytokines tumour necrosis factor (TNF) and interleukin-6 (IL-6) in HIV-infected patients; these cytokines have an important role in osteoclast activation and bone resorption [ 23 ]. 3.6. Osteonecrosis HIV/AIDS patients have an increased risk of developing osteonecrosis, believed to occur as a result of vascular thrombosis caused either by the action of anticardiolipin antibodies or by a deficiency of protein S [ 24 ]. A study using magnetic resonance imaging (MRI) of 339 HIV-positive patients found that 4.4% suffered from asymptomatic osteonecrosis [ 25 ]. It is unclear whether this is the result of the HIV virus, the HAART treatment, or an increased detection of osteonecrosis on scans. The protease inhibitor drugs have been specifically reported to lead to osteonecrosis [ 26 , 27 ].
4. Conclusion HIV/AIDS is a global phenomenon, and it is essential that clinicians are aware of its widespread effects and those of its drug treatments. This is the first documented case of hip pseudogout in an HIV/AIDS patient. Though it is unclear whether this was the result of the HIV infection, the HAART treatment, or an unrelated cause, the case highlights the diagnostic difficulties one can encounter in these susceptible patients.
Academic Editor: Ingo W. Husstedt HIV infection is a global pandemic, currently affecting approximately 77,000 people in the UK and 33 million people around the world. The infection has widespread effects on the body and can involve the musculoskeletal system. It is therefore important that orthopaedic surgeons are aware of the condition and its sequelae. We present the case of a 46-year-old man with a 10-year history of HIV who presented with acute hip pain, difficulty weight-bearing, and constitutional symptoms. Following radiological, microbiological, and serological tests, a diagnosis of pseudogout was established by microscopic analysis of the hip joint aspirate. The patient's symptoms resolved completely following joint aspiration and NSAID therapy. Studies have shown a relationship between HIV infection and gout. The virus has also been linked to osteonecrosis, osteopenia, bone and joint tuberculosis, and septic arthritis from rare pathogens. However, it is difficult to fully ascertain whether these conditions are related to the HIV infection itself or to HAART (highly active antiretroviral therapy). There are no previously reported cases of HIV-infected patients with pseudogout. The case is discussed with reference to the literature.
2. Case Report A 55-year-old HIV-positive Caucasian man presented with a one-day history of spontaneous-onset left hip pain. The patient described a similar episode in the contralateral hip in 1997, which had resolved with analgesia and rest. He had been diagnosed with HIV in 2002 and was immediately commenced on “highly active antiretroviral therapy” (HAART) to control disease progression. His HIV treatment consisted of two nucleoside reverse transcriptase inhibitors (emtricitabine and tenofovir) given as a fixed-dose combination (Truvada 200/245) once per day and the protease inhibitor darunavir 800 mg boosted with ritonavir 100 mg once per day. The patient also had a history of hypercholesterolaemia and syphilis. He was a non-smoker and an infrequent drinker. On presentation he was mildly pyrexial and was unable to weight-bear because of pain. There was a reduced range of both active and passive left hip movement (20–75 degrees of flexion and 10-0-10 degrees of rotation), and the hip was held in 30 degrees of flexion at rest. Other musculoskeletal and neurological examinations were essentially normal. Blood investigations initially found a normal white blood cell count (WCC) of 8.3 × 10 9 /L, a C-reactive protein (CRP) of <3 U/L, an erythrocyte sedimentation rate (ESR) of 10 mm/hr, a creatine kinase (CK) of 88 U/L (normal 25–150 U/L), and negative ANA and ANCA. Plain radiographs of the pelvis demonstrated only mild bilateral osteoarthritis (see Figure 1 ). The patient was admitted under the HIV physicians for bed rest, analgesia, and observation; however, he did not improve. Over the subsequent two days his CRP increased to 109 and his ESR to 90 mm/hr, though his white cell count remained stable (WCC of 8.1 × 10 9 /L). An MRI scan revealed a large left hip effusion and a moderate right hip effusion (see Figure 2 ). An aspiration of the left hip was performed under local anaesthetic and revealed a turbid yellow fluid, which was sent for microbiology and cytology.
A differential count of the aspirate showed 95% neutrophils, and gram staining found no bacteria present. Microscopy revealed positively birefringent crystals, consistent with calcium pyrophosphate. A diagnosis of hip pseudogout was made and the patient was commenced on high-dose nonsteroidal anti-inflammatory drugs (NSAIDs). The patient improved clinically over the next 48 hours and was discharged home fully weight-bearing. He was followed up in both the orthopaedic and rheumatology clinics, where the latter excluded any other major risk factors for the development of pseudogout. He remained well at 6-month review. Conflict of Interests There is no conflict of interests. The authors received no financial or other type of support to carry out this study. This is an original paper and has not been published in any other journal. All authors read and approved this paper for publishing purposes. Consent Written informed consent was obtained from the patient for publication of this case report and any accompanying images. Authors' Contributions B. M. Dala-Ali, M. Welck, and H. D. Atkinson managed the patient. B. M. Dala-Ali and M. A. Lloyd wrote the paper. M. Welck and H. D. Atkinson assisted with the literature review and paper preparation. All authors have read and approved the final paper.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 20; 2010:842814
oa_package/c0/5c/PMC3014845.tar.gz
PMC3014846
21209738
1. Introduction HIV wasting syndrome has been defined by the Centers for Disease Control (CDC), USA, as involuntary weight loss greater than 10% of baseline weight associated with either chronic diarrhoea for at least 30 days or chronic weakness or documented fever for at least 30 days, in the absence of a concurrent illness or condition other than HIV infection that could explain the findings (e.g., tuberculosis, cryptosporidiosis, or other specific enteritis) [ 1 ]. Although the wasting syndrome has declined since the introduction of highly active antiretroviral therapy (HAART) [ 2 , 3 ], weight loss remains a common cause of morbidity and mortality in HIV-infected patients receiving HAART [ 4 , 5 ]. In Nigeria, weight loss and severe wasting often complicate HIV/AIDS [ 6 – 9 ], but we are not aware of any report specifically describing the risk factors, features, clinical course, and outcome of the wasting syndrome as defined by the CDC, especially in patients receiving HAART. We describe a case of the classical wasting syndrome in an unemployed Nigerian widow and discuss the influence of a failing HAART regimen, socioeconomic status, and other clinical variables in the wasting syndrome.
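The CDC case definition quoted above is a conjunction of simple criteria and can be encoded directly. A minimal sketch, with parameter names chosen here for illustration; the example values at the end are hypothetical, not the reported patient's measurements:

```python
def meets_cdc_wasting_definition(weight_loss_pct, diarrhoea_days,
                                 weakness_or_fever_days, other_explanation):
    """CDC HIV wasting syndrome: involuntary weight loss > 10% of baseline,
    plus chronic diarrhoea OR chronic weakness/documented fever for at
    least 30 days, with no concurrent non-HIV illness explaining the findings."""
    if other_explanation:  # e.g., tuberculosis, cryptosporidiosis
        return False
    if weight_loss_pct <= 10:
        return False
    return diarrhoea_days >= 30 or weakness_or_fever_days >= 30

# Hypothetical illustrations:
print(meets_cdc_wasting_definition(15, 35, 0, False))  # True
print(meets_cdc_wasting_definition(15, 35, 0, True))   # False: concurrent illness explains findings
```

The exclusion clause is what makes the definition a diagnosis of exclusion in practice: the weight-loss and duration criteria are trivial to check, but ruling out a concurrent explanatory illness is the clinically demanding step.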
3. Discussion The pathophysiological mechanisms underlying the HIV wasting syndrome involve three major factors: inadequate nutrient intake, nutrient malabsorption, and disturbances in metabolism (reviewed in references [ 10 – 12 ]). The burden of HIV infection is impoverishing [ 13 ]; in sub-Saharan Africa, women and children are the most affected, and they are more likely to suffer from neglect, discrimination, and abuse [ 14 ]. Poor socioeconomic status has been shown to be a strong determinant of weight loss in HAART-experienced HIV-infected patients [ 4 ], who were found to be unable to afford the regular high-protein, high-calorie meals needed for weight gain and for restoration of lost body cell mass. In the reported case, poverty precluded adequate intake of nutritious meals before and during the patient's illness, a problem compounded during the illness by anorexia, oral sores, vomiting, and diarrhoea. Anorexia may also result from anxiety and depression, both common psychiatric complications of HIV infection [ 15 ]. Other recognised causes of inadequate nutrient intake include dysphagia and odynophagia, which may be due to infections of the oral cavity, posterior pharynx, or oesophagus. Both micronutrient and macronutrient malabsorption have been observed in the wasting syndrome, and nutrient malabsorption may occur with or without diarrhoea [ 10 , 16 ]. Chronic diarrhoea leading to malabsorption may be due either to the HIV virus itself (i.e., AIDS enteropathy) or to occult opportunistic infections (e.g., Cytomegalovirus, Clostridium difficile, and Mycobacterium avium intracellulare, among others) [ 17 ]. In AIDS enteropathy, no specific pathogen can be isolated from the gut, and the chronic diarrhoea, motility disturbances, and mucosal atrophy accompanying this condition have been attributed to the direct effects of HIV, especially as viral proteins have been found in the gut mucosa [ 18 ].
AIDS enteropathy is a diagnosis of exclusion; however, complete exclusion of all opportunistic causes of diarrhoea in HIV-infected patients is a challenging task sometimes requiring invasive techniques [ 17 , 19 ]. Consequently, the WHO recommends the use of empiric antimicrobials and constipating drugs such as loperamide, for treatment of chronic HIV-related diarrhoea in resource poor settings [ 20 ]. By leading to malabsorption and/or changes in pharmacokinetics of the ART medications, chronic diarrhoea may also contribute to HAART treatment failure. In a clinical study by Brantley et al. [ 21 ], AIDS-related diarrhoea and weight loss were associated with both subtherapeutic plasma levels of antiretroviral medications and protozoan pathogens in stool. Hence, in patients receiving HAART, prompt and effective treatment of chronic diarrhoea is essential to prevent both weight loss and HAART treatment failure. Various metabolic abnormalities leading to weight loss have been described in HIV-infected patients, and these abnormalities have been attributed to several etiologies including the HIV virus itself, concomitant opportunistic infections, cytokine dysregulation, and hormonal imbalances accompanying HIV infection, as well as ART medications [ 10 , 11 ]. A failing HAART regimen leads to persistent viral replication and progressive immunosuppression as reflected in the CD4 cell count. Correlations have been established between high HIV viral load, low CD4 cell counts, and weight loss [ 4 , 22 , 23 ]. In HAART experienced patients with suppressed plasma viral load, weight loss has been attributed to the persistence of HIV in peripheral blood monocytes and macrophages [ 24 ]. The persistence of HIV leads to excessive cytokine activation and dysregulation, and this in turn triggers various metabolic abnormalities that lead to weight loss such as increase in resting energy expenditure, proteolysis, and hypercatabolism. 
Cytokines may also inhibit anabolism by causing growth hormone resistance and by reducing hepatic production of insulin-like growth factor-1, a messenger of growth hormone [ 25 ]. The increase in resting energy expenditure and cytokine dysregulation in HIV infection may be intensified by concomitant opportunistic infections [ 10 ]. Various cytokines, such as tumour necrosis factor-α (TNF-α), interleukin-1 (IL-1), IL-6, and interferon gamma, have been implicated in these metabolic perturbations [ 11 ]. By interfering with lipid metabolism in muscle, elevated levels of proinflammatory cytokines such as TNF-α also lead to muscle weakness and muscle atrophy [ 26 ]; both are characteristic features of the wasting syndrome. The use of HAART has also been independently associated with an increase in resting energy expenditure, and this has been suggested as one of the factors perpetuating weight loss in the HAART era [ 4 , 10 ]. Although sometimes challenging to distinguish, even with appropriate body composition measurements, HAART-induced weight loss often involves fat loss and/or fat redistribution (lipodystrophy) with little or no loss of lean body mass [ 10 ]. Conversely, the wasting syndrome is characterised by a complex interplay of lean body mass and fat loss, depending on baseline body weight and other factors such as gender [ 10 , 27 ]. With progressive HIV infection, women lose more body fat relative to lean body mass (LBM), while men lose more LBM than fat [ 27 ]. These gender differences in weight loss have been attributed to greater premorbid fat stores in women than in men, as well as to biological and hormonal factors [ 27 ]. Hypogonadism, represented by low levels of androgens such as testosterone, may accompany HIV infection in men [ 28 ]; it may result from the suppressive effects of cytokines on testicular steroidogenesis, as well as from functional disorders of the hypothalamus and/or primary testicular failure [ 29 ]. 
Androgen deficiency inhibits protein synthesis, and this may favour a greater loss of muscle mass relative to fat mass in men. In women, testosterone levels are normally low; although androgen deficiency has been reported in some women with weight loss, its contribution to weight loss in women is less well understood [ 30 ]. Intensive nutritional rehabilitation to prevent or reverse weight loss remains the cornerstone of management of the wasting syndrome [ 10 , 12 ]. The aims are to improve appetite and nutrient absorption by addressing all immediate causes of anorexia and malabsorption, such as oral sores and diarrhoea; to ensure intake of adequate calories from high-protein and low-fat meals, together with micronutrient supplementation; and to address psychosocial issues that affect nutrient intake, such as poverty and depression, by providing social and psychological support. In view of the principal role of HIV in the pathogenesis of the wasting syndrome, effective HAART aimed at reducing viral load to undetectable levels and sustaining immune restoration, as reflected in an improving CD4 cell count, is indispensable [ 12 ]. In combination with adequate caloric intake, fitness training with progressive resistance exercises (e.g., lifting light weights and body-building exercises) increases muscle function and strength, as well as lean body mass and weight [ 31 ]. Conversely, aerobic exercises (e.g., walking, jogging, and running) may result in little or no increase in body mass or weight. Pharmacological treatments are usually reserved for patients who fail nutritional therapy. Appetite stimulants (e.g., megestrol acetate), recombinant human growth hormone (Serostim), and androgenic steroids (e.g., testosterone) in men with hypogonadism have all been approved for the treatment of the wasting syndrome [ 12 ]. 
Cytokine modulators (such as thalidomide) have been investigated for the treatment of the wasting syndrome, but their success rates have been variable [ 11 , 12 , 32 ]. Thus, they are not yet approved for the management of the wasting syndrome, pending convincing evidence of their efficacy from future studies.
4. Conclusion The HIV wasting syndrome is a disorder characterised by multiple pathophysiological mechanisms, most of which are mediated by HIV itself and driven by nutritional abnormalities. HAART treatment failure is an emerging global challenge, especially in developing countries where HIV infection is still endemic and the use of HAART is being scaled up [ 33 ]. A failing HAART regimen leads to HIV persistence and, acting together with psychosocial issues such as poverty, sets the stage for the re-emergence of AIDS-defining illnesses such as the wasting syndrome. To prevent or reverse the resurgence of AIDS-defining illnesses such as the wasting syndrome in this HAART era, ensuring effective uninterrupted HAART, socioeconomic empowerment of HIV-infected patients, and prevention of opportunistic infections are priorities for developing countries such as Nigeria.
Academic Editor: A. R. Satoskar The HIV wasting syndrome represented the face of HIV/AIDS before the advent of highly active antiretroviral therapy (HAART). Although the incidence of wasting has declined since the introduction of HAART, weight loss remains common in patients receiving HAART, especially in the setting of a failing HAART regimen. As we are not aware of any previous reports from Nigeria, we report a case of the classical wasting syndrome in a Nigerian female who had both virological and immunological HAART failure due to poor adherence. The influence of a failing HAART regimen, socioeconomic status, and other clinical variables in the wasting syndrome are discussed.
2. Case History A 32-year-old HIV-1-infected widow was admitted to our tertiary hospital with a 6-month history of progressive lethargy, anorexia, recurrent fever, vomiting, watery non-bloody diarrhoea, and progressive weight loss. She went from 50 kg to 21 kg in 5 months, a 58% weight loss. She was unemployed and had no source of regular income to care for herself or her 3 children. Hence, her meals before and during her illness were irregular and consisted mainly of a carbohydrate-based diet. On examination, she was conscious, prostrated, and severely wasted, with generalised wasting of muscle groups, loss of subcutaneous fat, and prominence of bones. There was no evidence of localised subcutaneous fat atrophy or fat redistribution. Her body mass index was 10.2 kg/m², and the mid-upper arm circumference was 10 cm (Figures 1 and 2 ). She had hypoproteinemic hair and skin changes, dehydration, fever (37.8°C), pallor, and oropharyngeal candidiasis but no lymphadenopathy. Chest, cardiovascular, and abdominal examinations were normal. She had been on HAART (Zidovudine, Lamivudine, Nevirapine) for 16 months, but adherence was poor (<70%). On HAART, her viral load reduced from 421,000 copies/ml to 41,000 copies/ml, but her CD4+ T cell count dropped from 168 cells/ul to 34 cells/ul. Full blood count revealed microcytic hypochromic anaemia (packed cell volume of 24%) and leukopenia of 1.2 × 10³/ul. Three separate stool microscopies and cultures, including modified Ziehl-Neelsen staining for cryptosporidiosis and isosporiasis, were all negative for parasites and bacteria, as well as for cells such as red blood cells and white blood cells. Apart from mild hypokalemia (serum potassium of 3.0 mmol/l) and a low serum albumin of 22 g/l, renal and liver function tests were normal. Chest X-ray and abdominopelvic ultrasound were normal. Hepatitis B and C serology and blood film for malaria parasites were also negative. 
She was rehydrated, and the diarrhoea was controlled with an antimotility agent (loperamide tablets) after empiric treatment for infectious diarrhoea with albendazole tablets 400 mg daily for 3 days, intravenous ciprofloxacin 500 mg twice daily for one week, and tinidazole tablets 2 g stat. Oral fluconazole 200 mg daily for oral thrush, prophylactic Septrin 960 mg daily, multivitamins, and haematinics were also given. She had intensive nutritional rehabilitation with a locally available high-protein and high-energy diet as well as fruits and vegetables. Specifically, “kwashiorkor pap” or “kwashi pap”, a high-protein local diet made up of a mixture of ground guinea corn, ground soya bean, ground crayfish, ground dried fish, roasted ground groundnuts, and boiled water, was given orally three to five times per day, along with a variety of other local foods. Additional adherence counselling was given, and ART was switched to Emtricitabine/Tenofovir and Nevirapine on account of the anaemia. All symptoms gradually resolved, and she gradually regained her physical strength with improved physical activity. At the end of 6 weeks, her weight had increased by about 8 kg, and she was thereafter discharged to the social welfare, nutritional, and adherence counselling unit. Authors' Contributions We declare that this work was done by the authors, and all liabilities pertaining to claims relating to the content of this article will be borne by the authors. The first author conceived the report; all authors were involved in patient management, manuscript preparation, and review, and they approved the final version for publication. Conflict of Interests There is no conflict of interests associated with this work. Ethics Consent was obtained from the patient for clinical photos.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 27; 2010:192060
oa_package/16/80/PMC3014846.tar.gz
PMC3014852
21209739
1. Introduction Patients with cancers, particularly brain tumors, are at considerable risk for deep vein thrombosis (DVT) and pulmonary embolism (PE). Twenty-six percent of patients with high-grade gliomas developed DVTs, and 15% PEs, within 12 months of treatment [ 1 ]. Larger and more symptomatic PEs are often treated with thrombolytics; however, surgical indications are expanding. Thrombolytics or high-dose anticoagulation for cardiopulmonary bypass is typically contraindicated in patients with significant intracranial pathology [ 2 ]. Our favorable surgical management of a patient with a postinfarct ventricular septal defect and acute intracranial hemorrhage [ 3 ] has prompted us to consider pulmonary embolectomy for patients with known brain cancers who develop massive PEs. Our patient was a 47-year-old man first diagnosed with a right temporal lobe glioblastoma multiforme (GBM) in 2005, treated with surgical resection, with reoperations in 2007 and 2008. He continued to receive cycles of chemotherapy and local irradiation for disease recurrence. He had mild right-sided weakness and short-term memory and cognitive deficits but was functional and living at home. In February 2009, he developed acute shortness of breath, and chest CT confirmed a large saddle pulmonary embolism ( Figure 1 ). He was hemodynamically stable and was anticoagulated with a weight-based heparin protocol. Despite adequate anticoagulation, he became fatigued and hypoxemic (O 2 saturation: 90% on 3 liters via cannula). A transthoracic echocardiogram showed reduced global right ventricular function, and his B-type natriuretic peptide was elevated at 212 pg/mL. His troponin was 0.14 ng/mL (normal <0.11). His heart rate was 105 bpm, his respiratory rate was >20/min, and he felt short of breath sitting up and could not lie flat. Because of his brain tumor, based upon established guidelines for the management of PE, thrombolytic therapy was felt to be an absolute contraindication [ 2 ]. 
Because of his anatomically large PE and worsening clinical picture, surgical therapy was offered. Median sternotomy was performed, cardiopulmonary bypass was initiated with full heparinization (goal activated clotting time >350 seconds), the aorta was cross-clamped, and the heart was arrested. The main pulmonary artery was opened, and large emboli were extracted from the main and left pulmonary arteries and lobar branches. The right pulmonary artery was exposed between the superior vena cava and the aorta, and additional clots were extracted manually and with a small suction catheter. The pulmonary arteries were irrigated clean with heparinized saline. Total bypass time was 40 minutes, and the cross-clamp time was 30 minutes. Immediately postoperatively, an IVC filter was placed. He was extubated within 24 hours. His neurologic function was unchanged, and his postoperative course was unremarkable. Venous duplex scanning showed extensive clot in both legs, and systemic anticoagulation with heparin was started 12 hours postoperatively. He was discharged to a rehabilitation facility, on room air and sodium warfarin, on postoperative day 8. Two weeks later, he returned home, and at 6 weeks a repeat MRI showed no disease progression ( Figure 2 ). He survived for 12 additional months at home before returning with an overwhelming infection and septic shock, at which time support was withdrawn.
2. Discussion Surgical management of massive PEs is an acceptable therapy. For example, Kadner recently reported a 30-day mortality of 8% following salvage pulmonary embolectomy [ 4 ]. Of note, 1 of the 2 deaths was from intracerebral bleeding. In their experience, most patients had significant hemodynamic compromise (32% had a preoperative arrest) and 16% had cancer. But the management of patients with massive PEs and an intracranial process can be challenging. Such patients are at risk of dying from shock or acute right heart failure. Therapies must be balanced against the risk of worsening or precipitating an intracranial bleed. A recent report of thrombolytic therapy in a patient with a PE and a brain tumor argues that the <10% risk of hemorrhagic transformation must be balanced against the 25–100% case fatality rate of massive PEs [ 5 ]. However, there is little additional data to support this practice. Another treatment option is catheter-directed therapy—even in patients with contraindications to systemic thrombolytic therapy. These techniques involve direct mechanical fragmentation, aspiration, or direct thrombolytic therapy and have been shown to be effective and safe in patients with massive PE [ 6 ]. These techniques, when successful, are associated with quick resolution of both symptoms and hemodynamic instability. However, there is a lack of standardized protocols, and no device or catheter has an approved indication from the United States Food and Drug Administration (FDA) for the treatment of pulmonary embolism. Likewise, no thrombolytic agent has FDA approval for direct pulmonary infusion. Nevertheless, recent guidelines describing the management of PE suggest that catheter-directed therapy can be an acceptable option for the treatment of massive PE and can potentially serve a life-saving role in an institutional management algorithm [ 2 ]. 
Cardiac surgery, because of presumed bleeding risks, has historically been contraindicated in patients with severe intracranial pathology. Our recent experience [ 3 ] and Fukuda's report of 3 successful cases of salvage pulmonary embolectomy in patients with recent intracranial bleeding [ 7 ] contribute to the rationale for operating on these acutely ill patients—particularly those in whom full anticoagulation with heparin for bypass might be theoretically less harmful than systemic lytic therapy. Our case demonstrates the feasibility of extending the surgical management of massive PEs to patients with advanced brain cancers. While such interventions may be considered aggressive or futile in neurologically debilitated patients with known life-limiting medical problems, our experience suggests the contrary. Preoperative neuro-oncology consultation projected a 6–12-month, albeit unpredictable, survival. In addition, this experience also emphasizes the need for open communication, particularly with a Palliative Care team, and a reasonable set of expectations and advance directives in patients with advancing diseases.
3. Conclusions As experience with the surgical management of acute massive pulmonary embolism grows, indications can include patients traditionally considered at prohibitive risk or in whom thrombolytics are contraindicated. Our case demonstrates that pulmonary embolectomy can be successful in patients with advanced brain tumors. Our experience also emphasizes that in each case, regardless of the comorbidities and management plan, careful consideration must be given to the patient's baseline and anticipated functional status, expected duration of survival, and, most importantly, any known advance directives. In some centers, depending on team preferences and skills, catheter-based therapies may be an acceptable alternative to surgical options. While we advocate—based upon our experience—surgical management, regardless of the preferred approach, an institutional algorithm for the treatment of massive PE should include patients who have contraindications to thrombolytic therapy.
Academic Editor: Graham Frederick Pineo Pulmonary emboli are frequent causes of morbidity and mortality in patients with brain tumors. Treatment options are limited in these complex patients. We report a case of successful acute pulmonary embolectomy in a patient with an advanced brain cancer.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 20; 2010:862028
oa_package/65/25/PMC3014852.tar.gz
PMC3014853
21209740
1. Introduction Brugada syndrome is a clinical entity first described in 1992 [ 1 ]. It is an autosomal dominant, genetically predisposed disorder characterized by ST-segment elevation in the right precordial EKG leads and a high incidence of sudden cardiac death (SCD) in patients with structurally normal hearts. Almost 4% of patients presenting with SCD have Brugada syndrome, and it should therefore be looked for in such patients. It is more common in males than females and typically first presents in the third decade of life, although it has been reported in children and the elderly too [ 2 ]. We present a case in which the diagnosis was missed at the early presentation and the patient went on to develop SCD and its sequelae.
3. Discussion Brugada syndrome has three types according to the EKG patterns. Type 1 has a “coved-” type ST elevation with at least 2 mm J-point elevation, a gradually descending ST-segment, and a negative T wave in more than one right precordial lead (V1–V3). Type 2 has a “saddleback” pattern with at least 2 mm J-point elevation and at least 1 mm ST elevation, with a positive or biphasic T wave. The type 2 pattern can occasionally be seen in healthy subjects. Type 3 has a saddleback or coved pattern with less than 2 mm J-point elevation and less than 1 mm ST elevation, with a positive T wave. The type 3 pattern is not uncommon in healthy subjects. The type 2 and type 3 Brugada patterns are not specific enough to be considered diagnostic [ 2 ]. The pathophysiology of Brugada syndrome is still debated between the depolarization and repolarization hypotheses. According to Wilde et al., there is compelling evidence favoring the repolarization hypothesis [ 3 ]. Dominant loss-of-function mutations in the cardiac myocyte sodium channel gene SCN5A [ 4 ] have been well documented in several cases. Other mutations, in glycerol-3-phosphate dehydrogenase-like peptide, the L-type calcium channel (CACNA1c and its β subunit CACNB2b), and potassium outward current channels, also have an association with Brugada syndrome [ 5 ]. New mutations are being added to the current literature on a regular basis. Genetic testing for Brugada syndrome may confirm a diagnosis in patients suspected of having Brugada syndrome, as well as differentiate between relatives who are at risk for the disease and those who are not. The Brugada pattern is a dynamic EKG finding, and it may not always appear on 12-lead EKG. A drug challenge test is used to establish the diagnosis; however, this test is not required if the type 1 Brugada pattern exists on the 12-lead EKG. Ajmaline, flecainide, pilsicainide, procainamide, disopyramide, and propafenone are the drugs utilized to unmask Brugada syndrome [ 2 , 6 ]. 
It is definitively diagnosed when a type 1 ST-segment elevation is observed in >1 right precordial lead (V1 to V3), in the presence or absence of a sodium channel-blocking agent, and in conjunction with one of the following: documented ventricular fibrillation (VF), polymorphic ventricular tachycardia (VT), a family history of sudden cardiac death at <45 years old, coved-type EKGs in family members, syncope, or nocturnal agonal respiration [ 7 ]. Brugada syndrome may become unmasked in association with tricyclic antidepressant use, vagotonic agents, cocaine and alcohol intoxication, and febrile states [ 8 , 9 ]. This is the reason why all Brugada syndrome patients need to be educated about the aggressive treatment of fever and the medications to be avoided. ICD placement is the mainstay of treatment, as pharmacologic treatment has no mortality benefit. The DEBUT study has shown that ICD treatment is superior to beta blockers [ 10 ]. New pharmacological agents are still under experimental study. Symptomatic patients displaying the type 1 Brugada EKG (either spontaneously or after sodium channel blockade), who present with aborted sudden death, should receive an ICD without additional need for electrophysiologic study (EPS). Asymptomatic patients can be divided into 2 main categories: (1) those with a spontaneously occurring type 1 Brugada pattern and (2) those showing a type 1 Brugada pattern after a drug challenge. The role of EPS inducibility among asymptomatic patients is controversial. Although one third of asymptomatic patients have inducible ventricular arrhythmias, inducibility does not seem to equate to actual risk [ 11 ]. In the past, asymptomatic Brugada syndrome patients were associated with poor prognosis. The recent FINGER Brugada syndrome registry showed that in asymptomatic individuals the cardiac event rate per year was only 0.5%. Symptoms and a spontaneous type 1 Brugada pattern are strong predictors of future arrhythmic events. 
Gender, family history of SCD, presence of an SCN5A mutation, and inducibility of ventricular arrhythmia were not predictive of arrhythmic events [ 12 ].
4. Conclusions Physicians, especially emergency department and general physicians, should be aware of Brugada EKG patterns in the differential diagnosis of ST-segment elevation in the anterior precordial leads of the EKG. In a majority of cases, and especially in young patients, consultation with a cardiologist or electrophysiologist is required. In this case, early diagnosis and prompt intervention might have prevented SCD and its subsequent sequelae.
Academic Editor: William J. Brady Introduction . Brugada syndrome accounts for about 4% of sudden cardiac deaths (SCD). It is characterized by an ST-segment elevation in the right precordial electrocardiogram (EKG) leads. Case Presentation . We describe a 39-year-old healthy Caucasian man who was admitted to the intensive care unit after being cardioverted from ventricular fibrillation (VF) arrest. His past history was significant for an episode of syncope one month prior to this presentation, for which he was admitted to an outlying hospital. EKG during that admission showed ST elevations in the V1 and V2 leads, a pattern similar to type 1 Brugada. The diagnosis of Brugada syndrome was missed, and the patient had a cardiac arrest a month later. We discuss a short review of Brugada syndrome and emphasize the need to look for it in patients presenting with SCD and malignant arrhythmias. Conclusion . Physicians should always consider Brugada syndrome in the differential diagnosis of ST-segment elevation in the anterior precordial leads of the EKG and associated VT/VF. Although more than 17 years have passed since the first case was reported, increased awareness of this syndrome is needed to identify patients with EKG changes and treat them accordingly to prevent the incidence of SCD and its deleterious complications.
2. Case Description A 39-year-old Caucasian man, with a past medical history of hypothyroidism on Synthroid and no prior history of coronary artery disease, was in his usual state of health until he suddenly developed abnormal respiration and loud snoring while sleeping. His wife, who was awakened by the loud snort, found him unresponsive. She immediately called emergency medical services (EMS) and initiated cardiopulmonary resuscitation. EMS arrived 10 minutes later and found the patient to be in ventricular fibrillation. He was successfully converted to sinus rhythm by multiple external defibrillations. He was intubated in the field and was then brought to the intensive care unit and initiated on a hypothermia protocol. Review of the records from his previous hospitalization revealed a significant episode of syncope one month prior to this presentation. The electrocardiogram (EKG) at that admission showed sinus rhythm with incomplete right bundle branch block and persistent ST elevation in the V1 and V2 leads, a pattern similar to type 1 Brugada syndrome ( Figure 1 ). During this admission, he was ruled out for myocardial infarction with serial cardiac biomarkers and was followed with further workup. His initial transthoracic echocardiography (TTE) was significant for an ejection fraction (EF) of 25% with global hypokinesis. Subsequent transesophageal echocardiography (TEE) showed normal left ventricular function with an ejection fraction of 65%. Cardiac catheterization revealed no significant coronary artery disease. During the entire admission the patient was in sinus rhythm; he developed visual changes secondary to anoxic brain injury. An implantable cardioverter-defibrillator (ICD) was placed as per current recommendations, and the patient was subsequently discharged.
Consent Written informed consent was obtained from the patient for publication of this case report and accompanying image. A copy of the written consent is available for review by the Editor-in-Chief of this journal. Conflict of Interests The authors declare that they have no competing interests. Authors' Contributions J. K. Kalavakunta was a major contributor for the paper. V. Bantu and M. Kodenchery were involved in the patient care and contributed to writing and reviewing the paper. H. Tokala reviewed the literature and also contributed to writing and reviewing of the paper. All authors read and approved the final paper.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 14; 2010:823490
oa_package/ea/9c/PMC3014853.tar.gz
PMC3014854
21209741
1. Introduction Suxamethonium, a depolarizing muscle relaxant, is frequently used by anaesthetists to produce fast-onset muscle relaxation for endotracheal intubation. Its pharmacological properties include a rise in intraocular pressure (IOP) [ 1 ]. The mechanism by which it raises IOP has attracted many comments and reports, and it is still not completely understood. For a long time, it was accepted that contraction of the extraocular muscles (during fasciculations) causes compression of the globe and raises IOP. This idea became doubtful after the work of Kelly et al. [ 2 ]. They measured intraocular pressure in a series of patients whose extraocular muscles had been unilaterally severed before elective enucleation and compared the pressure changes with those of the contralateral eye, with the extraocular muscles intact. The authors noticed that there were no differences in intraocular pressure between the eyes, before or after intravenous injection of suxamethonium; there was a significant increase in IOP in both eyes, to about the same extent. This observation by Kelly et al. [ 2 ] stimulated the proposition of other hypotheses to explain the IOP increase after suxamethonium. Kelly et al. [ 2 ] postulated that the most likely mechanism of raised IOP by suxamethonium is its effect on aqueous humour fluid dynamics. They relied on a previous study that showed that suxamethonium has cycloplegic effects on the eye, causing relaxation of accommodation and reduction of axial thickening of the lens [ 2 ]. The time course of this cycloplegia parallels the metabolism of suxamethonium by pseudocholinesterase, as well as that of the raised IOP [ 2 ]. Because of the action of suxamethonium in raising IOP in the intact eye, its use in penetrating eye injuries and open globe surgery has been very contentious. Rational reasoning and literature reports [ 3 , 4 ] suggest that ocular contents, especially vitreous humour, could be expelled from the eye in such situations. 
This may cause permanent blindness in the patient. However, many competent authorities dispute this reasoning and these reports [ 5 ] and argue that suxamethonium can be safely used in well-anaesthetised patients with an open globe [ 5 , 6 ]. The anaesthetist is thus faced with the dilemma of what to do when rapid sequence induction of anaesthesia is desired in full-stomach patients with penetrating eye injuries billed for emergency surgery. Alternatives to suxamethonium in such instances do not exactly replicate the rapid onset and offset of action of suxamethonium. For example, whereas large-dose rocuronium (1.2 mg/kg) approximates suxamethonium in onset of action and intubating conditions, suxamethonium is considered clinically superior for its shorter duration of action [ 7 ]. Reports that seem to implicate suxamethonium in extruding ocular contents in the anaesthetized patient are all anecdotal [ 3 , 4 , 8 ]. There is, as yet, no formal, well-documented case report of this phenomenon in the medical literature, to our knowledge. We present this case report of the inadvertent loss of vitreous humour in a polytraumatized road traffic accident patient, billed for emergency laparotomy and right corneal repair, after a midazolam-ketamine-suxamethonium induction of anaesthesia.
3. Discussion The cause of the vitreous humour extrusion in this patient, based on the concept of preponderance of probability, is most likely suxamethonium. Other, more remote causes could be the use of ketamine at induction of anaesthesia and the pressure of the face mask on the right globe during anaesthetic preoxygenation of the patient. Reports of the effect of ketamine on intraocular pressure in the literature are conflicting. While some reports claim it increases IOP [ 9 ], others claim a drop in IOP [ 10 ], and yet others claim no effect [ 10 ]. Studies that showed increased IOP with ketamine used doses far in excess of what is used in clinical practice [ 10 ]. Thus, it can safely be stated that clinical doses of ketamine (<3 mg/kg i.v.) do not increase IOP. It is therefore unlikely that the induction dose of ketamine (100 mg) in this 74 kg man contributed to the vitreous humour extrusion. It is also very unlikely that the pressure of the face mask on the globe was responsible, as extreme care was taken to avoid this during anaesthetic preoxygenation of the patient. This leaves suxamethonium as the only confounding variable to explain the inadvertent vitreous extrusion in this patient. There are, theoretically, other factors that could raise IOP and possibly cause or contribute to vitreous extrusion. These include inadequate depth of anaesthesia before intubation, the hypertensive response to intubation, carbon dioxide (CO 2 ) retention from suxamethonium apnoea, and bucking and straining from inadequate neuromuscular blockade. We do not think any of these apply to our patient. Steps were taken to ensure sufficient depth of anaesthesia at induction. Hence, we used a combination of intravenous midazolam (10 mg) and ketamine (100 mg). Each of these drugs at the dose used can induce anaesthesia on its own. In addition, midazolam reduces intraocular pressure [ 11 ]. 
Deep anaesthesia, by itself, mitigates the hypertensive response to intubation and rises in IOP [ 12 ]. In this patient, there was a modest rise in blood pressure after intubation, from the baseline value of 110/70 mmHg to 128/85 mmHg. This rise is not clinically significant. CO 2 retention and inadequate neuromuscular blockade are unlikely, given the fact that suxamethonium is still the fastest neuromuscular blocking agent, with excellent relaxation. The duration of apnoea is thus minimal, with no bucking or straining at intubation with an appropriate dose. Lincoff et al. [ 3 ], while reporting their study of the effects of suxamethonium on IOP, reported anecdotal personal communications from surgical colleagues (ophthalmologists) and stated inter alia: “since the publication of the previous article (describing the effects of succinylcholine on IOP), various communications have been received from ophthalmologists who used succinylcholine at surgery. These included several reports of cases in which succinylcholine was given to forestall impending vitreous prolapse, only to have a prompt expulsion of vitreous occur” [ 4 ]. Four instances of such personal communication were given, with no further details. In the same year that Lincoff's report appeared (1957), Dillon et al. [ 4 ] reported another anecdotal personal communication from an ophthalmologist colleague. In their report, they stated inter alia: “it has been reported to us by Godman that a small amount of vitreous was lost from the eye of a patient undergoing cataract surgery wherein succinylcholine was administered to the patient under light anaesthesia at the time that the sclera had been incised and the anterior chamber opened” [ 4 ]. Dillon et al. [ 4 ] actually went on to conclude that “it would appear, therefore, that the administration of succinylcholine for intraocular surgery is at least hazardous and possibly contraindicated.” The anecdotal reports of Lincoff et al. [ 3 ] and Dillon et al. 
[ 4 ] were instrumental in forging, from their day onward, a near-unanimous clinical opinion that suxamethonium is contraindicated in penetrating eye injury or open eye surgery. When Libonati et al. [ 5 ] reported in a retrospective study that there was no extrusion of ocular contents with the use of suxamethonium in penetrating eye injuries, there was a spate of discussion and letters to the editor. One of these letters contained another anecdotal report of the loss of ocular content with the use of suxamethonium in penetrating eye injury [ 8 ]. Rich et al. [ 8 ] reported thus: “the expulsion of intraocular content after succinylcholine induction is more than merely a theoretical concern. One of us (A. L. R.) has witnessed this complication, and the result was enucleation following a simple scleral laceration” [ 8 ]. Chidiac [ 12 ] reported a retrospective study at their institution, where suxamethonium was used in 8 cases of open eye surgery. There were no reports of vitreous loss, no lens or uvea extrusion, and no excessive intraocular bleeding. Chidiac, however, added that suxamethonium administration was preceded by drugs that attenuate its intraocular pressure effects, such as thiopentone, propofol, narcotics, nifedipine, or lignocaine [ 12 ]. Chidiac's paper [ 12 ], together with many others before and after it, suggests that suxamethonium may be used safely in open eye surgery once steps are taken to mitigate its tendency to raise IOP. These steps include pretreatment with nondepolarizing muscle relaxants and a good depth of anaesthesia before suxamethonium administration. However, many competent authorities believe that pretreatment with nondepolarizing muscle relaxants has no effect on the suxamethonium-induced rise in IOP [ 13 ]. Other pharmacologic agents that reduce IOP can obtund the intraocular hypertension caused by suxamethonium. These include intravenous opioids like fentanyl and alfentanil [ 14 ], and alpha-2 agonists like dexmedetomidine [ 15 ]. 
While these agents may not prevent the rise in IOP with suxamethonium and intubation, the IOP does not rise beyond baseline values with alfentanil [ 15 ] and dexmedetomidine [ 15 ]. In retrospect, we believe that the vitreous extrusion in this patient might have been prevented had further steps been taken to deepen anaesthesia, by administering the fentanyl before giving the suxamethonium. This would have further reduced IOP (in addition to the effect of midazolam) and might have completely mitigated the IOP rise with suxamethonium and endotracheal intubation. In these days of evidence-based practice, perhaps only large-dose rocuronium (1.2 mg/kg) rivals suxamethonium in speed of onset [ 7 ] and thus offers a suitable alternative for the rapid-sequence induction technique. It does, however, have a longer duration of action. In situations of “can't intubate, can't ventilate,” sugammadex should be at hand for rapid termination of muscle paralysis. Where this is not available, suxamethonium may still be the best option.
4. Conclusion The place of suxamethonium in open eye surgery has both proponents and opponents. It is pertinent to note that the misgivings about suxamethonium for open eye surgery have arisen from anecdotal reports, of which only 3 have been reported from 1957 to the time of writing this paper [ 3 , 4 , 8 ]. These anecdotal reports are not formal peer-reviewed case reports, but rather personal communications. On the other hand, several retrospective studies in humans [ 5 , 12 ] suggest that suxamethonium can safely be used in open eye surgery in the well-anaesthetized patient, without extrusion of ocular contents. It is further argued that anecdotal reports are not sufficient to sustain the teaching against the use of suxamethonium in penetrating eye injury or open globe surgery. However, our own experience with this patient has taught us that the dictum primum non nocere is still very relevant to the use of suxamethonium in the open globe.
Academic Editor: Michael G. Irwin Introduction . Suxamethonium, a depolarizing muscle relaxant, increases intraocular pressure. Its avoidance in open globe surgery is therefore advised, for fear of extruding ocular contents. Several anecdotal reports support this fear; some workers, however, dispute the claim. There is as yet no formal case report in the literature on the subject. Case Presentation . A 34-year-old Nigerian man was involved in a road traffic accident. He presented at the Accident & Emergency Unit of our hospital about 2 hours after the accident. Clinical examination revealed right corneal laceration (with intact ocular contents) and intra-abdominal visceral injury. Emergency laparotomy was scheduled, to be followed by corneal repair. Anaesthesia was induced with 10 mg midazolam, 100 mg ketamine, and 100 mg suxamethonium given intravenously in sequence. After laparotomy, the ophthalmologists reported for the corneal repair, only to find that the vitreous humour had been extruded. Conclusion . The fear about the use of suxamethonium in open globe situations is real. It would be good clinical judgment to use alternative drugs and techniques to effect rapid muscle relaxation in the anaesthetic management of the open globe patient. This would be of interest to anaesthetists, ophthalmologists, and clinical pharmacologists, among others.
2. Case Presentation A 34-year-old man was brought to the Accident and Emergency Unit of the University of Benin Teaching Hospital, Benin City, Nigeria. The patient had been the victim of a road traffic accident about 2 hours before presentation. Clinical examination showed an anxious-looking man with multiple bruises on the face, anterior abdominal wall, and right upper arm. He was clinically pale, with a pulse rate of 122 bpm and blood pressure of 90/60 mmHg. The respiratory rate was 32/min, but the chest was clinically clear. The abdomen was distended and globally tender, with guarding and absent bowel sounds. Abdominal paracentesis revealed hemoperitoneum. Ophthalmological examination showed right corneal laceration with intact intraocular contents. Results of preoperative investigations were a packed cell volume (PCV) of 22%, normal serum electrolyte and urea levels, and normal urinalysis. Four units of whole blood were cross-matched. A diagnosis of intra-abdominal visceral damage with right corneal laceration, in a polytraumatized patient, was made. A decision was made to perform an emergency laparotomy before corneal repair. The damaged right eye was strapped with sterile gauze. The patient was resuscitated with 2 litres of normal saline and 500 mL of isoplasma, given intravenously. He was given intravenous metoclopramide 10 mg and ranitidine 50 mg. In the operating theatre, monitors were attached. Baseline pulse rate was 100 bpm, and blood pressure was 110/70 mmHg. SpO 2 was 96–99% in room air. Anaesthesia was induced with intravenous midazolam 10 mg and ketamine 100 mg, while cricoid pressure was applied on loss of consciousness. Intravenous suxamethonium 100 mg was given. Laryngoscopy and endotracheal intubation were done after the fasciculations. The blood pressure rose to 128/85 mmHg after endotracheal intubation and returned to baseline levels after about six minutes. The patient was connected to the anaesthetic machine via a circle absorber breathing system. 
He was mechanically ventilated with 100% O 2 and 1-2% halothane. Correct placement of the endotracheal tube was confirmed by chest auscultation and observation of the pulse oximeter. Intravenous fentanyl 100 μ g was given, followed by 6 mg pancuronium i.v. on return of spontaneous breathing. Anaesthesia went well, with vital signs within normal limits. The surgical findings at laparotomy were a ruptured spleen and transverse colon. Splenectomy and colon repair were done. Thereafter, the ophthalmologists were invited to effect the right corneal repair while the patient was still under anaesthesia. The sterile gauze dressing was removed. On examining the eye, it was found that the vitreous humour had been extruded. The ophthalmologists decided to perform evisceration at a later date. The eye was redressed. Anaesthesia was terminated and the patient was ventilated with pure oxygen. Muscle paralysis was reversed with neostigmine 2.5 mg i.v. and atropine 1.2 mg i.v. The airway was suctioned and the patient was extubated on return of spontaneous breathing. He was transferred to the Post-Anaesthesia Care Unit (PACU). After about 45 minutes, the patient was transferred to the ward, with full consciousness and stable vital signs.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 27; 2010:913763
oa_package/c8/2b/PMC3014854.tar.gz
PMC3014855
21209742
1. Introduction The main effect of parathyroid hormone (PTH) is to increase the plasma calcium concentration by increasing the release of calcium and phosphate from the bone matrix, increasing calcium reabsorption by the kidney, and increasing renal production of 1,25-dihydroxyvitamin D3 (calcitriol), which elevates plasma calcium [ 1 , 2 ]. PTH also causes phosphaturia, thereby decreasing the serum phosphate level [ 1 , 3 , 4 ]. Usually, four parathyroid glands are located posterior to the thyroid gland. Primary hyperparathyroidism (pHPT) is a disease characterized by hypercalcemia attributable to autonomous overproduction of PTH. Although some patients with pHPT may have normal serum calcium concentrations, most of them have hypercalcemia. Therefore, pHPT can often be detected by routine serum calcium measurement. pHPT is present in about 1% of the adult population; the prevalence increases to 2% or higher after age 55, and the disease is 2 to 3 times more common in women than in men [ 5 , 6 ]. In approximately 80–85% of cases of primary hyperparathyroidism, a single adenoma is found [ 2 , 5 ]; multiple gland hyperplasia or neoplasia is present in the remaining 15% [ 4 , 7 , 8 ]. pHPT is the commonest cause of hypercalcemia, affecting approximately 4 per 100,000 of the population per annum, with a peak age of incidence of 50–60 years. It affects females more than males, with a ratio of 3 : 1 [ 2 , 4 ]. pHPT is a severe, symptomatic disease with serious complications and high morbidity in Iran. Advanced skeletal disease is the most common pattern of presentation at a young age [ 2 ].
3. Discussion There are striking similarities between the clinical and laboratory findings of pHPT from Iran and other eastern regions [ 8 , 9 ]. Measurement of the intact PTH level is the core of the diagnosis of hyperparathyroidism [ 3 ]. Elevated PTH and serum ionized calcium levels are diagnostic of pHPT [ 4 ]. A 24-hour urinary calcium measurement is necessary to rule out familial hypocalciuric hypercalcemia, which is characterized by a calcium/creatinine clearance ratio <0.01 [ 4 ]. Patients with pHPT usually excrete more than 200 mg of calcium per 24 hours [ 4 ]. Imaging studies are not used to diagnose or confirm hyperparathyroidism, or to decide on surgical therapy [ 9 , 10 ]. If a limited parathyroid exploration is to be attempted, however, a localizing study is necessary [ 9 – 11 ]. In patients who have recurrent or persistent hyperparathyroidism after previous surgery, an imaging study will be necessary [ 12 ], in which case the Tc-99m nuclear scan is the best initial test [ 4 , 10 ]. The Tc-99m nuclear scan is highly specific for abnormal parathyroid tissue, and its sensitivity is more than 90% for a solitary adenoma, but in multiglandular disease its sensitivity is much lower (55%) [ 10 ]. The combination of ultrasound and Computed Tomography (CT) has incremental value in accurately localizing solitary parathyroid adenomas over either technique alone [ 10 ]. Ultrasonography, CT scans, and Magnetic Resonance Imaging (MRI) have all been used for localization, but they have largely been replaced by 99mTc-sestamibi scanning. In cases of recurrent or persistent disease, and particularly in ectopic locations such as the mediastinum, MRI may be useful [ 4 , 9 , 10 ]. Bilateral internal jugular venous sampling for parathyroid hormone determination may be used in patients with nonlocalized pHPT. Subperiosteal bone resorption and osteitis fibrosa cystica are now less commonly seen in pHPT. 
Osteitis fibrosa cystica (brown tumor) was seen in 10–15% of cases in older reports but is now seen rarely, because of the increased incidence of milder forms of the disease [ 7 ]. The pathognomonic feature of the disease is an increased number of giant multinucleated osteoclasts in scalloped areas of the bone surface (Howship's lacunae) and replacement of the normal cellular and marrow elements by fibrous tissue [ 2 ]. When suspecting malignancy, the clinician should be highly alert to other possible causes of bony lesions; brown tumor should be kept in mind in practice [ 13 ]. Multiple maxillofacial brown tumors can be the presenting manifestation of primary hyperparathyroidism [ 14 ]. In this case, decreased bone density and two pathologic fractures, in the neck of the right femur and in the radius and ulna of the right hand, were obvious on the first visit. We should remember that several types of malignancy, presenting in the lung, head and neck, esophagus, breast, and renal cells, can cause paraneoplastic hypercalcemia and mimic the signs and symptoms of parathyroid adenoma [ 15 ]. In the last 30 years, the most contemporary series show an incidence of nephrolithiasis of 20% or less. In most published series of patients presenting with urolithiasis, the incidence of concurrent pHPT is between 2% and 8% [ 16 ]. More than 50% of patients with hyperparathyroidism have renal symptoms manifested by nephrolithiasis and nephrocalcinosis [ 17 ]. Recurrent acute pancreatitis can be the first and sole presentation of undiagnosed pHPT. Muscle weakness, particularly in the proximal extremity muscles, together with progressive fatigue and malaise, may occur in symptomatic pHPT [ 4 ]. Various degrees of depression, nervousness, and cognitive dysfunction may commonly occur in pHPT [ 4 ]. Hypertension is more prevalent among patients with hyperparathyroidism [ 18 ]. Nonspecific renal, neurologic, gastrointestinal, or musculoskeletal signs and symptoms can mislead the physician and cause significant delay in diagnosis. 
As seen in the present case, the patient had complained of those symptoms for more than 18 months and had undergone two previous surgeries without a definite diagnosis during the last six months. Some clinicians advocate surgical therapy in all patients with primary hyperparathyroidism, but the currently accepted indications for surgery are a serum calcium level greater than 11.5 mg/dl, symptomatic disease, and 24-hour urinary calcium excretion of more than 400 mg [ 4 – 6 ]. Our patient satisfied all of these criteria. The standard operation is complete neck exploration with identification of all parathyroid glands and removal of all abnormal glands. In the case of four-gland disease, a 3.5-gland parathyroidectomy must be performed, leaving approximately 50–70 mg of the most normal-appearing gland intact [ 4 , 8 , 11 ]. In this case, the three other parathyroid glands were examined during surgery and were all normal; thus, only the adenomatous gland was removed. Parathyroidectomy reduced the risk of fracture in all patients with pHPT when compared with observation [ 19 , 20 ]. The benefits of parathyroidectomy were reported in all patients with pHPT, regardless of age, calcium level, or bone mineral density [ 20 ]. Although offering parathyroidectomy to all patients with pHPT regardless of age and other variables may have advantages, it should be considered very carefully in older patients [ 21 ]. A recent cost analysis study emphasizes the importance of early parathyroidectomy, demonstrating that parathyroidectomy is cost-saving compared to observation and serial monitoring of patients with pHPT.
Academic Editor: Per Hellman The pattern of clinical presentation of primary hyperparathyroidism (pHPT) has changed dramatically from a severe disease to an asymptomatic condition in Western countries. The story is completely different in Eastern countries, where bone- and joint-related signs and symptoms, such as bone pain and multiple fractures, are common. Imaging and nuclear medicine studies are helpful, especially in patients who are candidates for surgical removal of the abnormal parathyroid gland. Here, we present a 48-year-old man with multiple typical fractures in long bones and a single adenoma in his right inferior parathyroid gland. pHPT is a severe, symptomatic disease with serious complications and high morbidity in Iran. Advanced skeletal disease is the most common pattern of presentation.
2. Case Report A 48-year-old man was referred by an orthopedic surgeon because of suspected generalized osteoporosis and multiple long bone fractures. He had sustained two long bone fractures, in his right upper and lower extremities, during the preceding six months, with a negative history of major traumatic events. The patient's fractures were managed by open reduction and internal fixation surgery ( Figure 1 ). On the first visit, he complained of generalized bone pain and muscle weakness. He also had a history of polyuria and polydipsia, with a clear history of severe renal colic with the passage of large stones. The laboratory data obtained are summarized in Table 1 . Ultrasonographic examination of the kidneys and thyroid gland revealed multiple kidney stones on both sides and a well-defined hypoechoic mass measuring 12 × 13 × 11 mm at the lower pole of the right thyroid lobe. A generalized osteoporotic feature was obvious in the extremities, thoracic and lumbosacral vertebrae, and iliac bones. On the Tc-99m MIBI nuclear scan, a focal and persistent active spot at the lower pole of the right thyroid gland, consistent with a parathyroid adenoma, was detected ( Figure 2 ). A classic neck exploration through a horizontal thyroid incision was performed. A 13 × 14 mm yellowish-brown rubbery mass at the inferior parathyroid region on the right side was detected and excised. Histological study confirmed the diagnosis of parathyroid adenoma. All three other glands were examined grossly during surgery and no abnormality was found. The serum calcium level returned to normal (9.7 mg/dl) within 24 hours postoperatively. The PTH level on the 3rd postoperative day was in the normal range. The patient's complaints of muscle weakness and bone pain disappeared during the first week postoperatively.
Acknowledgment The authors want to express their great appreciation to Fakher Rahim for his valuable help in preparing and revising the paper.
Case Rep Med. 2010 Nov 29; 2010:357029
oa_package/7e/02/PMC3014855.tar.gz
PMC3014856
21209743
1. Introduction Tungiasis is a cutaneous parasitic infection caused by the penetration of a sand flea into the skin of its host. In Europe the disease is almost exclusively seen in travelers returning from endemic areas. With the increase in foreign travel and immigration, the chances of physicians encountering this tropical disease are rising, and an early diagnosis is required, since several complications can ensue if the disease is neglected.
3. Discussion Tungiasis is an ectoparasitosis caused by the penetration of the skin by the gravid female of the sand flea Tunga penetrans . Other popular designations for the parasite include chigga , chica , jiggers , bicho do pé , moukardam , and pico . The flea infestation is associated with poverty and is endemic in some Caribbean, South American, Asian, and African countries [ 1 ]. The current epidemiological situation on the African continent is not well known and is mainly based on anecdotal observations. Recent studies in Nigeria, Cameroon, and Brazil reported similarly high prevalences of tungiasis (45%, 49%, and 51%, resp.) [ 2 – 4 ]. In the study by Collins et al., the prevalence of the disease in a rural Cameroon area was greater in children than in adults. This was attributed mainly to the use of open or damaged shoes; in addition, children work on the farm from a young age, adding greater exposure to infection [ 5 ]. Human infections are linked to important sociocultural factors associated with poverty: the practice of walking barefoot or wearing only sandals, sandy floors inside the house, living in a house made of palm products, lack of personal and soil hygiene, and the free movement of animals (pigs, dogs, and rats) between and into houses. Tunga infection affects many species of domestic animals in addition to humans. The pig has been considered to be the main reservoir, but Tunga penetrans has been reported in cows, dogs, cats, goats, and rats. In Europe the disease is rare, with only one autochthonous case reported [ 6 ]. Most European reports concern travelers returning from endemic areas. As far as we are aware, this is the first case of Tunga penetrans infection imported from Guinea-Bissau. Transmission of the flea occurs by walking barefoot on the sandy soil of disease-endemic regions. 
In the skin, the flea burrows a cavity with the head turned toward the upper dermis, in order to feed on the host's blood, and begins to produce eggs (150–200 eggs over a period of 2-3 weeks) [ 7 ]. Only the end portion of the abdomen protrudes, and the flea can reach a diameter of 1 cm. The penetration is usually asymptomatic, with pain developing only when the flea increases in size. The clinical findings depend on the stage of infestation (Fortaleza Classification). From penetration of the parasite into the skin to healing takes 4–6 weeks [ 8 ]. Infestations usually present with papular or nodular lesions, either single or multiple, white or gray in color, with a small brown central opening corresponding to the posterior portion of the abdomen. Plantar wart-like lesions, as well as pustular, ulcerative, and bullous lesions, have been described [ 9 – 11 ]. Most lesions are localized on the feet and toes, mainly in the subungual and periungual areas, but other ectopic localizations such as the hands, back, buttocks, wrists, perineum, and breasts have been documented [ 12 , 13 ]. Several dermoscopic findings have been reported as useful tools for an early diagnosis. These include a black area with a plugged opening in the center (corresponding to the opening of the exoskeleton), a peripheral pigmented ring (corresponding to the posterior part of the abdomen), gray-blue blotches (corresponding to the eggs in the abdomen) [ 14 ], and a radial crown (a zone of columnar hemorrhagic parakeratosis in a radial arrangement) [ 15 ]. The diagnosis of tungiasis is based on the characteristic aspect of the lesions in a patient who recently visited an area where the disease is endemic, and can be supported by the dermoscopy findings mentioned above. An early diagnosis reduces the possibility of bacterial infection complicated by ulceration, cellulitis, lymphangitis, tetanus, osteomyelitis, gangrene, and spontaneous amputation of the toes [ 16 ]. 
Surgical extraction of the flea under sterile conditions, with a topical antibiotic applied afterwards, is considered the treatment of choice. During the excision, care should be taken to prevent tearing of the flea and to avoid leaving parts of the flea behind, owing to the risk of severe inflammation ensuing. Topical treatments include cryotherapy, electrodesiccation, formaldehyde, and chloroform. A randomized trial showed that topical metrifonate, ivermectin, or thiabendazole can each reduce the number of lesions [ 17 ]. Oral ivermectin has been reported to be effective, although recently Heukelbach et al. reported that its efficacy was practically nil [ 18 ]. Systemic treatment with oral thiabendazole has also been reported to be used successfully in patients with generalized infestations [ 19 ]. Prevention of the infestation is essential and includes the use of closed footwear, the use of repellents, and the immediate extraction of embedded fleas. In endemic areas, replacing sand and mud floors with concrete or tiled floors, as well as avoiding contact with animals that could be infected, are important measures to prevent the spread of infestation [ 3 , 20 ]. In conclusion, tungiasis is a rare disease in nonendemic areas, and with global warming, increased foreign travel, and immigration from endemic areas, the parasite may further disseminate to new geographical areas such as Europe. Thus, physicians should be familiar with this parasitic disease, since an early diagnosis and adequate treatment can prevent serious complications.
Academic Editor: Thomas J. Zgonis Tungiasis is an endemic disease in certain poor areas around the world. Imported infestations in travelers are becoming more frequent and can lead to considerable morbidity. We report the case of a 50-year-old man who returned from a trip to Guinea-Bissau with an infection caused by Tunga penetrans .
2. Case Report A 50-year-old male presented with a fifteen-day history of multiple painful lesions on the soles. He reported a “burning sensation,” pruritus, and progressive pain with marked limitation in walking. He denied any history of insect bite or trauma to the affected area. He had been traveling in Guinea-Bissau, where he had walked barefoot on a beach on which pigs and goats were present. His prior medical history was unremarkable. On physical examination, multiple 1 cm, round, white papular lesions with a small brown-black central core were distributed on the soles and lateral aspects of the third and fifth toes of the right and left foot, respectively (Figures 1 and 2 ). A diagnosis of tungiasis was suspected based on the clinical history and physical findings. Surgical incision of the lesions with deep removal of the content was performed under local anesthesia ( Figure 3 ). The content from the wound was analyzed by optical microscopy and showed multiple Tunga penetrans eggs ( Figure 4 ). The patient was treated with topical fusidic acid cream, with healing of the lesions.
Financial Support and Conflicts of Interest None
Case Rep Med. 2010 Dec 16; 2010:681302
oa_package/3c/30/PMC3014856.tar.gz
PMC3014857
21209744
1. Introduction Coronary-cameral fistula (CCF) is a rare condition in which a communication exists between a coronary artery and a cardiac chamber. Most CCFs are discovered incidentally during angiographic evaluation for coronary vascular disorders. Although most patients are asymptomatic, a CCF can lead to symptoms of angina pectoris [ 1 ]. We report a case of CCF with angina pectoris. Selective coronary arteriography revealed a diffuse CCF involving the left anterior descending artery (LAD) emptying into the left ventricle (LV) and showed significant two-vessel coronary artery stenosis involving the LAD and the circumflex artery (Cx).
3. Discussion Coronary artery fistulas are communications between one of the coronary arteries and a cardiac chamber (CCF) or a major vessel (venae cavae, pulmonary artery, veins, or coronary sinus). CCFs are seen in 0.1% of patients undergoing coronary angiograms. The major sites of origin of the fistula are the right coronary artery (55%), the left coronary artery (35%), and both coronary arteries (5%). The major termination sites are the right ventricle (40%), right atrium (26%), and pulmonary arteries (17%); less frequently, the superior vena cava or coronary sinus; and least often, the left atrium and left ventricle. Coronary artery-left ventricular fistulae are exceedingly rare, with the incidence reported as 1.2% of all coronary artery fistulae [ 2 ]. Cardiac catheterization with coronary angiography remains the gold standard for the diagnosis of coronary artery fistula. It can demonstrate the size, anatomy, number, origination, and termination site of the fistulas. Cardiac echocardiography is also useful for diagnosis. Magnetic resonance imaging and multidetector computed tomography are also used to evaluate the anatomy, flow, and function of a CCF [ 3 – 5 ]. In some cases, the fistula can cause ischemia through the coronary steal phenomenon, which leads to ischemia of the segment of the myocardium perfused by the affected coronary artery [ 1 , 6 ]. Koh et al. demonstrated myocardial ischemia on treadmill testing and Holter monitoring in patients who have a CCF [ 7 ]. Kiuchi et al. reported a CCF leading to acute myocardial infarction through the coronary steal phenomenon [ 8 ]. There is general agreement that symptomatic patients should be treated. All symptomatic patients with coronary artery fistula should undergo closure of the fistula by either surgical or transcatheter approaches. 
Catheter closure techniques have been performed to treat coronary fistulas with devices including detachable balloons, stainless steel coils, controlled-release coils, controlled-release patent ductus arteriosus (PDA) coils, and the Amplatzer PDA plug [ 9 ]. The advantages of the transcatheter approach include less morbidity, lower cost, shorter recovery time, and avoidance of thoracotomy and cardiopulmonary bypass. A hemodynamically significant fistula with a left-to-right shunt may lead to congestive heart failure, pulmonary artery hypertension, and myocardial ischemia due to a steal phenomenon. The hemodynamic consequence of a coronary-cameral fistula depends on the size of the fistula and the communicating chamber. Most coronary artery fistulae are small, usually do not cause any ischemic symptoms, and carry an excellent long-term prognosis [ 10 ]. López-Candales and Kumar demonstrated that patients with a coronary artery-to-left ventricle fistula can be asymptomatic [ 11 ]. We present a case of a fistula originating from the left anterior descending artery and draining into the left ventricle, with two-vessel disease. In view of the stable angina, the absence of a heart murmur, and the lack of objective evidence of coronary artery steal, the fistula itself could be managed conservatively.
Academic Editor: Peter M. Van Ooijen Coronary-cameral fistula (CCF) is an anomalous connection between a coronary artery and a cardiac chamber. Most CCFs are discovered incidentally during angiographic evaluation for coronary vascular disorders. We report a case of CCF with angina pectoris. Selective coronary arteriography revealed a diffuse CCF involving the left anterior descending artery (LAD) emptying into the left ventricle (LV) and showed significant two-vessel coronary artery stenosis.
2. Case Presentation A 59-year-old man was admitted to our hospital complaining of anterior chest pain on exertion. He had a history of hypertension and diabetes. His blood pressure was 130/80 mmHg, and the pulse rate was 72 beats/minute. There was no audible murmur on the chest wall, and his electrocardiogram (ECG) was normal. His exercise treadmill test (ETT) revealed ischemic changes accompanied by chest pain. Therefore, selective coronary angiography was performed via the right femoral approach (Seldinger technique). Coronary angiography showed critical lesions in the LAD and Cx, and the contrast agent entered the left ventricle from the left anterior descending artery (LAD) during diastole (see Figures 1(a) and 1(b) ). We performed successful percutaneous coronary intervention and stenting of the lesions in the LAD and Cx. To our knowledge, this is the first such case reported in the literature. The patient's post-PTCA course was uneventful, with the disappearance of angina and symptoms. One month after discharge, an ETT was performed and demonstrated no ischemic ECG changes.
Case Rep Med. 2010 Dec 6; 2010:362532
oa_package/f8/ac/PMC3014857.tar.gz
PMC3014863
21209745
1. Introduction Optic neuritis in childhood is usually attributed to viral causes (66%) and in most cases is bilateral, affecting both eyes simultaneously [ 1 ]. As for the therapeutic approach, there are no clinical trials with corticosteroids in children. In both unilateral and bilateral attacks with minor visual loss, periodic followup is recommended, whereas in unilateral and bilateral attacks with moderate to severe visual loss, pharmaceutical therapy with intravenous methylprednisolone 15 mg/kg/day for 3 days and periodic followup are advised. Per os therapy is not required. However, recent studies compare intravenous methylprednisolone alone with intravenous methylprednisolone followed by oral corticosteroids, the latter being more effective in preventing relapses [ 2 ]. The visual prognosis is poor compared with that in adults, although the risk of developing multiple sclerosis is lower. Children with unilateral optic neuritis have a better visual prognosis (100% > 20/40) but develop multiple sclerosis (MS) more often than children with bilateral optic neuritis (75%) [ 1 ]. However, other studies conclude that bilateral, and not unilateral, optic neuritis in children more often precedes MS [ 3 ]. Patients who develop MS are older at the onset of optic neuritis (12 years old) than those who do not develop MS (9 years old) [ 1 ]. We present an interesting case of unilateral optic disc edema in a paediatric patient and discuss the concerns involved in the diagnosis and management of similar cases.
3. Discussion Differential diagnosis of optic disc oedema usually includes intracranial tumours, benign intracranial hypertension, hydrocephalus, and optic neuritis due to demyelinating, viral, infectious, and idiopathic causes. However, it can also appear in ischemic optic neuropathy, neoplasmatic infiltration, sarcoidosis, syphilis, and toxoplasmosis ( Figure 4 ). Furthermore, eye conditions such as uveitis, hypotonia, and central retinal vein thrombosis or systemic diseases such as malignant hypertension, severe anaemia, and hypoxaemia can lead to disc oedema. Finally, compression due to Graves' disease, orbital lesions, and trauma can cause optic disc oedema, as can hereditary Leber neuropathy [ 4 ]. Many of the conditions included in the differential diagnosis are extremely rare in children and seldom present as isolated unilateral optic disc oedema. There is still disagreement over whether lumbar puncture is indicated in the differential diagnosis of unilateral optic disc oedema in paediatric patients. In a study of 15 patients with unilateral optic disc oedema, 10 had idiopathic intracranial hypertension. Neuroimaging did not reveal any causes of the oedema, the other optic nerve was normal, and the visual disorders were similar to those of typical bilateral oedema due to idiopathic intracranial hypertension [ 5 ]. Although unilateral optic disc oedema due to idiopathic intracranial hypertension among paediatric patients is a rare condition, the use of lumbar puncture is not clearly indicated. Nevertheless, despite being an interventional diagnostic technique, lumbar puncture can be useful not only to measure intracranial pressure, but also to analyse cellularity, proteins, IgG production, and specific antibodies including Aquaporin 4 Antibodies (AQP4-Ab), which may be important in defining followup and the risk of evolution to multiple sclerosis.
Similar scientific controversy exists regarding the corticosteroid treatment of paediatric patients with unilateral optic disc oedema [ 6 ]. It is well known that longstanding untreated disc oedema results in atrophy and permanent vision impairment [ 4 ]. Patients treated only with per os steroids show a higher relapse risk [ 7 ]. Although patients treated with intravenous corticosteroids show earlier improvement, the long-term results (6 months-1 year) are equal for both groups. There is also a significant relapse risk when intravenous therapy is not followed by per os treatment. In the 1980s, the Optic Neuritis Treatment Trial (ONTT) was developed to evaluate corticosteroid treatment for optic neuritis. This multicenter randomized clinical trial showed that high-dose intravenous methylprednisolone followed by oral prednisone accelerated visual recovery but did not improve the 6-month or 1-year visual outcome compared with placebo, whereas treatment with oral prednisone alone did not improve the outcome and was associated with an increased rate of recurrences of optic neuritis. Another finding was that those who received intravenous corticosteroids followed by oral corticosteroids had a temporarily reduced risk of a second demyelinating event consistent with MS [ 8 ]. On the other hand, considering the side effects of steroids, it is difficult to settle on a therapeutic protocol. In any case, it is the clinician who has to evaluate the quality of life, the potential dangers for the patient, and the visual function, and to make a decision regarding treatment. The followup of paediatric patients with unilateral optic disc oedema should include examination of visual acuity, colour vision, and pupillary reflexes. Visual field examination, visual evoked potentials, and CT/MRI of the brain and orbits should also be undertaken.
The risk of subsequent MS development can now be reliably estimated, and MRI is established as the single most important predictor. However, the Optic Neuritis Study Group found that 25% of patients with optic neuritis and no lesions on brain MRI still developed MS after an initial episode; therefore, long followup by a paediatric neurologist is strongly recommended [ 8 – 10 ]. Concerns that have to be taken into account regarding diagnosis and treatment of similar cases are related to the frequency and type of followup, since in the majority of cases the underlying cause seems to be a viral infection and the risk of developing multiple sclerosis is rather low.
Academic Editor: Mamede de Carvalho Introduction . We report a case of unilateral optic disc edema in a paediatric patient and discuss the concerns involved in diagnosis and management of similar cases. Materials and Methods . A 10-year-old female was referred to our clinic due to progressive visual loss of the LE over a few days. Her visual acuities (VA) were RE 10/10, LE 3/10, and she had a relative afferent pupillary defect, decreased colour vision in her LE, and normal and painless eye movements. Fundoscopy showed a remarkably swollen disc of the LE, and visual field (VF) examination revealed enlargement of the blind spot and the presence of a horizontal inferior papillomacular scotoma. Neurological examination, CT of the brain and orbits, and blood tests were normal. Visual evoked potentials revealed an obstacle in the myelin substance before the optic chiasma of the LE. Results . The patient was treated with intravenous methylprednisolone for 3 days and with oral methylprednisolone for 15 days in progressively diminished daily doses. This led to gradual improvement of VA, colour vision, and visual field and resolution of the optic disc oedema. Discussion . Concerns that have to be taken into account regarding diagnosis and management of similar cases are related to lumbar puncture indications, treatment with corticosteroids, and appropriate followup.
2. Case Presentation A 10-year-old female patient was referred to our clinic due to progressive visual loss of the LE, described as blurred vision over 3 days, without further symptoms. On examination, the visual acuities (VA) were RE 10/10, LE 3/10; she had a relative afferent pupillary defect and decreased colour vision in her LE and normal and painless eye movements. Cycloplegic examination revealed hypermetropia of +0.75 sph in the LE, while fundoscopy showed a remarkably swollen optic disc ( Figure 1 ). Her past medical history included periodical bronchial asthmatic attacks and pneumonia a year earlier, while the family history was unremarkable. Taking the differential diagnosis of optic disc edema into consideration, further investigations were performed. Visual field examination revealed enlargement of the blind spot and the presence of a horizontal inferior papillomacular scotoma in the LE ( Figure 2 ). Neurological examination was thorough and unremarkable. Computed tomography (CT) and magnetic resonance imaging (MRI) of the brain and orbits and all blood tests (full blood count, basic biochemical analysis, CRP, blood coagulation examinations, FT3, FT4, TSH, Anti-TPO, Anti-TG, IgM, IgE, IgG, IgA, RF, ANA, ANCA, and acL) were normal. The existence of adenovirus and RSV antibodies was attributed to cross-reaction or to the use of cortisone and thus was not evaluated. Finally, visual evoked potentials of the occipital lobe revealed prolongation of the P100 latencies after stimulation of the LE (172.8 ms in LE versus 114.3 ms in RE at 17′′, and 151.8 ms in LE versus 101.4 ms in RE at 70′′), indicative of an obstacle in the myelin substance before the optic chiasma. The patient was treated with intravenous methylprednisolone 500 mg daily for 3 days (Solumedrol 500 mg, Pfizer) and afterwards with oral methylprednisolone (Medrol, Pfizer, 1 mg/kg body weight divided in 2 daily doses) for 15 days in progressively diminished doses.
Five days after initiation of treatment (8 days after the onset of symptoms) VA of the LE increased to 7/10, colour vision and pupillary reflexes were recorded as normal, the optic disc swelling improved, and the patient was released. At the followup, ten days later VA of the LE was 10/10, there was a remarkable improvement of the optic disc oedema ( Figure 3 ) and visual field examination was normal. After 20 days the VA remained 10/10 and the optic disc was found absolutely normal in fundoscopy. Written informed consent was obtained from the patient.
CC BY
no
2022-01-13 01:48:13
Case Rep Med. 2010 Dec 9; 2010:529081
oa_package/a0/4c/PMC3014863.tar.gz
PMC3014864
21167081
Background Iodine deficiency disorders (IDD) have been widely recognized as one of the important public health problems, especially in developing countries throughout the world [ 1 ]. Thailand, a developing country in South East Asia, started public health activities for the elimination of IDD endemic areas in 1989. In 1991, a national survey on total goitre prevalence (TGP) covering 3,366,867 children in 20,596 schools was done in 53 provinces throughout the country, with a mean TGP of 15.79% [ 2 ]. Later, in 1994, the World Health Organization (WHO), in collaboration with the United Nations International Children's Emergency Fund (UNICEF) and the International Council for Control of Iodine Deficiency Disorders (ICCIDD), produced a guidance document on IDD surveillance indicators, and salt iodization was selected as a strategy to control and eliminate IDD [ 3 ]. Since then, Thailand has emphasized various kinds of strategic planning for the elimination of IDD endemic areas, such as iodization of drinking water for villagers, administration of iodine capsules every 6-10 months for populations in remote areas where transportation was the main obstacle to the enrichment of iodine in communities, and iodization of edible salt at 30 ppm for daily household use. In 2006, a survey of the use of iodized salt in 819 households was done in iodine deficiency endemic areas in Udon Thani province and found that only 10.26% of the households consumed iodized edible salt that met the standard for edible iodized salt (not less than 30 ppm of iodine) declared by the Thailand FDA [ 4 ]. In addition, the TSH index for monitoring IDD under the WHO/UNICEF/ICCIDD guideline showed that during the years 2003 - 2006 the percentages of neonates having TSH >5 mU/L were 13.54%, 15.28%, 21.55%, and 19.56%, respectively [ 5 ].
These TSH indices showed that Thailand was exposed to iodine deficiency and also indicated that the current public health activities had not achieved the goal of eliminating IDD. A new pilot programme was proposed to the public health sector to introduce a more stable form of iodine into daily nutrition through biologically active iodo-organic compounds of animal and plant origin in the natural food chain, which would be sustainable in the long term. The programme was designed with the aim of increasing the iodine content of eggs and vegetables. National statistics on egg consumption in Thailand during 2006 and 2007 were 9,789 and 9,376 million eggs, respectively, or 142 and 150 eggs per person per year [ 6 ]. Thus, hen eggs would be one of the reliable sources of iodine in daily nutrition. This new programme would be implemented under the Sufficient Economy Philosophy introduced by His Majesty the King of Thailand, which guides all Thai people toward a better quality of life through socio-economic development in moderation, reasonableness, and self-immunity, providing sufficient protection from both internal and external impacts and supplying long-term necessities from the level of the family and the community up to the nation. It was believed that, with this new concept of enriching the natural food chain with biologically active iodo-organic compounds under the Sufficient Economy Philosophy, the new programme for the elimination of iodine deficiency endemic areas would be successfully implemented through cooperation with all concerned sectors throughout the country.
Methods The design of the study was to determine the iodine status of communities using urine iodine excretion before and after the implementation of iodine enrichment in the natural food chain. In 2006, 858 first-morning mid-stream urine samples were collected from child-bearing-age women in 5 districts of Udon Thani province in order to establish the base-line iodine status of the areas before implementing the programme. The urine specimens were transported to the laboratory immediately in ice boxes and kept frozen at -20°C until analysis. Urine iodine was determined by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) with tellurium as the internal standard. During the assay, internal quality control samples prepared from urine iodine reference standard code 2670 from the US National Institute of Standards and Technology (NIST) were inserted after every ten samples of the running assay [ 7 , 8 ]. In 2008, the model programme was operated in cooperation with Napu Sub-district Municipality, and the experimental design was approved by the Napu Sub-district Municipal Committee in compliance with the Helsinki Declaration. The proposed programme on iodine enriched eggs, in the form of biologically active iodo-organic compounds of animal origin, was started by selecting two neighbouring villages in the base-line study areas: Ban Nong Nok Kean and Ban Kew in Napu sub-district of Udon Thani province. A hen egg farm located in these two village areas was selected as a model farm to supply iodine enriched eggs to the communities. The base-line iodine content of eggs from regular feeding at this model farm was established by collecting 30 fresh eggs weighing between 50-55 g and determining their iodine content according to TCM-040, based on the Compendium of Methods for Food Analysis [ 9 ].
Then, the programme was initiated by replacing the regular feed with an iodine enriched feeding formula prepared by adding iodine as potassium iodide (KI) to yield a final concentration of 4 mg of iodine per kilogram of poultry feed for the farm's daily feeding process. The hens consumed the iodine poultry feed at 120-130 g per hen per day. After one month of feeding, 30 eggs were collected and sent to the laboratory for determination of iodine content. The biologically active form of iodine in the iodine enriched eggs was evaluated by determining the urine iodine excretion of volunteers before and after consuming one hard-boiled iodine enriched egg, as one item of their breakfast, continuously for five days; no other iodine enriched food was taken in the volunteers' meals during these five days. The first-morning mid-stream urine from each volunteer was collected, and the urine specimens were kept frozen at -20°C until analysis. There were 124 women volunteers from these two villages, aged between 20 - 63 years. The study details were explained to all village volunteers, and consent forms were given to the volunteers for their agreement to participate, with an included statement that they could drop out of the study at any time. The iodine content of urine was determined using the ICP-MS procedure. The median urine iodine was calculated, and ANOVA was used for statistical analysis of the urine samples from the village volunteers before and after eating iodine enriched eggs.
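The median and ANOVA comparison described above can be sketched as follows. This is a minimal illustration: the urine iodine values below are hypothetical, not the study's raw data, and the F statistic is the standard one-way ANOVA form (for two groups it is equivalent to a t-test).

```python
from statistics import mean, median

def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    all_vals = [x for g in groups for x in g]
    grand_mean = mean(all_vals)
    k, n = len(groups), len(all_vals)
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# hypothetical urine iodine values (ug/dL) before and after egg consumption
before = [5.1, 6.8, 7.2, 8.0, 6.5, 7.4]
after = [14.2, 18.5, 21.0, 13.9, 16.6, 19.3]
print(round(median(before), 2), round(median(after), 2))
print(round(one_way_anova_F([before, after]), 2))
```

With these illustrative values the group medians differ markedly and the F statistic is large, mirroring the highly significant before/after difference the study reports.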
Results Table 1 showed that in August 2006 the median urine iodine of 858 child-bearing-age women volunteers was 5.16 μg/dL (range 3.26 - 7.68 μg/dL across all 5 districts), which meant that the villagers had an iodine deficiency condition even though the campaign from the provincial health office and district health centers emphasized their public health education programme on the use of iodized salt. Table 2 showed the iodine content of hen eggs produced by the model farm. It was found that the average iodine content of eggs from the regular feeding formula was 25.31 μg per egg (or 75.96 μg per 100 grams of fresh weight), whereas the iodine enriched feeding formula yielded a mean iodine content in the range of 93.57 - 97.76 μg per egg (or 182.67 - 184.58 μg per 100 grams of fresh weight). The production batches that were provided to the village volunteers for their breakfast had an iodine content in the range of 90.97 - 104.14 μg per egg for egg size #2 (60 - 65 g per egg), and 87.76 - 98.18 μg per egg for egg size #3 (55 - 60 g per egg). Table 3 showed the level of urine iodine of the village volunteers. The base-line median urine iodine before consuming the iodine enriched eggs was 7.04 μg/dL for the 65 volunteers in Ban Nong Nok Kean (SD = 8.54, n = 65) and 7.00 μg/dL for the 59 volunteers in Ban Kew (SD = 7.15, n = 59). The median urine iodine from all 124 volunteers in these two villages was 7.03 μg/dL (SD = 7.89, n = 124), which indicated a mild iodine deficiency condition in these two village areas. Table 4 showed the urine iodine levels of the village volunteers after consuming a hard-boiled iodine enriched egg continuously for five days as one item of their breakfast meals. The median urine iodine was 13.95 μg/dL for the 55 volunteers of Ban Nong Nok Kean (SD = 10.76, n = 55) and 20.76 μg/dL for the 57 volunteers of Ban Kew (SD = 13.63, n = 57).
The median urine iodine from the volunteers in these two villages after having iodine enriched eggs was 16.57 μg/dL (SD = 12.56, n = 112). Table 5 demonstrated the comparison, by ANOVA, of the urine iodine content of the women volunteers from the two villages in the study before and after consuming iodine enriched eggs. The result showed a highly significant difference (P < 0.001) in the urine iodine content of volunteers before and after consuming iodine enriched eggs.
Discussion Before the implementation of the iodine enrichment programme in the natural food chain of the study areas, the median urine iodine from 858 volunteers (Table 1 ) in 5 districts of Udon Thani province was 5.16 μg/dL (range 3.26 - 7.68 μg/dL). This demonstrated that the community urine iodine level was at a mild iodine deficiency status [ 10 ], even though there was an active campaign in the region by the provincial health office to encourage consumption of edible iodized salt. The iodine enriched feeding formula was successfully prepared and yielded an approximately 3.8-fold higher iodine content in the produced eggs than the regular formula (Table 2 ). Thus, the iodine enriched feeding formula could be used instead of the regular one, and the cost of the iodine added to the formula was only 0.33% of the cost per kg of poultry feed. This would not significantly affect the cost of hen egg production. The urine iodine survey before the dietary iodine enrichment in the two villages of the study area, Ban Kew and Ban Nong Nok Kean in Napu sub-district, Udon Thani province, gave a median of 7.03 μg/dL (Table 3 ), which confirmed that these two communities were still in a condition of mild iodine deficiency. These results also showed that during the period of 2006 - 2008 the diets consumed by the community still lacked iodine and the concerned public health sectors had not created sufficient awareness to eliminate this iodine deficiency crisis. For the five-day study of continuously consuming an iodine enriched egg as one breakfast item, 124 volunteers participated at the beginning of the study and 14 volunteers dropped out at a later stage. There were 53 volunteers from Ban Nong Nok Kean village and 57 volunteers from Ban Kew village who participated throughout the study programme.
By using ANOVA for statistical assessment, the result showed that the median urine iodine levels before and after consumption of iodine enriched eggs were highly significantly different at P < 0.001 (Table 5 ). This innovative process produced a remarkable increase in the urine iodine level, from iodine deficiency (median urine iodine 6.87 - 7.11 μg/dL) to the optimal iodine level (median urine iodine 13.09 - 20.76 μg/dL), in the villagers of the study areas. Urinary iodine excretion could also be used as a valid marker of recent dietary iodine intake as well as one of the key indicators for monitoring the IDD situation at the community level and the sustained adherence to public health efforts to eliminate IDD.
Conclusions In summary, the pilot model farm for the production of iodine enriched eggs supplied to the neighbouring communities was successfully developed, resulting in a self-supporting system of nutritional iodine enrichment for the communities. The WHO/UNICEF/ICCIDD has recommended that the daily iodine intake for various age groups be in the range of 90 - 200 μg. Since eggs can be consumed as a daily food product in every Thai family, it would be possible to supply iodine enriched eggs as a new daily dietary iodine source to all Thai communities, given that the cost of adding iodine as potassium iodide or potassium iodate to poultry feed is very low and does not significantly increase the production cost of eggs from the farm. In addition, the iodine enriched poultry feed formula could be prepared at the farms themselves and does not require any complicated equipment for the iodine enrichment steps. This innovative and inexpensive strategy could easily be applied to all remote areas throughout the country within the community programme of the Sufficient Economy Concept to overcome the problem of endemic iodine deficiency in Thailand.
Background Evidence showed that iodine deficiency endemic areas have been found in every province of Thailand. Thus, a new pilot programme for the elimination of iodine deficiency endemic areas at the community level was designed in 2008 by integrating the Sufficient Economy life-style concept with iodine biofortification of nutrients for community consumption. Methods A model community hen egg farm was selected in an iodine deficiency endemic area in the north-eastern part of Thailand. The process for the preparation of high-iodine-content enriched hen feed was demonstrated to the farm owner, with technical transfer, in order to ensure long-term sustainability for the community. The iodine content of the produced iodine enriched hen eggs was determined, and the iodine status of volunteers who consumed the iodine enriched hen eggs was monitored using urine iodine excretion before and after the implementation of iodine enrichment at the model farm. Results The iodine content of eggs from the model farm was 93.57 μg per egg for eggs weighing 55 - 60 g and 97.76 μg per egg for eggs weighing 60 - 65 g. The biologically active iodo-organic compounds in the eggs were tested by determining the base-line urine iodine of the volunteer villagers before and after each volunteer consumed a hard-boiled iodine enriched egg at breakfast for a continuous five-day period: 59 volunteers in Ban Kew village and 65 volunteers in Ban Nong Nok Kean village. The median base-line urine iodine levels of the volunteers in these two villages before consuming the eggs were 7.00 and 7.04 μg/dL, respectively. After consuming iodine enriched eggs, the median urine iodine rose to the optimal level: 20.76 μg/dL for Ban Kew and 13.95 μg/dL for Ban Nong Nok Kean.
Conclusions The strategic programme for iodine enrichment of the food chain with biologically active iodo-organic compounds of animal origin can be an alternative method of fortifying iodine in the diet for iodine deficiency endemic areas at the community level in Thailand.
Competing interests The authors declare that they have no competing interests. Authors' contributions WC was responsible for the conception and design of the model programme, interpretation of data, field work, and drafting and approval of the manuscript. PS participated in designing the programme with respect to iodine enrichment of animal feeds, field work, data analysis, and drafting the manuscript. PT participated in the analysis of urine samples by ICP-MS. JW participated in field work and specimen collection. All authors read and approved the final manuscript.
Acknowledgements The authors would like to thank Dr. Saksom Attamangkune for technical assistance on the aspect of iodine enriched eggs; the support from the staff of the Department of Medical Sciences, Kasetsart University, and Napu Sub-district Municipality is also appreciated.
CC BY
no
2022-01-12 15:21:36
Nutr J. 2010 Dec 20; 9:68
oa_package/ff/ee/PMC3014864.tar.gz
PMC3014865
21122113
Background While excess energy intake and declining energy expenditure are clearly important contributors, individual susceptibility to obesity is also strongly influenced by genetic factors. Twin, adoption, and family studies have indicated that 40-70% of inter-individual variation in body mass index (BMI) is heritable [ 1 , 2 ]. A compendium of evidence for the genetic bases of obesity has been accrued from single-gene mutation studies, Mendelian inheritance patterns, transgenic and knockout murine models, animal and human quantitative trait loci (QTL), candidate-gene association studies, and genome scan linkages, and has been incorporated into the Obesity Gene Map database [ 3 ]. More recently, a number of genome-wide association studies (GWAS) have demonstrated associations of single-nucleotide polymorphisms (SNPs) with qualitative and quantitative indices of adiposity in several populations [ 2 , 4 - 10 ]. A combination of independent studies and meta-analysis of existing GWAS data has implicated a total of 18 genetic loci as relevant for body weight regulation to date [ 11 ]. In addition to DNA sequence variants, genetic influences are also manifested through differences in gene transcription, leading to differential messenger RNA levels. While such differences might be expected to occur in biologically relevant tissues (muscle and adipose tissue in obesity, for example), several recent studies have demonstrated an alteration in the peripheral blood transcriptome in diseases of non-hematologic origin. These include disorders such as chronic fatigue syndrome, schizophrenia, and colon cancer [ 12 - 17 ]. Additionally, the blood transcriptome has also been found to be responsive to diverse environmental and socio-economic stimuli, including ionizing radiation in cancer therapy, benzene exposure, and socio-economic status [ 18 - 21 ].
These findings raise the intriguing possibility that blood transcriptome profiles might provide a valid biological readout for otherwise hard to study disease processes in humans and additionally generate information of high predictive and diagnostic content. In line with this argument, we postulated that differences in transcript abundance might also occur in blood from obese subjects compared to lean subjects, as a consequence of either pre-existing genetic variations, or as an adaptive response to obesity, independent of the genetic background. To test this hypothesis, we have carried out transcriptional profiling of peripheral blood from obese subjects and well-matched lean controls and conducted enrichment analysis to identify biological pathways that are preferentially associated with obesity. Our study demonstrates significant gene expression differences in blood from obese subjects compared to lean controls, particularly along the lines of differential expression of genes in key metabolic pathways regulating cell survival, protein synthesis and energy harvest. These findings are important on three levels. First, our results demonstrate the importance of blood as a biologically informative tissue in the elucidation of the obese state. Second, as differences in gene expression are often driven by sequence variants in gene regulatory regions, our study provides a mechanism for the selection of obesity-associated candidate genes for the determination of possible regulatory sequence variants. Finally, the identification of adiposity related gene expression differences in a clinically accessible tissue such as blood leads the way for the determination of biomarkers of weight regulation that could be implemented in a clinical setting.
Methods Study Subjects Twenty consecutive obese subjects enrolled in the Ottawa Hospital Weight Management Program at the Ottawa Hospital, Ottawa, with a body mass index (BMI) of 30-50 kg/m2, were recruited for study. All subjects were of Northern European White genetic ancestry. Patients were excluded on the basis of medical conditions possibly affecting whole blood gene expression, including out-of-normal-range thyroid indices (TSH, free T3) at week 1 or week 13, diabetes mellitus treated with insulin or oral hypoglycemic agents, cigarette smoking, congestive heart failure, obstructive sleep apnea, or active malignancy. Patients treated with weight-altering medications, including tricyclic antidepressants, paroxetine, mirtazepine, lithium, valproate, gabapentin, typical and atypical antipsychotics, fluoxetine in doses greater than 20 mg, bupropion, topiramate, systemic glucocorticoids, and weight management drugs, were also excluded. Blood samples were collected at baseline prior to initiation of weight loss therapy. Twenty lean subjects of the same genetic ancestry (Northern European White), with a BMI ≤ the 10th percentile for age and sex and no prior history of having had a BMI > the 25th percentile for more than a 2-year consecutive period, were recruited from the Ottawa community. Lean subjects were excluded if they had any medical conditions affecting weight gain such as hyperthyroidism, anorexia nervosa, bulimia, major depression, or malabsorption syndromes. BMI for obese and lean subjects was categorized according to the population percentiles for age and sex using the Canadian Heart Health Survey data for subjects over the age of 18 years (data on file; Health Canada). The study protocol was approved by the Human Research Ethics Committees of the Ottawa Hospital and the University of Ottawa Heart Institute, and informed consent was obtained from all participants prior to their enrolling into the program.
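The obese-arm BMI inclusion window above can be illustrated with a minimal sketch. The function names and example values are hypothetical; the percentile-based lean criteria would additionally require the Canadian Heart Health Survey reference tables, which are not reproduced here.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def eligible_obese(weight_kg: float, height_m: float) -> bool:
    # Study's obese inclusion window: BMI 30-50 kg/m^2.
    return 30 <= bmi(weight_kg, height_m) <= 50

print(round(bmi(100, 1.70), 1))   # 34.6
print(eligible_obese(100, 1.70))  # True
print(eligible_obese(60, 1.70))   # False
```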
Sample preparation for transcriptome analysis 2.5 ml of fasting whole blood was drawn from study subjects by standard venipuncture and directly transferred to PAXgene blood RNA tubes (Qiagen, Santa Clara, CA). PAXgene tubes were processed at designated times after phlebotomy by the PAXgene protocol. Isolation of total RNA was accomplished according to the manufacturer's instructions. Prior to further processing, RNA quality was ascertained by electropherograms on the Agilent 2100 Bioanalyzer. Extracted RNA from all samples was stored at -70°C until processed for microarray hybridizations. Microarray hybridization and data analysis Hybridization of 100 nanograms of labeled cRNA from each sample was carried out on Affymetrix GeneChip ® Human Genome U133 Plus 2.0 Arrays according to the manufacturer's instructions. Microarray data was deposited in the Gene Expression Omnibus data repository (accession number GSE18897). Gene expression signals were generated from hybridized and scanned Affymetrix arrays by the GC-RMA algorithm [ 54 ]. Probesets with a normalized average expression level of less than 50 units in all of the tested groups were eliminated from further analysis. Significance of differential gene expression was ascertained via the signal-to-noise algorithm from the GenePattern Comparative Marker Selection module [ 22 ], employing a permutation-based t-test and false discovery rate (FDR) control. The Signal-to-Noise feature selection method is a variation of the more commonly used t-test statistic and looks at the difference of the means in each of the classes scaled by the sum of the standard deviations: Sx = (μ0 - μ1)/(σ0 + σ1), where μ0 and σ0 are the mean and standard deviation of class 0, and μ1 and σ1 those of class 1. The Signal-to-Noise statistic penalizes genes that have higher variance in each class more than those genes that have a high variance in one class and a low variance in another.
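The signal-to-noise statistic can be written directly from its definition. This is a simplified sketch: production implementations such as GenePattern's additionally apply a floor to very small standard deviations, which is omitted here, and the example expression vectors are hypothetical.

```python
from statistics import mean, stdev

def signal_to_noise(class0, class1):
    """Sx = (mu0 - mu1) / (sigma0 + sigma1) for one gene across two classes."""
    return (mean(class0) - mean(class1)) / (stdev(class0) + stdev(class1))

# hypothetical expression values for one gene in two sample classes
print(round(signal_to_noise([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]), 3))  # 0.667
```

Because the denominator sums both within-class standard deviations, a gene that is noisy in both classes is penalized more heavily than one that is noisy in only one class, matching the description above.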
Pathway analysis Bioinformatic pathway analysis was conducted with the Gene Set Enrichment Analysis (GSEA) software package [27,55]. GSEA is a computational method to detect statistically significant, concordant differences in a priori defined gene sets (pathways) between two biological states. GSEA accomplishes this by calculating a weighted Kolmogorov-Smirnov statistic, adjusted for gene-set size (the Normalized Enrichment Score, NES), for each gene set, based on the over-representation of members of a gene set towards the top or bottom of a list of genes ranked by the strength of their correlation (positive or negative) with one of the two phenotypes. The statistical significance of the NES is estimated by a permutation test based on random shuffling of the phenotype or tag (gene) labels. GSEA addresses the problem of multiple testing (testing hundreds of gene sets simultaneously) by calculating a false discovery rate and a family-wise error rate on the ES p-values. Quantitative real time polymerase chain reaction (RT-PCR) Whole blood was collected in PAXgene™ blood tubes (Qiagen, Santa Clara, CA) and total RNA was extracted using the PAXgene™ blood kit. All RNA was treated with DNase I to remove genomic DNA contamination. The RNA was converted to cDNA in a 96-well microtiter plate on an ABI PRISM 7700 Sequence Detector System (Applied Biosystems, Foster City, CA) using the Applied Biosystems High Capacity cDNA archive kit. Gene expression was assayed on the Applied Biosystems 7900 using TaqMan® RT-PCR technology. A global median absolute deviation (MAD) was computed from the gene expression values by taking the median deviation for each set of technical replicates, using either the Ct values or log2 calculated abundances. Outliers were defined as having deviations greater than five times the global MAD. Following technical and biological outlier identification, the data were normalized using reference housekeeper genes.
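One plausible reading of the MAD-based outlier rule is sketched below. The helper function and the toy Ct values are hypothetical; the exact computation used in the study may differ in detail (e.g. in how the per-replicate deviations are pooled into the global MAD).

```python
import statistics

def flag_outliers(replicate_sets, n_mad=5.0):
    """Flag technical-replicate Ct values whose deviation from their
    replicate-set median exceeds n_mad times a global MAD.

    replicate_sets -- list of lists, one inner list of Ct values per
                      set of technical replicates
    Returns a parallel list of lists of booleans (True = outlier).
    """
    # deviation of each measurement from its replicate-set median
    deviations = []
    for reps in replicate_sets:
        med = statistics.median(reps)
        deviations.append([abs(ct - med) for ct in reps])
    # global MAD: median over the per-set median deviations
    global_mad = statistics.median(statistics.median(d) for d in deviations)
    cutoff = n_mad * global_mad
    return [[dev > cutoff for dev in d] for d in deviations]

# Toy data: the second replicate set contains one aberrant Ct value.
flags = flag_outliers([[20.0, 20.1, 20.2], [25.0, 25.1, 30.0]])
```

With these toy values the aberrant 30.0 Ct measurement is the only value flagged.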
The mean Ct value of all reference genes across all samples ("global mean Ct") was subtracted from the mean Ct value of all reference genes within each sample ("sample reference mean") to determine a normalization factor for each sample. The normalization factor for a given sample was then subtracted from its Ct value, resulting in a normalized Ct. All Ct values were then converted to log2 abundances. Class Prediction from gene expression Class prediction (obese or lean) from gene expression data was carried out through the WEKA Explorer and WEKA Experimenter applications. First, 183 genes belonging to the 3 obese-upregulated pathways (ribosome, apoptosis and oxidative phosphorylation) were used to identify a subset of maximally informative features (genes) for classifier testing, while removing irrelevant or redundant features that could negatively impact algorithm performance. Feature selection was accomplished by two independent 'filtering-based' algorithms (Information Gain and Cfs Subset Evaluator), using 10-fold cross-validation for each method [56,57]. We did not use 'wrapper-based' feature selection because we wanted the selected features to be independent of the classification algorithms [58]. Both procedures resulted in a list of genes that were then ranked based on their importance in each feature selection method. From these ranked lists, we selected a total of 11 genes that were ranked within the top 20 genes in both lists. Gene expression signals for these 11 genes were then used as input to 4 different classifiers (Naïve Bayes, Logistic Regression, Random Forests and ZeroR), representing 4 different algorithmic approaches (Bayesian, regression, decision trees and rule-based, respectively), which were independently tested for predictive performance (Additional File 13) [59,60]. Classifier-specific parameters were kept at the defaults provided in WEKA Experimenter.
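The reference-gene Ct normalization described at the start of this section can be sketched as follows. The sample names are hypothetical, and the final log2 conversion shown (taking the negative of the normalized Ct, since abundance is proportional to 2^-Ct) is one standard convention assumed here.

```python
def normalize_cts(ref_cts, gene_cts):
    """Reference-gene normalization of qPCR Ct values.

    ref_cts  -- {sample: [Ct of each reference gene in that sample]}
    gene_cts -- {sample: raw Ct of the gene of interest}
    Returns {sample: log2 abundance}, where log2 abundance = -normalized Ct.
    """
    # per-sample mean Ct of the reference genes ("sample reference mean")
    sample_means = {s: sum(v) / len(v) for s, v in ref_cts.items()}
    # mean of the reference genes over all samples ("global mean Ct")
    global_mean = sum(sample_means.values()) / len(sample_means)
    out = {}
    for s, ct in gene_cts.items():
        factor = sample_means[s] - global_mean   # normalization factor
        out[s] = -(ct - factor)                  # normalized Ct -> log2 abundance
    return out

# Hypothetical example: two samples, two reference genes each.
abund = normalize_cts({"s1": [20.0, 22.0], "s2": [21.0, 23.0]},
                      {"s1": 25.0, "s2": 25.0})
```

In this toy example, sample s1 has a lower reference mean than s2, so its equal raw Ct translates into a lower estimated abundance after normalization.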
Each classifier used 66% of the samples for training (from a total of 34 obese plus lean subjects) and the remaining 33% for testing (chosen at random for each round), for a total of 100 iterations. For each classifier, the true positive, true negative, false positive, and false negative rates were calculated (average and standard deviation over the 100 iterations) and the values used to compare individual classifiers for their predictive performance.
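The repeated random-split evaluation just described can be sketched generically as below. The harness and the `fit_predict` interface are hypothetical stand-ins for the WEKA machinery actually used; any classifier exposing a train-then-predict call could be plugged in.

```python
import random

def evaluate(fit_predict, X, y, n_iter=100, train_frac=0.66, seed=0):
    """Repeated random-split evaluation: train on ~66% of samples,
    predict the rest, and average TPR/TNR/FPR/FNR over n_iter rounds.

    fit_predict -- callable(train_X, train_y, test_X) -> predicted labels
    """
    rng = random.Random(seed)
    rates = [0.0, 0.0, 0.0, 0.0]          # TPR, TNR, FPR, FNR
    n = len(y)
    for _ in range(n_iter):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(train_frac * n)
        train, test = idx[:cut], idx[cut:]
        pred = fit_predict([X[i] for i in train],
                           [y[i] for i in train],
                           [X[i] for i in test])
        tp = sum(p == 1 and y[i] == 1 for p, i in zip(pred, test))
        tn = sum(p == 0 and y[i] == 0 for p, i in zip(pred, test))
        fp = sum(p == 1 and y[i] == 0 for p, i in zip(pred, test))
        fn = sum(p == 0 and y[i] == 1 for p, i in zip(pred, test))
        pos, neg = (tp + fn) or 1, (tn + fp) or 1   # guard empty classes
        for j, v in enumerate((tp / pos, tn / neg, fp / neg, fn / pos)):
            rates[j] += v / n_iter
    return rates
```

A baseline such as ZeroR (predict the majority class for every test sample) can be passed as `fit_predict` to obtain the reference performance that any useful classifier must exceed.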
Results Phenotypic characterization of study subjects Demographic and phenotypic characteristics of the subjects included in the current study are shown in Table 1. The obese and lean subjects showed statistically significant differences (at the p < 0.05 level) in almost all metabolic parameters tested, with the exception of cholesterol, LDL-cholesterol and thyroid stimulating hormone status. Also, levels of glycated haemoglobin (HbA1c), insulin and fasting glucose were statistically significantly different but within normal clinical ranges in both groups. There were 53% (9/17) and 70% (12/17) females in the obese and lean groups, respectively. Both cohorts were closely matched for age and hormonal status (6/12 lean and 5/9 obese women were postmenopausal). Ascertainment of data quality We ascertained the overall quality of the whole-genome expression profiling signals by comparing the Affymetrix microarray-generated expression patterns of a subset of 61 genes (with a 20% or greater change in expression between obese and lean cohorts) to expression signals generated by real-time, quantitative PCR (Taqman). The genes selected cover a range of approximately 7 logs (base 2), representing over 100-fold differences in the magnitude of gene expression on Affymetrix microarrays (from an average log2 signal of 4.66 for protein tyrosine phosphatase, receptor type, S to an average log2 signal of 11.35 for the RAB31 gene in the obese cohort). The results are shown in Figure 1. In ~75% of the genes tested (45/61), the direction of gene expression change between obese and lean subjects was in agreement between the Affymetrix and Taqman platforms, suggesting high reproducibility of gene expression data between the two approaches. Additionally, analysis of muscle- and adipose-specific marker gene expression demonstrated no evidence of contamination in the study samples (Additional File 1).
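The cross-platform agreement figure (45/61 genes, ~75%) is simply the fraction of genes whose obese-vs-lean change has the same sign on both platforms. A minimal sketch, with invented log fold-change values:

```python
def direction_agreement(fold_changes_a, fold_changes_b):
    """Fraction of genes whose obese-vs-lean expression change has the
    same sign on two platforms (e.g. microarray vs. Taqman).

    Inputs are parallel lists of signed log fold changes, one per gene.
    """
    same = sum((a > 0) == (b > 0)
               for a, b in zip(fold_changes_a, fold_changes_b))
    return same / len(fold_changes_a)

# Hypothetical values: gene 3 disagrees in direction between platforms.
agreement = direction_agreement([1.2, -0.5, 0.3, -0.1],
                                [0.8, -0.2, -0.4, -0.3])
```

Only the sign of the change matters for this concordance measure, not its magnitude.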
Principal components analysis of gene expression data We performed multivariate principal components analysis to determine whether blood gene expression signals were capable of distinguishing between the obese and lean subjects. Figure 2 shows a scatterplot of the first two principal components based on gene expression profiles from 17 obese and 17 lean subjects. Analysis of the principal component model performance indicated that 27% of the total variance in gene expression was modelled in the first principal component (R2X), with a cross-validated prediction of 22.4%. The cross-validation results indicate that the variability captured in the first component is statistically greater than the significance limit of 2.9% (Additional file 2). Identification of differentially expressed genes Genes showing differential expression between the obese and lean subjects were identified via the Comparative Marker Selection module in GenePattern [22], using the signal-to-noise algorithm for ranking genes. Permutation testing was performed to compute the significance (nominal p-value) of the rank assigned to each gene. A false discovery rate (FDR) was also calculated to control for multiple testing. A total of 12127 probesets were detected above background (set to 50 units), among which 374 probesets were overexpressed (2-fold or greater) and 75 probesets were underexpressed (2-fold or greater) in the obese samples compared to the lean samples. The results of the differential gene analysis are presented in Additional Files 3 and 4. Inspection of the gene list showed that a majority of the genes upregulated in the obese subjects were genes known to be selectively expressed in erythrocytes/reticulocytes. These included genes such as carbonic anhydrase, ferrochelatase, synuclein, glycophorin B, etc.
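The unsupervised projection underlying Figure 2 can be reproduced in outline with a standard PCA via singular value decomposition. This is a generic sketch, not the specific multivariate software pipeline used in the study:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA of a samples-x-genes matrix via SVD of the centered data.
    Returns (component scores, fraction of variance per component)."""
    Xc = X - X.mean(axis=0)                       # center each gene
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)                 # variance explained
    return Xc @ Vt[:n_components].T, var[:n_components]

# Toy matrix: 4 samples x 2 genes, all variance along one direction.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
scores, var = pca_scores(X)
```

Plotting the first two columns of the returned scores against each other yields a Figure 2-style scatterplot, and the first entry of the variance vector corresponds to the proportion of variance captured by the first component (the R2X value quoted above).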
This finding is consistent with previous observations of higher red blood cell counts (hematocrit) in obesity [23-26] and provides evidence for the expansion of transcriptionally active reticulocytes in obesity. Conversely, several genes related to immune function showed reduced expression in the obese subjects. Pathway analysis of gene expression differences between lean and obese subjects The transcriptome data were next subjected to bioinformatic pathway analysis by the Gene Set Enrichment Analysis (GSEA) algorithm [27]. The values for the GSEA algorithmic parameters used in the current study are indicated in Additional File 5, and details of the GSEA algorithm are explained in Materials and Methods. Pathway analysis was conducted either with the Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathway database [28], or with a user-created custom database consisting of pathways drawn from several sources (Additional File 6). Pathways were evaluated by their normalized enrichment score (NES), nominal p-values (permuted) and false discovery rates, as described in [29]. KEGG Pathway analysis Enrichment analysis of gene expression profiles against KEGG pathways identified 5 pathways at the p(permuted) < 0.05 level (Additional File 7). Notable among them were the 'apoptosis', 'ribosome', and 'oxidative phosphorylation' pathways. The pathway enrichment plots and expression profiles of a subset of genes contributing significantly to the enrichment of these 3 pathways are collectively shown in Figure 3. A number of genes, including apoptotic protease activating factor 1, baculoviral IAP repeat containing 2, caspase 7, Fas, interleukin 1 beta, interleukin 1 receptor associated kinase 4, etc., contributed to the core enrichment of the 'apoptosis' pathway in the obese subjects (Additional File 8).
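To make the enrichment score underlying these results concrete, a simplified, unweighted running-sum sketch is shown below. The real GSEA statistic weights hits by correlation strength and normalizes the score for gene-set size to obtain the NES; this toy version only illustrates the Kolmogorov-Smirnov-style running sum.

```python
def enrichment_score(ranked_genes, gene_set):
    """Simplified (unweighted) running-sum enrichment score.

    ranked_genes -- all genes ordered by correlation with the phenotype
    gene_set     -- the pathway members being tested for enrichment
    Walk down the ranked list, stepping up at each pathway member
    ("hit") and down otherwise ("miss"); return the extreme deviation.
    """
    hits = set(gene_set)
    n, nh = len(ranked_genes), len(hits)
    hit_step, miss_step = 1.0 / nh, 1.0 / (n - nh)
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += hit_step if g in hits else -miss_step
        if abs(running) > abs(best):
            best = running
    return best

# Toy example: both pathway members sit at the top of the ranked list,
# giving the maximal score of 1.0.
es = enrichment_score(["a", "b", "c", "d"], {"a", "b"})
```

A pathway whose members cluster toward the top (or bottom) of the ranked list produces a large positive (or negative) score, whose significance GSEA then assesses by permutation.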
Enrichment of the ribosome pathway was driven by coordinated upregulation of several ribosomal protein genes (for example, ribosomal protein L31, S7, S24, L35 and L7). Several genes involved in the mitochondrial processes of electron transfer and ATP synthesis demonstrated increased expression in the obese cohort, leading to a significant enrichment of the 'oxidative phosphorylation' pathway in this group. Genes contributing to the core enrichment of this pathway included cytochrome c oxidase subunits 6C, 7B and 7C, NADH-coenzyme Q reductase, NADH dehydrogenase beta subcomplex 3, etc. Custom Pathway analysis In addition to investigating pathway enrichment based on the KEGG database, we also subjected a set of 'custom' pathways to analysis by GSEA (Additional File 6). GSEA analysis of the custom pathways identified 2 pathways as significantly upregulated in the obese subjects, at a nominal p-value < 5% and FDR < 5%. These were the 'electron transport chain pathway' and the 'erythrocyte/reticulocytespecific_affytechnote' pathways (Additional File 9). The 'electron transport chain pathway' (National Cancer Institute Pathway Interaction Database [30]) is a subset of the KEGG 'oxidative phosphorylation' pathway. The 'erythrocyte/reticulocytespecific_affytechnote' pathway consists of genes reported to be selectively enriched for expression in erythrocytes/reticulocytes (Affymetrix, [31,32]). Identification of this gene-set as an obesity-upregulated pathway further supports our earlier observation of increased expression of individual erythrocyte/reticulocyte-specific genes in the obese subjects. Details are provided in Additional Files 10 and 11. Effects of gender on pathway enrichment Since our study cohort contained both male and female subjects, the contribution of gender to pathway enrichment was investigated.
To determine whether pathway ranks were influenced by gender, we carried out independent gene-set enrichment analyses on subgroups comprising only female or only male subjects. We compared the relative ranks of the KEGG pathways in the three analyses as an indication of their sensitivity to gender. 'Apoptosis' was ranked 7th, 8th and 3rd, and 'oxidative phosphorylation' was ranked 10th, 12th and 18th, for all subjects, females and males, respectively. The 'ribosome' pathway was the top-ranked pathway in the all-subjects and female-only analyses, but was ranked 27th in the analysis involving the males. We repeated the same subgroup analyses on the custom pathway set, and in all cases the 'electron transport chain pathway' and the 'erythrocyte/reticulocytespecific_affytechnote' pathways remained the top 2 ranked pathways for all groups tested. Details are provided in Additional File 12. Effect of cell populations on pathway enrichment Since whole blood consists of a mixture of various cell types, we investigated the relationship between the observed enrichment of the "ribosome", "apoptosis" and "oxidative phosphorylation" pathways in the obese subjects and the enrichment of reticulocytes/erythrocytes in obese subjects reported previously [23-26]. We scaled the gene expression data independently by the expression of 2 erythrocyte-specific transcripts, hemoglobin D (HBD) and erythrocyte membrane protein, band 2 (EMPB2), and subjected the scaled data to gene-set enrichment analysis. Of the original 3 pathways found to be enriched in the obese subjects, the "ribosome" pathway was still the top differentially expressed pathway with both unscaled and scaled data. However, the "apoptosis" and "oxidative phosphorylation" pathways were no longer significantly enriched with either of the scaled datasets. Pathway enrichment results with scaled data are provided in Additional File 13.
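The marker-based scaling used to probe cell-population effects can be sketched as follows. The matrix layout and gene labels are hypothetical; "HBD" stands in for either of the two erythrocyte-specific transcripts used in the study.

```python
import numpy as np

def scale_by_marker(expr, genes, marker="HBD"):
    """Scale each sample's expression profile by that sample's
    expression of a cell-type-specific marker transcript, one way to
    adjust for differences in cell-type proportions (e.g. hematocrit).

    expr   -- (genes x samples) expression matrix
    genes  -- list of gene symbols, one per row of expr
    marker -- symbol of the marker transcript to scale by
    """
    m = expr[genes.index(marker)]   # marker expression, one value per sample
    return expr / m                 # divide each sample (column) by its marker

# Toy matrix: 2 genes x 2 samples; row 0 is the marker itself.
scaled = scale_by_marker(np.array([[2.0, 4.0],
                                   [10.0, 40.0]]),
                         ["HBD", "GENE1"])
```

After scaling, the marker row is identically 1 across samples, and the remaining rows express each gene relative to the marker-proxied cell fraction in that sample.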
Class prediction via blood gene expression We next examined whether the biological pathways implicated by gene-set enrichment analysis in the current study could provide a set of mechanism-based gene predictors capable of predicting obese and lean subjects with high accuracy. We created an initial, inclusive set containing all genes (features) belonging to the ribosome, apoptosis or oxidative phosphorylation pathways (183 genes). Since this list was also likely to contain redundant and non-informative genes, we applied two independent feature selection algorithms to identify a smaller set of genes capable of distinguishing between the obese and lean phenotypes with high success rates, based on the metrics specific to the two algorithms used (described in detail in Materials and Methods). A search for overlapping genes scoring high in both algorithms (ranked within the top 20 genes in both) resulted in a set of 11 genes. The logged gene expression signals from the full (183) and filtered (11) gene-sets were then used as inputs to four different classifiers representing distinct algorithmic approaches to classification and prediction. These included the Naive Bayes, Logistic Regression, Random Forests and ZeroR classifiers. A full description of the classifiers is presented in Materials and Methods and Additional File 14. Each classifier was first trained on a randomly selected 66% of the samples and then used to predict the class for the remaining 33% of samples. The process was repeated 100 times for each classifier. Classifier performance was evaluated by four parameters (true positive, true negative, false positive and false negative rates). A description of the performance evaluators can be found in Additional File 14. The classifier ZeroR simply predicts the same class for all instances and was used as a baseline classifier. Any classifier should perform significantly better than ZeroR in order to be considered useful.
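The consensus step that reduced the 183 candidate genes to 11 amounts to intersecting the top-ranked genes from the two feature-selection methods. A minimal sketch, with hypothetical gene names:

```python
def consensus_features(rank_a, rank_b, top=20):
    """Genes ranked within the top `top` positions by BOTH
    feature-selection methods (here, Information Gain and the
    Cfs Subset Evaluator in the study).

    rank_a, rank_b -- lists of gene names ordered best-first
    Returns the overlap, preserving rank_a's ordering.
    """
    top_b = set(rank_b[:top])
    return [g for g in rank_a[:top] if g in top_b]

# Toy ranked lists from two hypothetical selection methods.
selected = consensus_features(["g1", "g2", "g3", "g4"],
                              ["g3", "g1", "g5", "g2"], top=3)
```

Requiring membership in both top lists filters out genes that score well under only one selection criterion, which is how the final 11-gene predictor set was obtained from the two top-20 lists.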
Table 2 compares the performance of the four classifiers with either the full gene-set (183 genes) or the filtered set (11 genes). For each of the four performance evaluators, we plotted the average and standard deviation values over the 100 iterations. Overall, the Naïve Bayes and logistic regression classifiers performed better than the decision-tree based classifier (Random Forests), and all three classifiers performed significantly better than ZeroR. A comparison of the classifier results with the full (183) or filtered (11) gene-set inputs showed that both inputs had similar true positive and false negative rates. Both Naïve Bayes and logistic regression classifiers displayed high sensitivity, as indicated by true positive rates close to 1.0. These two classifiers also demonstrated lower false-positive rates with the filtered gene set compared to the full gene set. Additionally, the filtered gene set classifiers displayed higher specificities (true negative rates) compared to the full gene set based classifiers. Based on these results, the 11-gene Naïve Bayes and logistic regression classifiers outperformed the 183-gene classifiers for predicting class membership. The identities of the 11 genes are shown in Table 3 and appear to be primarily composed of genes from the oxidative phosphorylation and apoptosis pathways.
Discussion Our study demonstrates significant gene expression differences in whole blood from age-matched obese and lean subjects of Northern European White genetic ancestry. These differences further led to the identification of differentially enriched biological pathways in obesity and to an increased appreciation and understanding of genomic changes in whole blood related to body mass expansion. The current study is not designed to resolve whether the observed transcriptional differences are causal or caused, i.e. whether the differences in gene expression are related to the development of obesity or reflect an adaptive mechanism in response to increased body mass. Although blood is usually not considered a target organ for obesity, certain observations are pertinent. First, the physiological role of blood as a sentinel tissue and a systemic integrator of tissue- and organ-level perturbations could lead to adaptive responses to major metabolic perturbations such as excessive build-up of body mass and the attendant increases in the demand for nutrient and oxygen transport. Secondly, the chronic low-grade tissue inflammation observed in obesity [33] is expected to have a direct effect on circulating leukocytes, including immune dysfunction and apoptosis. Finally, macrophages in blood share many functional and antigenic properties with preadipocytes and adipocytes, and transcriptome profiles of preadipocytes are reportedly closer to those of macrophages than to adipocytes [34]. In this context, our study provides the first detailed investigation of the blood transcriptome in relation to obesity and provides evidence in favor of its dynamic involvement in the process. It is important to note here that the between-group differences in gene expression were usually small and there was considerable heterogeneity in individual gene expression values among subjects in the obese or lean categories.
However, the between-group variation exceeded the within-group variation for several genes, leading to statistically significant differences between the groups. Additionally, as demonstrated by principal components analysis, blood gene expression profiles were able to distinguish lean subjects from obese subjects even when the subject classes were not specified a priori (unsupervised clustering). Since gene expression measures were used as input for the PCA analysis, these results suggest that the differences in blood transcript levels between obese and lean subjects were significant and informative enough to cause a separation between the two classes. The application of pathway analysis provided additional information and insight into the biological processes that are differentially regulated in obese and lean blood samples. Some of the pathways with increased component transcript abundances included the "ribosome", "apoptosis" and "oxidative phosphorylation" pathways. Upregulation of the ribosomal pathway in the obese subjects was due to increased expression of several ribosomal protein-encoding genes, indicative of enhanced protein synthesis in blood cells, possibly as a consequence of enhanced metabolic demands in the obese state. This observation is consistent with a recent report that links ribosomal RNA synthesis to cellular energy supply through activation of the AMP-activated protein kinase [35]. The presence of increased apoptosis in the obese phenotype has also been well documented in animal and human cell culture models. For example, increased cardiomyocyte apoptosis has been reported in leptin-deficient ob/ob mice and leptin-resistant db/db mice [36]. Prolonged exposure to free fatty acids also has pro-apoptotic effects on human pancreatic islets [37], and circulating cytokines such as tumor necrosis factor alpha (TNF-α) have been reported to induce apoptosis in cultured human preadipocytes and adipocytes [38].
Our findings now provide evidence for the activation of a similar apoptotic program in blood from obese subjects. While the current study does not allow us to pinpoint the cause of the enhanced apoptosis, we speculate that obesity-associated chronic inflammation [39,40] or lipotoxicity are contributing factors. Finally, the observed upregulation of the 'oxidative phosphorylation' pathway is consistent with a response to increased energy demands in obese subjects. Functional and gene expression studies have previously indicated impairment of oxidative phosphorylation and mitochondrial function in subjects with type 2 diabetes compared to controls [29,41,42]. Our findings are consistent with Takamura et al., who demonstrated an upregulation of oxidative phosphorylation genes in the livers of obese, type 2 diabetic patients compared to non-obese diabetics [43]. More interestingly, our findings now point to a similar involvement of energy-harvesting mechanisms in obese blood and provide further evidence in favor of a role for mitochondrial dysfunction in obesity [44,45]. A gender-based sub-analysis demonstrated relative stability of the "apoptosis" and "oxidative phosphorylation" pathway ranks in both genders; in contrast, the "ribosome" pathway differed significantly in rank between females and males, suggesting a gender-specific effect (Additional File 7). Since a majority of genes upregulated in the obese subjects are highly expressed in erythrocytes and reticulocytes, we scaled the gene expression data independently by the expression of two erythrocyte-specific transcripts, hemoglobin D (HBD) and erythrocyte membrane protein, band 2 (EMPB2), and subjected the scaled data to gene-set enrichment analysis.
Of the three pathways found to be differentially upregulated in the obese subjects, the "ribosome" pathway remained the top differentially expressed pathway (with the scaled data) whereas the "apoptosis" and "oxidative phosphorylation" pathways were no longer significantly enriched, with either of the scaled datasets. These findings suggest that an increase in erythrocyte/reticulocyte numbers in the obese (differential hematocrit) is a possible explanatory mechanism for the observed increase in transcript levels for "apoptosis" and "oxidative phosphorylation" in the obese subjects. The results for the "ribosome" pathway, in contrast, suggest a significant upregulation of the transcripts for the component genes of this pathway in the obese subjects, even after adjustment for erythrocyte-specific gene expression. We note one caveat to the scaling approach used here for investigating cell number effects. Since the same amount of cRNA was used from each sample for hybridization, the relative enrichment of cell types is expected to have a real effect on gene expression only for genes that are differentially expressed among the cell types (e.g. hemoglobin transcripts that are expressed only in reticulocytes and not lymphocytes). For genes expressed at comparable levels across cell types, the differential cell type representation should not have an effect on expression unless there is a true upregulation or downregulation of these genes between the two groups (although the cellular origin for the differential expression may not be known). Scaling the gene expression data by the expression of reticulocyte/erythrocyte specific genes cannot distinguish between the above two mechanisms of enhanced gene expression and can lead to potentially incorrect conclusions. 
However, our results clearly demonstrate that inter-individual variation in hematocrit, especially between obese and lean subjects, may affect the interpretation of expression data and should be considered an important covariate in future studies. Several recent publications have reported the successful application of gene expression signatures as classifiers or predictors of phenotypic class, disease progression and therapeutic prognosis, primarily in the diagnosis and treatment of several types of cancers [16,46-48]. However, the biological mechanisms linking the predictive genes to the outcomes being predicted are not always clear, and this lack of mechanistic insight has often been criticized as a barrier to the clinical utility of gene predictors. One solution to the problem is to choose gene predictors from biological pathways associated a priori with the phenotype or outcome of interest. This approach was pursued in the present study and led to the identification of an 11-gene classifier that could distinguish and predict obese and lean subjects with high accuracy. Our motivation for this exercise was to provide proof-of-concept data to test whether blood gene expression patterns can have predictive value in the context of obesity. While such prediction is not necessarily required for distinguishing obesity from leanness, blood-based gene biomarkers could significantly advance the clinical management of obesity by, for example, allowing the prediction of weight loss success from diet or bariatric surgery. One potential downstream application of differential gene expression analysis in whole blood is the selection of candidate genes with possible regulatory polymorphisms (single nucleotide polymorphisms in promoter regions, for example) that associate with obesity and help explain the observed differences in expression.
Comprehensive sequencing of the regulatory regions of such candidate genes is expected to yield additional insights into the genetics of obesity, such as the identification of expression QTLs (eQTLs). While a direct subject-level association of gene regulatory polymorphisms with gene expression levels is outside the scope of the current work, we conducted a preliminary analysis of the existence of putative regulatory variants in the 11 gene predictors identified in our analysis. Based on data from the NCBI dbSNP database (Build 131), several genes contained common sequence variants near the 5'-end of the gene, spanning a region 2000 bases upstream of the start codon (SNPs rs2515192 and rs3019164 for ATP6V1C1, rs1317775 and rs1318199 for BIRC2, rs11709092 for PRKAR2A, etc.). It is reasonable to speculate that a subset of these upstream sequence variants could influence transcription. Our study relied on whole blood collected in PAXgene tubes instead of peripheral blood mononuclear cells (PBMCs), consistent with our ultimate goal of identifying clinically relevant and useful predictors of weight loss success. This procedure, however, has the disadvantage of investigating a relatively heterogeneous cell population, where noise could mask gene expression differences in specific cell types. PBMCs, consisting of lymphocytes and monocytes, provide a consistent and homogeneous sample for transcriptome analysis. However, the extra fractionation procedure for PBMCs requires a prolonged period before RNA stabilization, leading to significant ex vivo changes in gene expression profiles [49]. Additionally, compared to whole blood, several cell types including neutrophils, basophils, eosinophils, platelets, reticulocytes and erythrocytes are depleted in PBMCs, which leads to the loss of important transcriptional information.
On the other hand, PAXgene samples show a decrease in the number of expressed genes and lower gene expression values with higher variability compared to PBMCs [50], primarily due to the high abundance of globin transcripts, which constitute over 70% of whole blood mRNA [51]. However, the PAXgene system provides an easy way to collect, store, transport and stabilize RNA from whole blood and, based on our overall goals, was the method of choice for our analysis. In this context, the ability of gene expression signatures from biologically relevant pathways to accurately classify and predict obese and lean classes, as observed in this study, provides further validation of our approach and suggests the future suitability of the PAXgene-based whole blood transcriptome for yielding clinically usable biomarkers related to weight regulation. Additional sensitivity could be obtained in future studies via selective reduction of the globin transcripts in whole blood RNA samples [52,53]. The current study has several limitations. First, since the study employed whole blood, the relative contributions of cell-type numbers and cell-type-specific transcriptional programs to the observed gene expression differences cannot be clearly delineated. Second, the relatively small sample sizes reduced the power for detection of subtle differences in expression. Also, due to small sample numbers, we had to rely on cross-validation methods for calculation of prediction errors instead of testing candidate predictors on new samples. The possibility of over-fitting cannot, therefore, be entirely ruled out.
Conclusions Gene expression profiling in whole blood demonstrated significant differences in transcript levels that were capable of separating obese and lean phenotypes in multivariate analysis. Gene-set enrichment analysis further identified differences in biological pathways relating to cell survival, protein synthesis and energy harvest between the obese and lean groups. A subset of genes responsible for pathway enrichment also acted as efficient predictors of phenotype (obese or lean) when their expression signatures were used as inputs to Naive Bayes or logistic regression based classifiers. Together, our study is the first to investigate the information content in whole blood in relation to obesity. Our findings demonstrate that the investigation of gene expression profiles from whole blood can inform and illustrate the biological processes related to regulation of body mass. Additionally, the ability of pathway-related gene expression to predict class membership suggests the feasibility of a similar approach to identify blood-based robust predictors of weight loss success in response to dietary and surgical interventions.
Background Obesity is reaching epidemic proportions and represents a significant risk factor for cardiovascular disease, diabetes, and cancer. Methods To explore the relationship between increased body mass and gene expression in blood, we conducted whole-genome expression profiling of whole blood from seventeen obese and seventeen well matched lean subjects. Gene expression data was analyzed at the individual gene and pathway level and a preliminary assessment of the predictive value of blood gene expression profiles in obesity was carried out. Results Principal components analysis of whole-blood gene expression data from obese and lean subjects led to efficient separation of the two cohorts. Pathway analysis by gene-set enrichment demonstrated increased transcript levels for genes belonging to the "ribosome", "apoptosis" and "oxidative phosphorylation" pathways in the obese cohort, consistent with an altered metabolic state including increased protein synthesis, enhanced cell death from proinflammatory or lipotoxic stimuli, and increased energy demands. A subset of pathway-specific genes acted as efficient predictors of obese or lean class membership when used in Naive Bayes or logistic regression based classifiers. Conclusion This study provides a comprehensive characterization of the whole blood transcriptome in obesity and demonstrates that the investigation of gene expression profiles from whole blood can inform and illustrate the biological processes related to regulation of body mass. Additionally, the ability of pathway-related gene expression to predict class membership suggests the feasibility of a similar approach for identifying clinically useful blood-based predictors of weight loss success following dietary or surgical interventions.
Competing interests The authors SG, SG(Gorman), and JS are former or current employees of GlaxoSmithKline and have equity in the company. The other authors (RD, RM, MEH) have no disclosures. Authors' contributions SG carried out the experimental design, data analysis, interpretation and drafted the manuscript. RD provided phenotype information and samples for transcriptome analysis and edited the manuscript. MEH participated in experimental design and interpretation of microarray data and manuscript editing. SG(Gorman) performed the Taqman analysis. JS coordinated the sample management, RNA isolation and microarray hybridizations. RM had overall oversight of the study and helped prepare the final version of the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1755-8794/3/56/prepub Supplementary Material
Acknowledgements This work was conducted with grant support from GlaxoSmithKline. Part of the study was supported by NIH grants NHLBI-5R25HL059868-10 and NIDDK-1R21DK088319-01 (Ghosh) and a grant from the Heart & Stroke Foundation of Ontario (NA-5413; McPherson, Dent and Harper).
BMC Med Genomics. 2010 Dec 1; 3:56
PMC3014866
21122118
Background The rich-poor gap in maternal outcomes has been examined largely by means of quantitative data and explained principally in terms of poorer women's reduced chances of receiving prenatal care [ 1 ]. During prenatal care, women undergo screening and receive treatment for conditions that could be life-threatening. Poverty hampers women's ability to use otherwise available maternal care services. For instance, lack of resources to pay for transportation could frustrate access to quality care at critical moments. Other accounts have emphasized poorer women's elevated odds of depression and of using alcohol, tobacco, and other harmful substances [ 2 , 3 ]; their higher risk of food insufficiency and insecurity and of poor feeding practices and habits, resulting in malnutrition and obesity; reduced access to vitamins and minerals; maternal thinness; decreased blood flow; infections; gestational diabetes; pre-eclampsia; large size for gestational age; fetal macrosomia; cesarean delivery [ 4 - 6 ]; increased risk of STIs, including HIV [ 7 ]; and a higher probability of closely-spaced births, which leads to substantial loss of vital stores of micronutrients, decreased opportunity to restock nutrients, and maternal depletion, morbidity, and mortality [ 8 ]. But how do poor women themselves view economic disadvantage vis-a-vis undesirable maternal outcomes? How, in poor women's views, does poverty engender adverse health outcomes? And what are poor women's lived experiences of the poverty-adverse maternal outcomes link? Prior research has scarcely paid attention to lay views of the relationship between poverty and maternal health outcomes. While studies exist on poor women's views of the factors that constrain their access to formal obstetric care services, very few have addressed women's views of the specific ways through which poverty, as a particular and oft-mentioned social determinant of health, impinges on maternal health as a whole.
One recent paper, for instance, addressed barriers related specifically to formal emergency obstetric care utilization among poor urban Kenyan women [ 9 ]. While it spotlighted poverty as a key inhibitor of urban Kenyan women's use of emergency obstetric care services, the study did not explore lay views of the relationship between economic disadvantage and maternal health as a whole. Yet, it is important to understand 'ordinary' people's viewpoints of the interaction between economic disadvantage and health status. Lay views can expose complexities in the relationships between wellbeing and livelihoods, and thicken contextual understanding of the social and cultural determinants of health. Lay views can also offer researchers the resources with which to more fully contemplate the lives and experiences of individuals affected by real-world problems, understand how human needs nurture new possibilities and alternative practices, and reveal the intricate dynamics and channels through which poverty impacts local people and communities [ 10 ]. The current study examines the direct views of urban poor Kenyan women on the relationship between economic disadvantage and poor maternal outcomes. Focus on urban poor Kenyan women is, on the whole, important. Research shows that they are among the worst hit by adverse maternal outcomes in Kenya. For instance, while the Kenya Demographic and Health Survey of 2003 shows that about 414 maternal deaths occur per 100,000 live births, some poor urban areas of Kenya have maternal mortality figures of 1,300 deaths per 100,000 live births [ 11 , 12 ]. Results of recent research in the same slums where the current study was implemented showed a maternal mortality ratio of 706 deaths per 100,000 live births [ 13 ]. There is also ample evidence of the disproportionate representation of urban poor women among the thousands of reproductive-age Kenyan women who suffer short and long-term maternal-related complications and morbidities [ 12 , 14 , 15 ].
In addressing women's views of the relationship between economic disadvantage and adverse maternal outcomes, this paper thus seeks to enlarge understanding of the role of poverty in maternal outcomes well beyond questions of poor nutrition and under-utilization of appropriate maternal services.
Method The current study is a secondary analysis of qualitative data collected in two slums in Kenya. Conducted as part of a larger three-country investigation of maternal health in cities in Ghana, India, and Kenya, the parent study sought to illuminate issues surrounding maternal and child health as well as emergency obstetric care utilization in resource-poor urban contexts. The Kenya study was conducted in two slum settlements in Nairobi: Viwandani and Korogocho. In these two sites, the African Population and Health Research Center (APHRC) operates the Nairobi Urban Health and Demographic Surveillance System (NUHDSS), with about 60,000 registered persons. Among many other issues that emerged from the study, the current paper addresses women's perspectives of the impact of poverty on maternal health. Other themes that have been explored using the rich qualitative data generated in this study include the persistence of homebirths, notions of hospital and home-based deliveries, and lay constructions of barriers to the utilization of emergency maternal care [ 9 , 12 , 16 ]. The current study relies on data from focus group discussions (FGDs) and in-depth individual interviews (IDIs), held with a variety of poor women. In total, 12 FGDs and 12 IDIs were held. Table 1 below describes the sample and recruitment strategy. Respondents were selected using a multistage sampling design. The first stage was a household survey of reproductive-age women to identify those with experiences of pregnancy-related 'near-misses' and complications in the two years preceding the study. In each study site, three FGD sessions each comprising about eight purposively-selected women were held. These women were identified during the household survey as having suffered pregnancy-related 'near-misses' and complications. Purposive selection ensured the participation of women with varied demographic, reproductive, and maternal health experiences. 
One FGD session was also held in each slum community with eight purposively-selected women leaders drawn from critical religious, civic, political and cultural publics. Given their popularity in the study communities, we also interviewed traditional birth attendants (TBAs). Two FGDs (one in each community) were held with twenty-three TBAs. These TBAs were identified and recruited with the aid of key informants. Further, in-depth individual interviews (IDIs) were held with twelve purposively-selected women reporting personal experiences of pregnancy-related 'near-misses' and complications. Four different interviewing guides (three FGD guides and one IDI guide) targeting the different categories of respondents defined in Table 1 were used in the study. The guides were developed based on information from the literature as well as survey data from APHRC's maternal health project, and were also pretested during the pilot phase of the study. The interviews were conducted using a focused qualitative interviewing schedule, administered in Swahili (Kenya's national language) by trained female fieldworkers with extensive experience in conducting qualitative interviews. The fieldworkers, who were mostly university graduates or students, were employees of APHRC's NUHDSS at the time of the study. They had an average age of 29 years and were all Kenyans. All discussions were audio-recorded and later transcribed into English. FGDs took place in private settings (a community school classroom, civic hall, or other rented spaces). The IDIs were conducted at the homes of the respondents. When this was not ideal or possible, fieldworkers worked with interviewees to agree on an alternative place. In all cases, we insisted on locations and spaces free of the watchful eyes, threat of sanctions, and influence of nonparticipating onlookers and gatekeepers. Interview sessions typically lasted an average of forty-five minutes.
Among other things, interviews sought respondents' views regarding women's use of particular types of birthing facilities; their beliefs related to the use of particular birthing services; their knowledge of the factors that affect maternal outcomes; as well as their experiences and perceptions about poverty as it relates to maternal outcomes. The Ethical Committee of the Kenyan Medical Research Institute approved the research procedures. Informed consent was also obtained from participants before the interviews were conducted. APHRC enjoys widespread rapport in the communities, having maintained a research presence there for nearly a decade. As a result, every respondent approached to participate in the study agreed to do so. Most of the fieldworkers were also residents of the study communities and thus known to the participants. Using NUDIST software, interview data were simultaneously but separately coded by the authors and a qualitative data coder. Later, the authors and coder met to review the coding outcomes and ensure inter-coder concordance. Data saturation was a concern because the sample was both small and comprised women with varied reproductive health experiences. A qualitative inductive approach involving thematic examination of the narratives was adopted to analyze the data [ 17 ]. This approach is geared toward improving understanding of meanings and messages in narrative data through the continual investigation of the themes emerging from the transcripts for categories, linkages, and properties. Transcripts were also further discussed with participants and colleagues, and clarifications were made based on their feedback and input. The categories that emerged were then contrasted with one another to guarantee the distinctiveness and specificity of their properties. In many instances, word-for-word quotations are used to exemplify responses on pertinent issues and themes. To guarantee anonymity, pseudonyms are used in the paper.
The findings of the current study have also been presented to participants, community members, and scientific audiences; feedback from these presentations has informed the analysis presented here.
Results The respondents Responding women were from a diversity of socio-demographic and livelihood backgrounds. Their ages ranged from 16 to 70, averaging roughly 39. Livelihoods for the bulk of them were based primarily on informal economic activities: petty trading, manual laboring, and craftsmanship. TBAs, full-time housewives, and women without personal income sources were also in the sample. Among slum dwellers in Nairobi, incomes are generally very low, with slum women often earning less than their male counterparts [ 18 ]. Respondents frequently self-reported as married. Only a handful of them were divorced, single, or widowed. While respondents' educational profile indicates that they mostly had primary-level schooling, a substantial number reported secondary-level education. Participants with tertiary-level or without formal education were marginal in the sample. The women we studied self-reported largely as Luos, Kikuyus, Luhyas, and Kambas. The other reported ethnicities were Somali, Taita, Gare, Kisii, Olakaye, Borana, and Kuria. Most respondents self-identified as Christians. They were also nearly uniformly distributed between the two slum sites. Perceptions of maternal health in the slums Study narratives unequivocally underlined the critical importance of maternal health. 'When a mother is unwell, the whole family is unwell'; 'Without a strong and healthy mother, everybody in the household will suffer', responding women regularly admitted. Women's health was described as key to children's survival, household wellbeing, and societal continuity. Good health reportedly equips mothers to more competently care for their children and households, bear healthy children, and contribute positively to family upkeep and wellbeing. Poor maternal health was, in contrast, reported as likely to sap family resources, lead to deficient child care, and foster household poverty.
Interlocutors frequently noted that healthy mothers and women contributed more positively to the community and participated more in neighborhood development and organization efforts. In general, respondents considered maternal health in the slums to be very poor. 'Many women here have poor health...and many of them die during pregnancy and childbirth', asserted one woman community leader. The respondents noted that maternal mortality and morbidity were common in their communities. Large numbers of women living in the communities reportedly die or take ill during pregnancy and the post-partum period. Pregnancy loss, fetal deaths, stillbirths, unsafe abortions, and HIV were also said to be very common in the slums. Most interlocutors themselves admitted to having suffered life-threatening maternal health problems; and several of them also knew at least one woman in the community who had a maternal health problem. 'It is common for women here to be sick during pregnancy, sometimes you will see them with swollen legs and others looking really sickly', observed a responding TBA. Hemorrhage, anemia, hypertension, malaria, placenta retention, premature labor, prolonged/obstructed labor, and convulsions/seizures (pre-eclampsia) were the commonly-mentioned maternal health problems in the study communities. These problems reportedly often resulted in fetal deaths, premature births, pregnancy loss, and maternal mortality, morbidity, and deformity. In the very apt language of one respondent: In this community, most people are poor...the women are always sick and sometimes they do nothing about their health because they do not have money to seek treatment. It is common to see women here die from bleeding, convulsion, and premature labor and births. The other day, my neighbor almost died from prolonged labor. The baby died. She was in labor for days... and was delivering at home.
Traditional birth attendants (TBAs); private, missionary, and public formal facilities; itinerant peddlers of western medicines; chemists; herbalists; and religious and magical healers were identified as key providers of maternal health care in the slums. Each provider-type reportedly had its strengths and weaknesses. For instance, the availability of providers and equipment that could make pregnancy and childbearing safer was mentioned as the major benefit of hospital-based care: hospital-based delivery reportedly put women under the care of skilled providers and ensured the ready availability of equipment for managing emergencies and difficult deliveries. Informal providers (e.g., TBAs) reportedly lacked these skills and tools. Martha, a respondent, admitted that hospital-based providers saved her life. She sought delivery services from a TBA and stayed two nights in the TBA's house writhing in labor pains. Finally, the baby arrived feet first. Martha was scared and asked to be transferred to a hospital, but the TBA refused, promising that she could handle the situation. However, Martha recognized she was in grave danger and crawled out of the TBA's house. She was lucky to find a taxi to take her to a hospital in the city. She passed out upon reaching the hospital and remembered waking up with a baby girl by her side. Martha is convinced that she would have died if she had remained at the TBA's home. Yet, hospital-based deliveries were generally considered to be very expensive and often out of the reach of slum women. Hospital-based maternal care providers were also perceived as harsh and unsympathetic toward poor women. Noted one woman: 'Even those facilities belonging to government or churches and offering free or discounted services, it is not easy for us to make use of them. They may not even ask for anything from you, but ... the whole thing is not easy for us ...
You still have to convey yourself there, pay for tests, and buy drugs ... sometimes we just can't pay for all these because of poverty ... so we go to the TBAs'. While respondents frequently admitted to the superiority of the hospital as a delivery site, they viewed it primarily as a birthing site for women anticipating or at risk of obstetric emergencies and difficult deliveries. Respondents tended to consider the management of uncomplicated deliveries to be the time-honored role of TBAs, who were depicted as naturally and divinely gifted to assist during deliveries. TBAs' inborn expertise and skills were also viewed as more effectual and reliable than the learned practice of hospital-based providers. One responding woman's view that TBAs were divinely-gifted with the abilities to help women received massive support among the participants. The same approving response greeted the view of a middle-aged FGD participant that 'Many TBAs are better than hospital providers when it comes to handling deliveries. It is their work and many of them are really good at it.' Another woman also noted: 'They (TBAs) may not be as good as the doctors and nurses, but they help us a lot'. Self-treatment during pregnancy and the post-partum period, as well as self-assisted deliveries, were also commonly reported by the women. One woman admitted that she does not go to hospitals or to TBAs for any pregnancy-related conditions. She self-treats by buying medicines from chemists. Another woman reported that she delivered all her last three children assisted only by her teenage daughter. Poverty and adverse maternal outcomes Respondents generally acknowledged their economic disadvantage and vulnerability, commonly commenting that: 'Most of us here are poor'; 'The poorest people in Kenya live here'; 'Only the poor like us live here'; 'Most people you see here are poor'; 'Here in the slums, you will find the poorest of the poor in Kenya'.
Responding women widely associated poverty with key social problems, including insecurity, deprived housing conditions, poor nutrition, unsafe abortion, inability to educate one's children, alcoholism, drug use, crime, delinquency, etc. The narratives we collected strongly linked poverty and negative maternal outcomes, casting poverty as the major killer of women in the slums and a key hindrance to women's wellbeing and survival during pregnancy and the post-partum period. Of course, they linked poverty to decreased utilization of appropriate antenatal care and delivery services as well as to poor nutrition. Due to poverty, slum women were reportedly often undernourished, scarcely used quality maternal services, and delivered at home. Poor nutrition also reportedly left women with poor quality blood and insufficient nutrients to go through pregnancy and the period surrounding it. Starving, weak, or underfed mothers were said to be a common sight in the slums. Such women usually die, get sick, or suffer complications during pregnancy and the post-partum period. Poverty was also said to hamper slum women's access to quality care. Facilities that offer good services in the slums were often privately-run and charged exorbitant prices. They were thus beyond the reach of poor women. The cost of reaching quality public maternal health services located outside the slums also emerged as a major hindrance to women's access to quality care. Several responding women also frequently implicated poverty in their own problematic maternal outcomes and in the maternal complications of other women personally known to them. Respondents admitted to seeking homebirths because of their affordability. During homebirths, women did not have to pay for transportation, registration, laboratory, and other costs, including bribes reportedly offered to formal providers to facilitate services.
They also did not have to pay for supplies such as transfusion blood, syringes, needles, drugs, and sanitary materials, which would be incurred during a hospital stay. Josephina, a mother of four, brought into bold relief the implications of poverty for women's uptake of hospital-based maternal services. She gave birth to her first baby in a public health facility in Nairobi at a time when she was unemployed and her husband did not have a stable job. Josephina recalled going to the hospital numerous times for consultations and says that she spent a lot of money during the period. There were days she would trek to the hospital due to lack of transport fare. In addition to paying various amounts for minor services, she also regularly bribed hospital staff to ensure that she would receive swift attention in the hospital. Josephina also paid in advance for blood that she would be transfused with, although she never received any at delivery and was never refunded her money. She was also requested to buy her own supplies (e.g., sanitary towels, cotton wool, and syringes), which were deposited in the hospital. Labor began for Josephina at night and her husband had to pay about 600 Kenyan shillings ($10 U.S.) to hire a taxi to transport her to the hospital. While acknowledging the risks in homebirths, Josephina says, 'unlike homebirths, hospital-based deliveries make poor people poorer...' However, in the context of the slum, the women maintained that poverty engenders undesirable maternal outcomes not primarily by preventing women's access to quality nutrition and maternal services, but by exposing them to extremely heavy workloads during pregnancy, to intimate partner violence, as well as to inhospitable and poor treatment by service providers.
In what seemed to capture the view of most of our interlocutors, 32-year-old Anna asserted that 'Women here are poor, but they also devotedly attend antenatal services and they try to eat well during pregnancy...but poverty still causes us to have problems during pregnancy because of other things'. In her longer narrative, Anna suggested that among slum women, poverty operates through dynamics other than restricting women's access to quality services and nutrition to cause adverse health outcomes. The heavy workload which poverty reportedly pushes women into during pregnancy and the post-partum period was a prominent explanation offered for adverse maternal outcomes among slum women. For many responding women themselves, it was the heavy workload which they performed during pregnancy and the postpartum period that caused them adverse maternal experiences. Due to poverty, slum women reportedly continued to do much hard work during pregnancy and the period surrounding it. They would work on construction sites as head-carriers and loaders, stay out late selling their wares, or go from door to door looking for work. Hard work during pregnancy and the period surrounding it reportedly sapped women's energy and blood, leaving them weak and fragile. For the respondents, women worked harder during pregnancy in order to save enough money to prepare for birthing. To be able to cater for their babies, some women also reportedly resumed heavy work immediately after delivery. Julie blamed the heavy workload during her pregnancy for the severe anemia she suffered. Always exhausted, Julie said she never rested adequately and blamed it all on poverty: she needed to save enough money to prepare for the baby and the time she would spend at home after delivery. As she continued to toil during this precarious period, Julie became burnt out and anemic.
Aloeci, a 27-year-old mother who reported that she nearly died 5 days after her delivery, also linked her 'near-miss' experience to heavy workload. She worked as a cleaner till two weeks before her delivery and resumed her job 4 days after delivery. It was on her first day at work after delivery that she suffered heavy bleeding. 'I had to start working immediately or I would starve with my children. I would have stayed at home and rested. But now I needed money. I nearly died'. Women's workload also reportedly increased during pregnancy and soon after birthing because husbands never brought home enough for the upkeep of households. Some men also reportedly run away when their wives become pregnant. In the case of Moriga, her husband chased her out of the house when she became pregnant, saying he did not have the resources to have another child. To prepare for the delivery of the baby, Moriga said she took a job as a bartender. However, she often worked late, rested little, and stood for long periods. One day, out of exhaustion, she collapsed at work and lost the baby. Another frequently reported means through which poverty promoted adverse maternal outcomes in the slums was by exposing women to intimate partner violence during pregnancy and the period surrounding it. Owing largely to poverty, hardship, and unemployment, men in the slums were reported to be often extremely frustrated and desperate, which led them into violent behavior toward their wives. The physical abuse of women by their male partners was reported as common in the slums, and held as a key issue in slum women's adverse maternal outcomes. Wanjiru admitted that she lost her pregnancy to the constant beating she received from her jobless and alcoholic husband. 'He used to be a good man before, but when he lost his job, he became frustrated and beat me all the time. Here, men drink a lot and go home to beat their wives', she asserted.
In her longer narrative, Wanjiru noted that most poor men in the slum get worried when their wives get pregnant. 'Because of the burden children bring, they get frustrated and often vent their frustration on their wives. It is not uncommon for men to kill their wives here when they are pregnant. That's what poverty causes here'. One unemployed man reportedly beat his wife until she gave birth prematurely. Another alcoholic and jobless husband allegedly kicked his pregnant wife in the stomach, killing her. A respondent had a neighbor who started beating his pregnant wife when he lost his job, until she suffered a miscarriage. There was also an account of a woman who experienced 4 stillbirths following 5 years of unrelenting physical mistreatment at the hands of an abusive, alcoholic, and jobless husband. A third major way, mentioned by the women, through which poverty reportedly engendered adverse maternal outcomes among slum women was by exposing them to inhospitable treatment by service providers. Respondents agreed that while most slum women were often very willing to use modern maternity and delivery services, they usually suffered poor treatment when they presented at these facilities. As the women's narratives stoutly implied, their poverty was to blame for this. Providers were reportedly uncharitable toward poor health seekers, often abandoning or ignoring them when they presented at formal facilities. Among other confirmatory narratives, a 27-year-old Korogocho mother observed that when poor women walk into hospitals in their inexpensive dresses, they are easily identified by nurses and doctors, some of whom even act towards them as if they smelled. She said, 'Some of them are so wicked that they will not pay you any attention until you are dying'. Providers were said to be deferential toward well-off care-seekers, who offer tips and bribes.
Poor women who cannot afford to give bribes and tips have to wait for long periods of time before receiving attention. Some poor women reportedly only receive attention when they faint in waiting lines or are almost dying. 'Sometimes you go to the clinic and you are in labor; they will just ignore you because you are poor. They know there is nothing you will give them; so they only come to you when you are on the floor, dying in your own pool of blood or water. Many women here have problems because they are poor and providers mistreat them'. The poor treatment received by poor women when they present at the hospital reportedly also pushes them to use less efficacious services, such as TBAs. Martha (aged 34) also noted, 'Child delivery costs a lot in the hospital and when poor people like us go there, we are treated shoddily'.
Discussion and Conclusion This study addressed poor Kenyan women's views and lived experiences of the link between poverty and adverse maternal outcomes. Its major limitations are that it is a secondary analysis, which confined us to an existing dataset; relies on information gathered from women who are not typical of a national or local sample of poor Kenyan peoples; and is characterized by a certain ethnographic thinness, as long-term field observations did not accompany the interviews. The small size of the sample also raises the possibility that data saturation may not have been achieved, which implies that the themes and issues under consideration may not have been exhaustively treated. Yet, it contains a number of important findings. For instance, like the women studied by other scholars, our participants underscored the importance of maternal health [ 9 , 19 , 20 ]. They recognized healthy mothers and women to be key to the wellbeing of children, households, and communities; described their communities as impoverished and characterized by very high levels of maternal mortality and morbidity; and were knowledgeable about the maternal health problems which face women in their community. Responding women's robust awareness of maternal health problems presents an important resource for current efforts to foster change. Women's views of the health problems affecting them are critical in the drive towards sustainability in maternal healthcare delivery [ 9 ]. Such information can help program implementers create more responsive and acceptable interventions. It can also contribute to the process of identifying important questions and relevant outcomes, and support the implementation of research findings [ 21 ]. Study respondents admitted to the many adverse impacts of poverty on their lives, including the high levels of morbidity and mortality in their communities.
According to them, however, among slum women, poverty produced adverse maternal outcomes not primarily by hindering adequate nutrition and the utilization of appropriate maternity services. Rather, they argued that in the slums, poverty engenders adverse maternal outcomes largely by compelling pregnant and puerperal women to take on heavy workloads; exposing them to experiences of intimate partner violence; and rendering them vulnerable to inhospitable treatment by service providers when they present for care. The potential of poverty to particularly expose women to heavy workloads during pregnancy and the period surrounding it has yet to gain prominence in the literature on adverse maternal outcomes. However, the women we studied held heavy workloads during pregnancy to be the key means through which poverty engenders adverse maternal outcomes among them. They generally noted that they worked harder during pregnancy in order to earn and save enough money to prepare for delivery. Maternal depletion syndrome, a major predictor of maternal and child health, has been linked to heavy workloads [ 22 ]. Among poor households, pregnant women often take on additional workloads to prepare for birthing [ 23 ]. Mebrahtu reports that in several African communities, poverty forces women to work untiringly during pregnancy and immediately afterwards [ 24 ]. This is a neglected dynamic which current efforts to reduce adverse maternal outcomes in poor communities must begin to recognize. Slum men were also reported as often physically abusive towards their wives during pregnancy and the post-partum period. Intimate partner abuse is very widespread in sub-Saharan Africa and occurs with increased regularity and severity among economically-disadvantaged pregnant women [ 25 ].
Costs associated with pregnancy may add to the exasperation and despondency of poor men, predisposing them to violent behavior [ 26 ]. Campbell associates intimate partner violence (IPV) with adverse maternal and fetal outcomes. Men's role in ensuring quality maternal outcomes in poor communities must go beyond their ability to recognize danger signs in pregnancy [ 27 ]. Men need help to realize how poverty may drive them to behave in ways that endanger the lives of their wives or female partners. Providers were also reportedly very uncharitable toward slum women because they were poor. When poor women presented at facilities, providers would reportedly abandon them, not listen to them, not ask them important questions, and not attend to them. Such mistreatment reportedly contributed to fatalities among slum women and discouraged some of them from seeking formal providers. Poor patient-provider relationships and provider inattention to health seekers' needs are foremost barriers to the uptake of formal care services and frequently mentioned factors in poor maternal outcomes in developing societies. Caregivers who treat low-income women unfairly lose a unique opportunity to contribute to the elimination of health disparities [ 28 ]. Better patient-provider relationships are a practical area of focus for improving maternal outcomes for poor women and households. In the current study, traditional birth attendants (TBAs) were considered accessible and affordable; they treated women more kindly and thus enjoyed more patronage. This suggests the critical role of TBAs in maternal health delivery in poor communities. In this respect, TBAs have a relevance which can be strengthened through extended training as well as back-up support from formal providers [ 16 ]. To conclude, our respondents recognized poverty as a major killer of slum women and a key hindrance to women's wellbeing and survival during pregnancy and the period surrounding it. 
Their lived experiences of the impact of poverty on maternal outcomes suggested, however, that poverty engenders adverse maternal outcomes largely by driving pregnant and puerperal mothers into heavy workloads; exposing them to intimate partner violence; and making them vulnerable to hostile treatment by service providers. Overall, the evidence presented here is critical for social and policy action aiming to improve maternal outcomes among poor women. It suggests wider and more complex implications of poverty for maternal outcomes than are readily acknowledged in extant research, raising an urgent need for efforts to promote better maternal outcomes to be guided by a rigorous understanding of the multiplicity of ways through which women's livelihoods mediate their health outcomes.
Background The link between poverty and adverse maternal outcomes has been studied largely by means of quantitative data. We explore poor urban Kenyan women's views and lived experiences of the relationship between economic disadvantage and adverse maternal outcomes. Method Secondary analysis of data from focus group discussions and in-depth individual interviews with women in two slums in Nairobi, Kenya. Results Urban poor women in Nairobi associate poverty with adverse maternal outcomes. However, their accounts and lived experiences of the impact of poverty on maternal outcomes underscore dynamics other than those typically stressed in the extant literature. To them, poverty primarily generates adverse maternal outcomes by exposing women to exceedingly hard and heavy workloads during pregnancy and the period surrounding it; to intimate partner violence; and to inhospitable and unpleasant treatment by service providers. Conclusions Poverty has wider and more intricate implications for maternal outcomes than are acknowledged in extant research. To deliver their expected impact, current efforts to promote better maternal outcomes must be guided by a more thorough understanding of the link between women's livelihoods and their health and wellbeing.
Competing interests The authors declare that they have no competing interests. Authors' contributions All authors were involved in conceptualizing the paper and analyzing the data. COI wrote the first draft and DPN reviewed it and made suggestions for improving it. Both of us read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6874/10/33/prepub
Acknowledgements This study was supported by the World Bank (Grant # 713 6587 and Grant # 304 406-29), the Wellcome Trust grant # GR 078530M, and the Hewlett Foundation support grant # 2006-8376.
CC BY
BMC Womens Health. 2010 Dec 1; 10:33
PMC3014867
21126342
Background The small size of nanoparticles (NPs), and the consequent increase in their surface area-to-volume ratio, gives rise to the special properties for which NPs are engineered. It is hypothesized that this also makes nanoparticles chemically more reactive, leading to unexpected and aberrant effects upon interaction with biological systems compared to sub-micron sized particles of the same material. In a number of studies, the toxicity of NPs has been related to their small size and therefore high surface area per unit mass, as well as to the proportionally larger number of atoms on the surface available for chemical reactions [ 1 - 7 ]. When performing a toxicity study, characterization of the test substance is of crucial importance in order to understand the parameters that can be used to predict the hazard of NPs. Primary particle size, shape, surface area, surface chemistry, surface charge, solubility or dissolution rate, purity and agglomeration state can all influence NP toxicity [ 8 ]. In this study, the focus is on the agglomeration state of particles of two different sizes and how this can influence the biological response after administration into the lung. Agglomeration of particles is a basic process that reduces surface free energy by increasing particle size and decreasing surface area. Agglomeration of nanoparticles is due to adhesion of particles to each other by weak forces, leading to (sub)micron-sized entities. In contrast, nanoparticle aggregates are held together by covalent or metallic bonds that cannot be easily disrupted. Both agglomeration and aggregation raise the question of how to evaluate the safety of NPs when they are no longer in the nano range, but are present as larger entities [ 9 ]. 
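The surface-to-mass argument above can be made concrete with a short back-of-the-envelope calculation, which is not part of the original study. For a solid sphere, the surface area-to-volume ratio is 6/d, so halving the diameter doubles the surface per unit mass; the bulk gold density of 19.3 g/cm³ is an assumed literature value.

```python
GOLD_DENSITY = 19.3e3  # kg/m^3, bulk gold (assumed literature value)

def sphere_surface_metrics(diameter_nm: float):
    """Surface area-to-volume ratio (1/m) and specific surface
    area (m^2/g) for a solid sphere of the given diameter."""
    d = diameter_nm * 1e-9            # diameter in metres
    sa_to_vol = 6.0 / d               # (pi*d^2) / (pi*d^3/6) = 6/d
    ssa = 6.0 / (GOLD_DENSITY * d)    # m^2 per kg of gold
    return sa_to_vol, ssa * 1e-3      # convert specific surface to m^2/g

for d in (50, 250):
    ratio, ssa = sphere_surface_metrics(d)
    print(f"{d} nm: SA/V = {ratio:.2e} 1/m, specific surface = {ssa:.2f} m^2/g")
```

At equal mass dose, the 50 nm particles thus present five times the gold surface of the 250 nm particles, which is the geometric basis of the size-dependent toxicity hypothesis discussed here.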
It is not known whether agglomerated particles can become single particles again when introduced into a biological system, while several studies have shown that single particles can form agglomerates in a biological matrix [ 10 , 11 ]. From a toxicological perspective, it is important to determine how the human body deals with single nanoparticles compared to agglomerates. In the workplace, inhalation of engineered nanoparticles is a realistic exposure scenario during production and handling [ 12 ]. Nanoparticles can exist in the form of single (primary) particles as well as agglomerates or aggregates. The primary size of a particle (nano- versus submicron-sized) could also result in a different biological response. It is currently thought that single nanoparticles could pose a greater hazard than larger particles due to their ability to translocate across barriers [ 13 , 14 ] and possibly to bypass the pulmonary immune system, e.g. by less efficient macrophage uptake [ 15 ]. The impact of the agglomeration state of particles on these effects has not been extensively studied. Colloidal gold particles of 50 nm and 250 nm with a citrate coating were chosen as model particles to study effects on pulmonary endpoints, since they can be synthesized with a narrow size distribution as stable suspensions. Toxic effects of gold nanoparticles have been observed in vitro. For example, 14-nm colloidal gold particles were found to cross the cell membrane of dermal fibroblasts in culture and accumulate in vacuoles. The presence of the particles induced abnormal actin filaments and extracellular matrix constructs, thereby adversely affecting cell viability [ 16 ]. Conflicting results have been obtained regarding the cytotoxicity of gold, ranging from non-cytotoxic for 15 nm particles and 3.5 ± 0.7 nm particles capped with lysine and poly-L-lysine [ 17 ] to cytotoxic for 1.2 and 1.4 nm particles [ 18 ]. 
Gold nanoparticles of 2 nm functionalized with cationic groups were moderately toxic, in contrast to particles with anionic groups, which were not toxic [ 19 ]. Gold particles have also been used to determine particle fate as a function of size after different exposure routes. To what extent the agglomeration state influences this fate is largely unknown. In most studies, particles have been administered in an agglomerated state or as a mix of single and agglomerated particles via intravenous injection, intratracheal instillation or inhalation. After inhalation of nanogold particles (with a primary size of 5-8 nm and present in the aerosol as agglomerates of 30-110 nm or at least smaller than 100 nm, respectively), small quantities of particles translocate from the lung to other organs [ 20 , 21 ]. After intravenous injection [ 22 ] and intratracheal instillation [ 23 , 24 ], nano-sized particles can reach more distal regions of the body than their larger counterparts. In this study we determine to what extent the agglomeration state of 50 nm and 250 nm particles influences pulmonary effects in the rat. A single dose of 1.6 mg/kg body weight of spherical gold particles of either 50 nm or 250 nm was intratracheally instilled into the rat lung after dilution with either 1/10 volume of ultrapure water to obtain single particles or 1/10 volume of 10× phosphate buffered saline (PBS) to obtain agglomerates. This method of delivery was chosen over the physiological route of exposure via inhalation because it allows administration of suspensions containing single particles versus agglomerates, more exact dosing, and reduced cost and complexity in exposing the animals. Intratracheal instillation is a widely accepted alternative for delivery of particles to the lung [ 25 , 26 ].
Methods Animals Male WU Wistar-derived rats of 8 weeks of age and around 250 grams of body weight were obtained from Harlan, The Netherlands. Animals were bred under SPF conditions and kept barrier-maintained during the experiment. Conventional feed (Special Diets Services) and tap water were provided ad libitum. Husbandry conditions were maintained according to all applicable provisions of the national law: Experiments on Animals Act. The experiment was approved by an independent ethical committee prior to the study. Experimental set-up A single dose of 1.6 mg/kg bw in 0.5 ml of 50 nm or 250 nm gold particles was delivered into the rat lung by intratracheal instillation under isoflurane anaesthesia. The 50 nm and 250 nm gold suspensions (BBI International, UK) were custom prepared at 2 mg/ml. Solutions containing the same trace elements and reagents but without the 50 and 250 nm gold particles (BBI International, UK) were used as a vehicle control (4.5 volume parts of 50 nm control solution, 4.5 parts of 250 nm control solution and 1 part ultrapure water). Suspensions containing single 50 nm or 250 nm particles were prepared by diluting 9 volume parts of 50 or 250 nm particles with 1 volume part of sterilized ultrapure water. Agglomerated (agg) suspensions were prepared by diluting 9 volume parts of particles with 1 volume part of sterilized 10× PBS. A single dose of 1.6 mg/kg bw quartz (DQ12, crystalline silica) in ultrapure water was used as a positive control. All solutions were sonicated for 30 seconds in an ultrasonic water bath prior to administration to the animals. Animals were sacrificed at 3 hours or 24 hours after administration of the particles, and bronchoalveolar lavage fluid (BALF) and blood were collected. Treatment groups consisted of 6 animals for gold and 3 animals for quartz. 
Characterization of gold particles and quartz Gold particles of 50 nm and 250 nm were purchased in sterile bottles and contained, besides colloidal gold with a citrate shell, trace elements of substances used during synthesis. Endotoxin levels of the 50 and 250 nm gold solutions were determined in a LAL assay; no detectable levels were found. The pH of the 50 and 250 nm solutions and control solutions was measured using indicator strips in the range of pH 1-10 and 6.4-8.0 (Merck). The zeta potential (Zetasizer, Malvern Instruments, UK) was determined in a triplicate measurement of a 20 μg/ml sample as a 10% dilution with ultrapure water or 10× PBS. The samples were first diluted with either ultrapure water or 10× PBS and then further diluted to the desired concentration using ultrapure water. Prior to preparing the quartz solutions, DQ12 was baked at 220°C for 3 hrs to inactivate possible endotoxin on the particle surface. The concentration of gold in the commercially available solutions was determined by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) by MiPlaza Materials Analysis, Philips, Eindhoven, the Netherlands. Three independent samples were digested with aqua regia in a heating block system. The size distributions of the 50 nm and 250 nm gold particles directly after preparing the suspensions for intratracheal instillation were determined in 6 separate measurements using tracking analysis of Brownian motion with a laser-illuminated microscopical technique (LM20, NanoSight Ltd, UK). The size of the 250 nm agglomerates was at the upper limit of the measurement technique, resulting in three representative measurements. Transmission electron microscopy (TEM) analysis A Tecnai 20F electron microscope equipped with a field-emission gun operated at 200 kV was employed to investigate the structure and chemical composition of the gold particles with diameters of 50 and 250 nm. 
Samples were prepared by putting a drop of the gold particle suspension on a holey carbon film applied to a copper grid placed on filter paper. The gold particles were well dispersed on the carbon film. Images were taken using conventional transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM) with a high-angle annular dark-field (HAADF) [ 43 ] and a secondary electron detector. STEM enabled us to perform elemental analysis by energy-dispersive X-ray (EDX) analysis at predefined spots. To assess the crystallographic structure of the gold particles, selected area electron diffraction and lattice imaging were performed. TEM analysis of particles in lung cells in the BALF was performed by pooling cells per exposure group and fixing them in 2% glutaraldehyde in cacodylate buffer (pH 7.2), supplemented with 0.025 mM CaCl 2 and 0.05 mM MgCl 2 . The cells were embedded in gelatin, which was allowed to solidify to obtain "cell-tissue blocks". The blocks were fixed in 4% paraformaldehyde (PFA), postfixed with osmium tetroxide (OsO4) and potassium ferrocyanide (KFeCN), dehydrated with ethanol and embedded in Epon. Ultrathin sections were made, post-stained with lead citrate and uranyl acetate and examined in a FEI Tecnai 12. For the 24 hr groups, at least three of 12 sections were analyzed. Biological effect markers At 3 and 24 hours after intratracheal administration, rats were anesthetized via i.p. injection with a mixture of Ketamine/Rompun and sacrificed by exsanguination via the abdominal aorta. The lungs were perfused with saline to remove all blood from the tissue. After ligation of the left bronchus, the right lung was lavaged (three in-and-out lavages with the same fluid) with a volume of saline corresponding to 27 ml/kg of body weight at 37°C to obtain BALF. 
BALF was analyzed for the following parameters: total cell number (Coulter counter), differential cell count of 400 cells (cytospins), monocyte chemotactic protein-1 (MCP-1, Invitrogen), tumor necrosis factor α (TNF-α, Arcus Biologicals and eBiosciences), interleukin-6 (IL-6, Demeditec Diagnostics and eBiosciences) and macrophage inflammatory protein-2 (MIP-2, Arcus Biologicals). LDH, ALP, albumin and total protein were measured using an autoanalyser (LX20-Pro, Beckman-Coulter, Woerden, the Netherlands) with kits from the same manufacturer. EDTA blood was used to determine the total number of cells and the differential cell count. In citrate plasma, protein levels of von Willebrand factor (vWF, American Diagnostica), fibrinogen (Genway) and C-reactive protein (CRP, Helica Biosystems) were determined. Statistics Data were analyzed by analysis of variance (single-factor ANOVA) and, where appropriate, by a Bonferroni post-hoc analysis (GraphPad Prism). Differential cell count data are not normally distributed; therefore, the Kruskal-Wallis nonparametric test was used. Statistical significance is indicated with a * (P value < 0.05). In all graphs, error bars represent the standard deviation of the mean.
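The nonparametric test named above can be illustrated with a minimal sketch. The neutrophil percentages below are invented for demonstration only (three hypothetical groups of 6 rats, not the study's data), and the H statistic is computed in simplified form without tie correction; the study itself used GraphPad Prism.

```python
# Hypothetical BALF neutrophil percentages (illustrative numbers only).
control = [1.0, 0.8, 1.2, 0.9, 1.1, 1.05]
gold_50 = [2.1, 1.8, 2.5, 1.9, 2.2, 2.4]
quartz  = [9.5, 11.2, 10.1, 12.3, 9.8, 10.7]

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (simplified: assumes no tied values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # 1-based ranks
    n = len(pooled)
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3.0 * (n + 1)

h = kruskal_wallis_h(control, gold_50, quartz)
# With 3 groups, H is compared to the chi-square critical value
# for df = 2 at alpha = 0.05, which is 5.99.
print(f"H = {h:.2f}, significant: {h > 5.99}")
```

Ranking the pooled values sidesteps the normality assumption, which is why the test suits skewed differential cell count data.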
Results Characterization of particles The 50 nm and 250 nm gold particles were custom prepared at a concentration of 2 mg/ml and had a citrate shell for stabilization. Diluting the 50 nm particle solution with ultrapure water did not result in a colour change, and particle size distribution measurements using the NanoSight apparatus indicated a highest peak size (the size distribution peak containing most particles) and mean particle size of around 50 nm (table 1 ), indicating that the citrate shell remained stable. Diluting the 50 nm particles with 1 part 10× PBS and 9 parts 50 nm particle suspension to obtain a physiological solution resulted in a colour change from red to blue, with the highest peak and mean particle size above 100 nm, indicating the formation of agglomerates [ 22 ]. The 50 nm particles in this dilution remained in suspension for approximately 5 minutes. Then the particles started to precipitate at the bottom of the tube, with a clearer aqueous solution on top. The resulting cluster of gold particles could no longer be homogenized by sonication. This effect was not observed for any other particle dilution that was prepared. Therefore, 50 nm particles were mixed with PBS immediately before instillation and the particle size distribution was measured directly afterwards to avoid this phenomenon. The highest peak size and mean particle size of 250 nm particles in ultrapure water were similar, as indicated by a single distribution peak determined in the NanoSight apparatus. Dilution with 10× PBS resulted in larger agglomerates with a broad size distribution (table 1 ). All solutions were electrically stabilized based on zeta potentials (between -40 and -60 mV) for single particle solutions as well as agglomerates (table 1 ). The pH measurements of the 50 nm and 250 nm gold particle solutions as well as the control solution indicated that they were in the physiological range of pH 6.4-7.0. 
According to the supplier, the concentration of the gold particle suspensions should be 2 mg/ml. ICP-MS measurements revealed lower concentrations of 0.9 ± 0.02 mg/ml for 50 nm particles and 1.1 ± 0.02 mg/ml for 250 nm particles. The administered dose was 405 μg/rat, equivalent to 1.6 mg/kg body weight (bw). Using the NanoSight apparatus (table 1 ) as well as TEM (figure 1A and 1B ), the particle size of the 50 nm particles was confirmed at 50 nm, while the 250 nm particles appeared to be smaller, namely 200 nm by tracking analysis versus 210 nm by TEM. The gold particles of 50 nm appeared to be monocrystalline; the diffraction maxima and the lattice images pointed to pure Au (data not shown). Some gold particles of 250 nm consisted of two or three smaller, not intimately connected particles. An example is shown in figure 2A . These composite 250 nm particles did not exhibit the monocrystalline diffraction pattern and spherical or facetted shape of the 50 nm particles (data not shown). Figure 2B represents three gold particles of about 250 nm. The secondary electron image demonstrated the individual particles to be well facetted, while other gold particles were fractured and thus had no symmetrical shape (figure 2C ). The high-angle annular dark-field (HAADF) images of both the 50 and the 250 nm particles showed the presence of areas containing carbon and oxygen on which some silicon-containing areas were present, most likely in the form of SiO 2 (data not shown). The carbon and oxygen may be due to the presence of citrates. Biological response Three and 24 hours after instillation, several parameters were determined in the BALF. Single 250 nm particles induced a significant increase in the percentage of neutrophils after 24 hours (table 2 ). 
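The dosing arithmetic can be checked with a small sketch, which is not from the paper itself: the measured stock concentration, the 9:1 dilution described in the Methods, the 0.5 ml instillate, and the ~250 g body weight from the Animals paragraph are combined to reproduce the reported per-rat dose.

```python
def instilled_mass_ug(stock_mg_per_ml: float, dilution: float = 9 / 10,
                      volume_ml: float = 0.5) -> float:
    """Gold mass per instillation: measured stock concentration,
    9 parts suspension to 1 part diluent, 0.5 ml instilled."""
    return stock_mg_per_ml * dilution * volume_ml * 1000.0  # micrograms

def final_pbs_strength(parts_suspension: int = 9, parts_10x_pbs: int = 1) -> float:
    """PBS strength (in x) after mixing suspension with 10x PBS."""
    return 10.0 * parts_10x_pbs / (parts_suspension + parts_10x_pbs)

# ICP-MS measured ~0.9 mg/ml for the 50 nm stock:
dose_ug = instilled_mass_ug(0.9)
print(f"{dose_ug:.0f} ug per rat")                 # consistent with 405 ug reported
print(f"{dose_ug / 0.25 / 1000:.2f} mg/kg for a ~250 g rat")
print(f"{final_pbs_strength():.1f}x PBS after the 9:1 dilution")
```

The same 9:1 mixing with 10× PBS yields a 1× (physiological) buffer, which is why that dilution both agglomerates the particles and keeps the instillate isotonic.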
The percentage differential cell counts 24 hours after instillation showed a shift towards neutrophil influx at the expense of the percentage of macrophages compared to the negative control (figure 3 ), except for 250 nm agglomerates, where the percentage differential cell count did not change, albeit a trend towards an increase in total cell number and absolute number of macrophages was observed (table 2 ). This neutrophil influx was minor after gold particle treatment and much larger after quartz, which was used as a reference material for inflammation (table 2 and figure 3 ). After quartz instillation, there was a significant increase in total cell numbers as well as macrophages after 3 hours and neutrophils after 24 hours. In addition, an increase in MIP-2 (figure 4A ) and a tendency towards an increase in MCP-1 levels (data not shown) was found, as expected [ 27 ]. TNF-α and IL-6 levels were around the detection limit. After instillation of 250 nm single gold particles, a trend towards an increase was found for IL-6 (figure 4B ) and TNF-α levels (figure 4C ). Agglomerated 50 nm particles resulted in increased levels of TNF-α after 3 hours, albeit not significantly (figure 4C ). Analysis of the damage markers total protein and albumin revealed no significant differences in the BALF after gold instillation (data not shown). The agglomerated 50 nm particles induced an increase in ALP (alkaline phosphatase) activity after 3 hours (figure 5A ). Increased ALP activity in BALF has been associated with type II epithelial cell damage [ 28 ]. Quartz at 1.6 mg/kg bw resulted in cellular damage, as shown by an increase in ALP (figure 5A ) and lactate dehydrogenase (LDH) (figure 5B ) after 24 hours. Light microscopic analysis revealed dark coloured material inside macrophages of animals that were dosed with 50 nm and 250 nm particles, either agglomerated or as single particles. 
In control animals that did not receive any particles, no material was seen (Additional file 1 , Figure S1). For both particle sizes, 60-80 out of 100 macrophages contained this material. Transmission electron microscopy images showed that 50 nm and 250 nm particles were taken up by alveolar macrophages when the particles were administered either as single particles (figure 6A, 6B, 6C, 6D ), as 50 nm agglomerated particles (figure 6F and 6G ) or as 250 nm particles (figure 6H and 6I ). Elemental analysis in HAADF mode confirmed that these were indeed gold particles (figure 6E ). No significant differences in vWF and IL-6 levels in citrate plasma were found (data not shown). Single 250 nm and agglomerated 50 nm gold particles increased fibrinogen levels after 24 hours (figure 7A ). After gold and quartz instillation, CRP levels were elevated compared to the negative control after 3 hours (figure 7B ). Upregulation of these acute phase proteins indicates a general response to tissue injury.
Discussion Size differences as well as the agglomeration state of citrate-stabilized gold particles of 50 nm and 250 nm were determined, and their impact on the biological response in the lung was established in this study. Size is one of the important characteristics for the deposition pattern and fate of particles in the body. Particle size affects the accessibility of target organs, the mode of cellular uptake, endocytosis and the efficiency of particle processing in the endocytic pathway. Here, we show that 50 nm gold particles administered either as a well dispersed single particle suspension or as an agglomerated suspension are taken up by macrophages. The particles end up in the cytoplasm inside vesicles and not in other structures. This has also been observed in another in vivo study for agglomerated 5-8 nm gold particles [ 21 ]. In all cases, nanoparticles were surrounded by a membrane (figure 6 ), indicating that uptake of the nanoparticles occurred by endo- or phagocytosis. Certain vesicular structures contained a single 50 nm NP, whereas others contained more than one. There is evidence that macrophage phagocytic function is size dependent [ 5 ]. Alveolar macrophages phagocytise spherical particles of 1-2 μm most effectively, and uptake has been seen for particles up to 5 μm [ 29 ]. In vitro, particles of 300 nm are less well phagocytised than particles of 5 μm [ 30 ]. Particle uptake in macrophages for nano-sized TiO 2 has been estimated at between 0.06 and 0.12% within 24 hours, compared to micron-sized particles, for which 10% uptake is seen within the first hour [ 31 ] and more than 80% within 24 hours [ 15 ]. The set-up of this study did not allow quantification of the uptake. Although under the light microscope the number of macrophages filled with material seemed comparable, it was more difficult to visualize macrophages containing 250 nm particles after 24 hours than those containing 50 nm particles in the TEM evaluation. 
This has also been seen in our unpublished pilot study, in which we could visualize 250 nm particles in only a few macrophages. The overall findings on biological parameters after gold instillation, based on the pulmonary and blood markers, are summarized in table 3. MIP-2 and MCP-1 were only affected by quartz and are therefore not mentioned in the table. TNF-α and IL-6 levels after quartz instillation were around detection levels, as determined in two independent experiments using ELISA kits from different manufacturers. In the literature, controversial results have been described with respect to increased TNF-α and IL-6 levels after quartz exposure. While some in vivo studies do show a significant increase [32], other studies only show an increase in vitro and not in vivo [33, 34]. A NFκB inflammatory mechanism for DQ12 quartz that is partly TNF-α independent has been postulated [34]. When comparing 50 nm single versus 50 nm agglomerated particles, most changes in biological variables were found after instillation of agglomerated nanoparticles: a significant increase in ALP, fibrinogen and CRP was accompanied by a trend towards a neutrophil influx and an increase in TNF-α levels. Single 50 nm particles induced an increase in CRP levels, and there was a trend towards an increase in neutrophils. Single 250 nm particles increased the number of neutrophils and the levels of fibrinogen and CRP. The response was accompanied by a trend towards an increase in TNF-α and IL-6. Agglomerated 250 nm particles only induced a significant increase in CRP. Moreover, a (statistically non-significant) increase in total cell number and the number of macrophages was noted. TNF-α is mainly produced by macrophages and stimulates phagocytosis. It also acts together with IL-6 (trend towards an increase) to promote inflammation by attracting neutrophils [35].
After increases in circulating TNF-α, the liver is stimulated to produce acute phase proteins such as C-reactive protein (CRP) and fibrinogen. Most effects were found for single 250 nm particles, since all these parameters were affected. The 250 nm agglomerates, 50 nm agglomerates (which have the same mean size as the primary 250 nm particles) and 50 nm single particles did not affect these parameters to the same extent (table 1). The hypothesis that single nanoparticles could be more toxic than larger counterparts, whether formed by agglomeration or of larger primary size, does not apply to the gold particles administered here in a single dose of 1.6 mg/kg bw to the lung. The least effects in this study were seen after administration of single 50 nm particles. At least two reasons have been hypothesized to explain why NPs could be more toxic: (1) there are more reactive atoms on the surface and more surface per unit mass; (2) nanoparticles that are small enough could exert quantum effects due to constrained chemical bonds that are more likely to be disrupted [36]. A recent systematic literature overview by Auffan et al. suggested that inorganic metal and metal oxide nanoparticles with a primary particle size below 20-30 nm are likely to show different chemical properties not seen in the bulk material [37]. In the case of gold, catalytic activity has been observed for particles smaller than 10 nm [38, 39]. The 50 nm nanoparticles used in this study are therefore not likely to induce increased biological responses due to enhanced chemical reactivity compared with the submicron-sized gold particles. When the particles were suspended in PBS instead of ultrapure water, our study showed a 2-4-fold increase in overall particle size. Particle agglomerate and aggregate formation in physiological media such as PBS has been observed for a number of different types of particles smaller than 100 nm, resulting in entities that were 3-6 times larger [11].
These effects are influenced by pH, ionic strength, and the different types of ions in aqueous suspensions [40]. Agglomerates are held together by weak forces such as van der Waals forces, electrostatic interactions or surface tension. After 5 minutes, 50 nm particles assembled into large precipitates that could not be brought back into suspension by, for example, sonication. The gold nanoparticles have a negative surface charge as a result of the weakly bound citrate coating [41]. Gold colloids are vulnerable to agglomeration because the electrostatic repulsive force is compensated by the high ionic strength of phosphate buffered saline (PBS) [42]. In the present study we carefully avoided this phenomenon by applying the suspensions to the animals, and using them for chemical analyses, immediately after preparation. Extensive characterization of the particles revealed that the size of the 50 nm particles corresponds to the manufacturer's description. The 250 nm particles turned out to be significantly smaller, and some were built up from several nano-sized particles, whereas the 50 nm particles were monocrystalline. It proved difficult to obtain gold particles with exactly the same characteristics with only the primary size as a variable. Due to differences in the production process, the 250 nm particles may have a different appearance and build-up compared with the 50 nm particles, as was shown by electron microscopy techniques.
Conclusions Both single and agglomerated 50 nm and 250 nm particles generate a mild inflammatory reaction after intratracheal instillation, as indicated by small increases in inflammatory cells, pro-inflammatory cytokine production or acute phase protein expression. The effects are least for single 50 nm gold particles. Both agglomerated and single nanoparticles were taken up by macrophages. Extensive particle characterization revealed that primary particle size, concentration of the gold suspension and particle purity are important features to check, since these characteristics may deviate from the manufacturer's description. The hypothesis was that the lung might deal differently with agglomerated and single citrate-stabilized gold nanoparticles of different sizes after intratracheal instillation, but there appear to be no major differences. We conclude that single 50 nm gold particles do not pose a greater acute hazard than their agglomerates or slightly larger gold particles when using pulmonary inflammation as a marker for toxicity.
Background Nanoparticle (NP) toxicity testing comes with many challenges. Characterization of the test substance is of crucial importance, and in the case of NPs, the agglomeration/aggregation state in physiological media needs to be considered. In this study, we have addressed the effect of agglomerated versus single-particle suspensions of nano- and submicron-sized gold on the inflammatory response in the lung. Rats were exposed to a single dose of 1.6 mg/kg body weight (bw) of spherical gold particles with geometric diameters of 50 nm or 250 nm, diluted either in ultrapure water or in phosphate buffered saline (PBS). A single dose of 1.6 mg/kg bw DQ12 quartz was used as a positive control for pulmonary inflammation. Extensive characterization of the particle suspensions was performed by determining the zeta potential, pH, gold concentration and particle size distribution. Primary particle size and particle purity have been verified using transmission electron microscopy (TEM) techniques. Pulmonary inflammation (total cell number, differential cell count and pro-inflammatory cytokines), cell damage (total protein and albumin) and cytotoxicity (alkaline phosphatase and lactate dehydrogenase) were determined in bronchoalveolar lavage fluid (BALF), and acute systemic effects were determined in blood (total cell number, differential cell counts, fibrinogen and C-reactive protein), 3 and 24 hours post exposure. Uptake of gold particles by alveolar macrophages was determined by TEM. Results Particles diluted in ultrapure water are well dispersed, while agglomerates are formed when diluting in PBS. The particle size of the 50 nm particles was confirmed, while the 250 nm particles appear to be 200 nm using tracking analysis and 210 nm using TEM. No major differences in pulmonary and systemic toxicity markers were observed after instillation of agglomerated versus single gold particles of different sizes.
Both agglomerated and single nanoparticles were taken up by macrophages. Conclusion Primary particle size, gold concentration and particle purity are important features to check, since these characteristics may deviate from the manufacturer's description. Suspensions of well dispersed 50 nm and 250 nm particles, as well as their agglomerates, produced very mild pulmonary inflammation at the same mass-based dose. We conclude that single 50 nm gold particles do not pose a greater acute hazard than their agglomerates or slightly larger gold particles when using pulmonary inflammation as a marker for toxicity.
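As a back-of-the-envelope illustration of the mass-based dosing described in this study, the absolute instilled mass per animal follows directly from the dose. The rat body weight used below is an assumed typical value, not a figure reported in this article:

```python
# Sketch: absolute instilled mass for a mg/kg bw dose.
# The 0.25 kg body weight is an assumed typical rat weight (hypothetical),
# not taken from the study.
def instilled_mass_mg(dose_mg_per_kg: float, body_weight_kg: float) -> float:
    """Return the absolute mass (mg) delivered at a given mg/kg dose."""
    return dose_mg_per_kg * body_weight_kg

dose = 1.6          # mg/kg bw, used for both the gold suspensions and DQ12 quartz
assumed_bw = 0.25   # kg, hypothetical rat body weight

print(instilled_mass_mg(dose, assumed_bw))  # 0.4 mg gold per animal
```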
Competing interests The authors declare that they have no competing interests. Authors' contributions IG designed the study. IG, JAP, LJJF and JWG carried out the study. IG collected, analyzed, interpreted data and drafted the manuscript. JAP, JWG, and EHJMJ generated and interpreted data. WHdeJ and FRC contributed to the study design and writing of the manuscript. All authors read and approved the final manuscript. Supplementary Material
Acknowledgements We would like to thank J. Quik from the RIVM, Bilthoven, the Netherlands for valuable assistance with zeta potential measurements, P. Krystek from MiPlaza Materials Analysis, Philips, Eindhoven, the Netherlands for gold concentration measurements, K. Vocking for excellent assistance with the cellular electron microscopy, D.P.K. Lankveld for relevant input on the study design, D.L.A.C. Leseman, R.F. Vlug, J.C. Strootman, P.K. Beekhof and A.J.F. Boere for excellent technical assistance, and R. Schins for providing the DQ12 quartz.
Part Fibre Toxicol. 2010 Dec 2; 7:37
Introduction Over the past decade, the definition of nanoparticles has been controversial. Nanoparticles are commonly defined as objects with a diameter of less than 100 nm, but no clear size cut-off exists, and this usual boundary does not appear to have a solid scientific basis. Other definitions of nanoparticles have been proposed, and the most recent proposal [1] is based on surface area rather than size (a nanoparticle should have a specific surface area > 60 m 2 /cm 3 ), thus reflecting the critical importance of this parameter in governing the reactivity and toxicity of nanomaterials. Physico-chemical properties that may be important in understanding the toxic effects of nanomaterials include primary particle size, agglomeration/aggregation state, size distribution, shape, crystal structure, chemical composition, surface chemistry, surface charge, and porosity. Aspects of these properties have been discussed in several reviews of nanotoxicology [2-4]. Silica is the common name for materials composed of silicon dioxide (SiO 2 ) and occurs in crystalline and amorphous forms. Crystalline silica exists in multiple forms. Quartz, and more specifically α-quartz, is a widespread and well-known material. Upon heating, α-quartz is transformed into β-quartz, tridymite and cristobalite. Porosil is the family name for porous crystalline silica. Quartz exists in natural and synthetic forms, whereas all porosils are synthetic. Amorphous silica can be divided into natural specimens (e.g., diatomaceous earth, opal and silica glass) and human-made products. The application of synthetic amorphous silica, especially silica nanoparticles (SNPs), has received wide attention in a variety of industries. SNPs are produced on an industrial scale as additives to cosmetics, drugs, printer toners, varnishes, and food.
In addition, nanosilica is being developed for a host of biomedical and biotechnological applications such as cancer therapy, DNA transfection, drug delivery, and enzyme immobilization [5-9]. Barik et al. [10] recently reviewed the impact of nanosilica on basic biology, medicine, and agro-nanoproducts. With the growing commercialization of nanotechnology products, human exposure to SNPs is increasing, and many aspects related to the size of these nanomaterials have raised concerns about safety [11]. Until recently, most research focused on silica particles of 0.5 to 10 μm, mainly in crystalline forms, but nanosilica may have different toxicological properties compared with larger particles. The unique physico-chemical properties of nano-sized silica that make it attractive to industry may present potential hazards to human health, including an enhanced ability to penetrate intracellular targets in the lung and systemic circulation. Biocompatibility is a critical issue for the industrial development of nanoparticles [12, 13]. Even though no acute cytotoxicity has been observed or reported, the uptake of the nanoparticles by cells may eventually lead to perturbation of intracellular mechanisms. The ability of silica-coated nanomaterials to penetrate the blood-brain barrier also strongly suggests that extensive studies are required to clarify the potential chronic toxicity of these materials [14]. A number of SNPs have recently been shown to cause adverse health effects in vitro and in vivo (discussed later in this review). However, most of the studies have used particles poorly characterized in terms of their composition and physico-chemical properties. The distinct physico-chemical properties of nanoparticles indeed determine their interaction with and within the cell, and even subtle differences in such properties can modulate the toxicity and modes of action.
The results of toxicity studies then become difficult to interpret and compare, and, as a result, drawing appropriate conclusions is nearly impossible. Although SNPs could certainly provide benefits to society, their interaction with biological systems and potential toxic effects must be carefully addressed. In this review, we discuss silica materials with a special attention to the physico-chemical properties that can affect their potential interaction with biological systems. We aim to provide an overview of the recent in vitro and in vivo investigations of the toxicity of nanosilica, both in crystalline and amorphous forms, rather than review the toxicity of micron-sized silica and quartz. A summary of the present knowledge on the potential toxic effects of nano-sized silica particles is needed, because their toxicological pattern appears distinct from that of micron-sized silica particles.
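The surface-area-based definition of a nanoparticle quoted in the introduction (specific surface area > 60 m 2 /cm 3 ) coincides, for ideal dense spheres, with the conventional 100 nm cut-off, because a sphere's surface-to-volume ratio is 6/d. A minimal sketch of that equivalence (the helper name is ours; the calculation assumes smooth, non-porous spheres):

```python
def vssa_m2_per_cm3(diameter_nm: float) -> float:
    """Volume-specific surface area of a dense, smooth sphere in m^2/cm^3.

    S/V = 6/d for a sphere. With d in nm:
    6/d [1/nm] = (6/d) * 1e7 [1/cm] = (6/d) * 1e7 cm^2/cm^3
               = (6/d) * 1e3 m^2/cm^3.
    """
    return 6.0 / diameter_nm * 1e3

# A 100 nm sphere sits exactly at the 60 m^2/cm^3 threshold;
# smaller spheres exceed it, larger ones fall below it.
print(vssa_m2_per_cm3(100))  # 60.0 -- exactly at the threshold
print(vssa_m2_per_cm3(50))   # 120.0
print(vssa_m2_per_cm3(250))  # ~24, below the threshold
```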
Conclusions Silica or silicon dioxide (SiO 2 ) is, in many forms, abundantly present in our natural environment. The adverse health effects, including lung cancer, of naturally occurring crystalline silica such as quartz and cristobalite have been thoroughly documented in occupational settings. Naturally occurring amorphous silica such as diatomaceous earth is considered less harmful. Most of the synthetic (manufactured) silicas used in a large variety of applications are amorphous. For silica in general, the property most significantly linked to the toxicological potential is crystallinity. For micron-sized crystalline silica, oxidative stress and, linked to it, oxidative DNA and membrane damage are probably the most important mechanisms involved in the inflammogenic and fibrogenic activities (reviewed by [60]) and/or carcinogenic activity [39, 165], for example. These mechanisms do not apply to amorphous silica, which has therefore been far less studied. Moreover, the adverse health effects of biogenic (natural) amorphous silica are often attributed to a certain degree of contamination with crystalline silica [49]. Synthetic amorphous silica (colloidal silica, fumed silica and precipitated silica) is not involved in progressive fibrosis of the lung [52, 53]; however, high doses of amorphous silica may result in acute pulmonary inflammatory responses [54]. Interest in using SNPs is growing worldwide, especially for biomedical and biotechnological applications such as cancer therapy, DNA transfection, drug delivery, and enzyme immobilization [5-9]. In general, SNPs are synthetic, which gives them an advantage over natural silica: they contain fewer or no impurities, and their physico-chemical properties are known and well controlled during production.
Exposure to SNPs during the production process and their downstream use is probably minimal for sols and gels, because the nanoparticles are trapped/immobilized within their matrix. However, the inhalation potential of low-density fumed silica powders or freeze-dried nanoparticles may be high without adequate precautions. Results of a growing number of in vitro studies indicate that the particle surface area may play a crucial role in the toxicity of silica [75, 166]. The cytotoxic activity of silica particles can be related to their surface interfacing with the biological milieu rather than to particle size or shape [75]. Surface silanol groups are directly involved (as shown in vitro) in hemolysis [76-78] and in alveolar epithelial cell toxicity [79, 80]. This observation indirectly links hydrophilicity to cellular toxicity [80, 81]. The size and surface physico-chemical features of SNPs contribute decisively to the biological effects of SiO 2 nanoparticles. The complexity of protein-SNP interactions should not be underestimated; these interactions appear to be affected by the size of SNPs as well [167-171]. The effects of other physico-chemical properties of SNPs on health, such as porosity, chemical purity, surface chemistry and solubility, are less well studied, and therefore no definite conclusions can be formulated (a summary of the data can be found in Table 2). Comparison of published studies leads to the conclusion that even a small modification of the surface can result in a more or less marked change in a biological effect [2, 3, 172]. A few in vitro studies have emphasized that the response to SNPs varies by cell type [137, 140, 141]. Considering the use of SNPs for medical applications, biocompatibility and toxicokinetics need to be documented in great detail because, despite no observation of acute (cyto)toxicity, the uptake of the particles by cells may eventually lead to perturbation of intracellular mechanisms.
For instance, the ability of silica-coated nanomaterials to penetrate the blood-brain barrier supports the urgent need for extensive studies to clarify the potential chronic toxicity of these materials [14]. The successful use of nanoparticles in the clinic requires exhaustive and elaborate in vivo studies [155]. Of note, the toxicity of SNPs can depend not only on the material itself but also on the administration route to the living body, as was shown by Hudson et al. [156]: subcutaneous injection presented good biocompatibility, whereas intraperitoneal and intravenous injection led to fatal outcomes. Unfortunately, only limited short-term and no chronic in vivo studies of SNPs are available (a summary of the data is found in Table 3), and the current data do not clarify whether amorphous SNPs -- showing augmented cytotoxicity and presumably possessing oxidative DNA-damaging potential -- are less or more harmful compared with micron-sized silica. Determining the association between results from in vitro and in vivo toxicity assessments is difficult; however, the common feature seems to be cytotoxicity and an inflammatory response after exposure to SNPs. To conclude, the available studies of the toxicity of SNPs are relatively few, especially compared with the vast number of studies of titanium dioxide or carbon nanotubes. Besides the relative lack of information on the safety or hazards of SNPs, often conflicting evidence is emerging in the literature as a result of a general lack of standard procedures, as well as insufficient characterization of nanomaterials in biological systems. For all studies, a crucial issue remains the careful, accurate characterization of particle size and morphologic features (especially in the biological media used for the experimental set-up), composition, particle surface area and surface chemistry [173].
Moreover, equally important to the physico-chemical characterization of the material is the control of assays and assay conditions [174, 175]. Only with a complete description of the NP and the assay can the results of reported studies be compared with those of other studies conducted with similar nanomaterials [159, 176]. Until now, the health effects of SNPs have mainly been studied for exposure via the respiratory tract, after acute or sub-acute exposure; other exposure routes (e.g. blood, skin, gastrointestinal tract) should also be examined. Studies of chronicity are needed to supplement and verify the existing data. The available information is insufficient to clearly identify and characterize the health hazards SNPs pose, and defining the appropriate conditions for the safe use of these materials is currently not possible.
Silica nanoparticles (SNPs) are produced on an industrial scale and are an addition to a growing number of commercial products. SNPs also have great potential for a variety of diagnostic and therapeutic applications in medicine. Contrary to the well-studied crystalline micron-sized silica, relatively little information exists on the toxicity of its amorphous and nano-size forms. Because nanoparticles possess novel properties, kinetics and unusual bioactivity, their potential biological effects may differ greatly from those of micron-size bulk materials. In this review, we summarize the physico-chemical properties of the different nano-sized silica materials that can affect their interaction with biological systems, with a specific emphasis on inhalation exposure. We discuss recent in vitro and in vivo investigations into the toxicity of nanosilica, both crystalline and amorphous. Most of the in vitro studies of SNPs report results of cellular uptake, size- and dose-dependent cytotoxicity, increased reactive oxygen species levels and pro-inflammatory stimulation. Evidence from a limited number of in vivo studies demonstrates largely reversible lung inflammation, granuloma formation and focal emphysema, with no progressive lung fibrosis. Clearly, more research with standardized materials is needed to enable comparison of experimental data for the different forms of nanosilicas and to establish which physico-chemical properties are responsible for the observed toxicity of SNPs.
Synthesis & Characterization of Silica Materials Classification of natural and synthetic silica materials "Silica" is the name given to materials with the chemical formula of silicon dioxide, SiO 2 . Silicas can be amorphous or crystalline, porous or non-porous (dense), anhydrous or hydroxylated [ 15 ], regardless of their natural or synthetic nature. In a silica material, the silicon atom is in tetrahedral coordination with 4 oxygen atoms. Theoretically, an infinite variety of 3-D-ordered structures can be built from oxygen-sharing silicate tetrahedra. The number of known crystalline silica materials is limited, which leaves much room for research and development. In amorphous silica, the tetrahedra are randomly connected. In nature, amorphous silica can have different origins. Silica can be condensed from vapors emitted in volcanic eruptions. Natural silica can also be deposited from supersaturated natural water or polymerized in living organisms (biogenic silica). These amorphous biogenic silicas can be found as isolated particles, skeletal structures or surface elements in different living organisms. Many microcrystalline silica minerals such as flint, chert and chalcedony are derived from biogenic silica after crystallization by compaction. Kieselguhr (diatomaceous earth) occurs at various stages of transformation [ 15 ] and therefore often exhibits both crystalline and amorphous silica constituents. Physico-chemical characteristics of synthetic silica materials related to toxicity The silica materials presenting a toxicological hazard to human health are mainly synthetic materials and natural quartz. The physico-chemical properties of silica materials largely depend on the synthetic procedures used for their preparation. Therefore, we will briefly discuss silica synthesis processes. 
Silica synthesis Silica is mainly synthesized from an aqueous solution, with dissociated monomeric silicic acid, Si(OH) 4 , or from a vapor of a silicon compound such as silicon tetrachloride. Waterglass is a concentrated alkaline sodium silicate solution with anhydrous composition corresponding to Na 2 SiO 3 . It is the most common reagent for silica production in aqueous solution. Waterglass is a sodium salt of silicic acid that forms silicic acid upon acidification. When the concentration of Si(OH) 4 exceeds about 2 × 10 -3 M, condensation to polysilicic acids (Figure 1 ) occurs, thus leading to the formation of colloidal silica particles [ 15 ]. The polymerization and the formation of silica can be represented as follows: [Si n O 2n-nx/2 (OH) nx ] + m Si(OH) 4 → [Si n+m O 2n-nx/2+2m(2-p) (OH) nx+4(m-p) ] + 2pm H 2 O, where n = number of silicon atoms in a polysilicic acid molecule or particle, x = number of OH groups per silicon atom in the polymer (0 ≤ x ≤ 3), m = number of monomeric silicic acid molecules added to the polymer, and p = fraction of the hydroxyl groups per monomeric silicic acid molecule that are converted to water during the polymerization reaction [ 15 ]. Amorphous silica particles are formed by polymerization of monomers in an aqueous solution supersaturated with silicic acid. Various silica materials are produced in liquid-phase processes (Figure 2 ). Colloidal silica or silica sol is most often produced in a multi-step process in which the alkaline silicate solution is partially neutralized with a mineral acid. Alternatively, this pH neutralization can be achieved by electrodialysis. The resulting silica suspension is stabilized by pH adjustment. Finally, a solid concentration of up to 50 wt% is reached by water evaporation. Silica sol nanoparticles show a perfect spherical shape and identical size as a result of extensive Ostwald ripening [ 15 ].
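For readability, the polymerization reaction above can be typeset; this is a direct transcription of the same equation, with the symbols n, x, m and p as defined in the text:

```latex
\[
\bigl[\mathrm{Si}_{n}\mathrm{O}_{2n - nx/2}(\mathrm{OH})_{nx}\bigr]
  + m\,\mathrm{Si(OH)}_{4}
  \longrightarrow
\bigl[\mathrm{Si}_{n+m}\mathrm{O}_{2n - nx/2 + 2m(2-p)}(\mathrm{OH})_{nx + 4(m-p)}\bigr]
  + 2pm\,\mathrm{H_{2}O}
\]
```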
Stöber silica sol is prepared by controlled hydrolysis and condensation of tetraethylorthosilicate (TEOS) in ethanol to which catalytic amounts of water and ammonia are added. The Stöber procedure can be used to obtain monodisperse spherical amorphous silica particles with tunable size and porosity [ 16 ]. Silica gel is obtained by destabilizing silica sol. Silica gel is an open 3-D network of aggregated sol particles. The pore size is related to the size of the original silica sol particles composing the gel. Precipitated silica is formed when a sol is destabilized and precipitated. Ordered mesoporous silica is obtained by supramolecular assembly of silica around surfactant micelles. Typical surfactant molecules are amphiphilic polymers such as triblock copolymers or quaternary alkylammonium compounds. These organic supramolecular templates are evacuated from the mesopores, typically via a calcination step. Calcination is a controlled combustion process leading to oxidation and decomposition of the template molecules into small volatile products such as NO x , CO 2 and H 2 O, which can leave the pores. The diameter of the mesopores (2-50 nm) is determined by the type of surfactant applied [ 17 , 18 ]. A completely different synthesis route to amorphous silica starts from SiCl 4 in the vapor phase. Silicon tetrachloride is oxidized in a hydrogen flame at temperatures exceeding 1000°C and polymerized into amorphous non-porous SNPs. This nanopowder has a very low bulk density and a high specific surface area, typically 200 to 300 m 2 /g. This material is called pyrogenic or fumed silica, referring to the special synthesis conditions [ 15 ]. The synthesis of dense crystalline silica such as quartz from aqueous solution is a slow process requiring heating of the solution to accelerate the formation process in a so-called hydrothermal synthesis [ 15 ]. Alternatively, under high pressure, amorphous silica can be transformed into crystalline material by microcrystallization.
The appearance of quartz ranges from macroscopic crystals to microcrystalline powders. Large crystals are grown at high temperature and pressure in industry. Smaller quartz crystals are conveniently obtained by grinding large crystals. Alpha-quartz is formed under moderate temperature and pressure conditions and is the most abundant form of quartz. At temperatures exceeding 573°C, α-quartz can transform into β-quartz [ 19 ]. At atmospheric pressure and temperatures higher than 870°C, quartz is transformed into tridymite and at temperatures more than 1470°C into cristobalite [ 15 , 20 ]. These high-temperature polymorphs of quartz have the same elemental composition but a different crystal structure and can persist metastably at lower temperatures. Dense and porous crystalline materials can be distinguished by framework density. The framework density is conveniently defined as the number of tetrahedrally coordinated atoms (T-atoms) per nm 3 . For dense structures, such as quartz, tridymite and cristobalite, values of 22 to 29 T-atoms/nm 3 are common, whereas for porosils belonging to the zeolite material family, as few as 12.1 T-atoms/nm 3 are present [ 21 ]. The framework structure of a porosil is denoted with a 3-letter code. Descriptions are available in the Atlas of Zeolite Framework Types [ 22 ]. Porosils are crystallized in aqueous media in the presence of organic molecules that act as porogens or template molecules defining the size and shape of the pores. Their evacuation is typically achieved through calcination. Among the porosils are clathrasils and zeosils [ 23 , 24 ]. Zeosils have cages with windows or channels of a sufficiently free dimension to allow molecules to diffuse in and out, a property known as molecular sieving [ 25 ]. Clathrasils have cages with windows that are delineated with a 6-membered ring of SiO units, thus presenting a free aperture of barely 0.28 nm. Even a molecule as small as oxygen has no access to the cavities of a clathrasil. 
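The framework densities quoted above can be checked against crystallographic data. As an illustrative sketch, the α-quartz value falls inside the cited 22-29 T-atoms/nm 3 range for dense structures; the cell parameters used below (a ≈ 0.4913 nm, c ≈ 0.5405 nm, 3 Si atoms per hexagonal cell) are standard literature values for α-quartz, not figures taken from this review:

```python
import math

def framework_density(t_atoms_per_cell: int, cell_volume_nm3: float) -> float:
    """Framework density: tetrahedrally coordinated atoms (T-atoms) per nm^3."""
    return t_atoms_per_cell / cell_volume_nm3

# Hexagonal unit cell volume: V = a^2 * c * sin(120 deg)
a, c = 0.4913, 0.5405  # nm; assumed standard alpha-quartz lattice parameters
v_cell = a * a * c * math.sin(math.radians(120))

fd = framework_density(3, v_cell)  # alpha-quartz has 3 Si atoms per unit cell
print(round(fd, 1))  # ~26.6 T-atoms/nm^3, within the 22-29 range for dense silicas
```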
The organic template molecules engaged in the crystallization of a clathrasil cannot be removed easily from the pores [ 23 , 24 ]. When heated above 1700°C, any type of silica (amorphous or crystalline) melts. During cooling, the disordered structure is solidified, and a dense amorphous silica glass or vitreous silica is formed [ 15 ]. Physico-chemical properties The properties of silica materials considered essential for their potential toxicity are crystallinity, particle size and morphology, porosity, chemical purity, surface chemistry and solubility [ 26 ]. An overview of the properties of silica materials involved in silica toxicity is provided in Table 1 . Crystallinity In crystalline structures such as quartz and porosils, the arrangement of atoms is ordered in all dimensions. According to the International Union of Pure and Applied Chemistry (IUPAC), the atoms must be arranged periodically with long-range order (at least 10 repeats in all directions) and produce sharp maxima in a diffraction experiment to observe x-ray diffraction (XRD) crystallinity [ 27 ]. The threshold for observing crystallinity depends on the unit cell size (size of the repeated unit in a crystal). For materials with large unit cells, such as porosils, the minimum particle size required is about 10 nanometers to observe a distinct, sharp XRD pattern. Amorphous silica may present some short-range order but lacks long-range order in 3 dimensions and does not exhibit a sharp XRD pattern. Of note, the surface of a crystal represents a discontinuity that can be seen as a defect. With the presence of a less-structured or even partially amorphous rim, crystals may behave like amorphous particles. Thus, particles with an ordering at limited-length scales or with amorphous regions may be classified as amorphous. Particle size and morphology Nanoparticles are obtained by direct synthesis of silica sol [ 15 ] or by crystallization of nano-sized crystals of quartz or porosils [ 25 ]. 
The particle size is determined by the synthesis parameters. Amorphous silica sol particles tend to adopt a spherical shape so as to minimize interfacial surface area. The particle size of commercial silica sols prepared from sodium silicate ranges from 10 to 25 nm (Figure 3, left). Sols with larger primary particles can be prepared from TEOS, for example by Stöber synthesis (Figure 3, middle). Grinding and milling processes reduce particle size. These techniques are most often applied to quartz, silica gel and vitreous silica. The obtained products generally have a broad size distribution. Crystalline particles exhibit crystal planes at the surface, and the morphology of crystalline nanoparticles depends on the crystal class, such as cubic, hexagonal, tetragonal or orthorhombic (Figure 3, right). For all nanomaterials, in aqueous environments, the primary nano-sized silica particles tend to form aggregates.

Porosity

According to IUPAC [ 28 ], pores are classified according to their diameter into micropores (< 2 nm), mesopores (2-50 nm) and macropores (> 50 nm). Amorphous sol particles can be microporous or non-porous (dense). The porosity of Stöber silica can be tuned by adapting the synthesis parameters: decreasing the ratio of water to TEOS promotes particle growth by aggregation of smaller sub-particles, leading to rough particle surfaces with micropores. In contrast, smooth particle surfaces are obtained under conditions with a high ratio of water to TEOS [ 29 ]. Silica gel is a powder with particle size in the micrometer range or larger and is typically mesoporous. Zeosils and clathrasils have characteristic pores and cages in the micropore size range, depending on framework topology. Examples of porosil frameworks are shown in Figure 4 [ 22 ].
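The IUPAC pore-size classes quoted above map directly onto a small classifier (an illustrative Python sketch, not from the source; the thresholds are exactly those cited from reference [ 28 ]):

```python
def pore_class(diameter_nm: float) -> str:
    """IUPAC pore-size classes as quoted in the text [ 28 ]:
    micropores < 2 nm, mesopores 2-50 nm, macropores > 50 nm."""
    if diameter_nm < 2:
        return "micropore"
    if diameter_nm <= 50:
        return "mesopore"
    return "macropore"

print(pore_class(0.28))  # free aperture of a clathrasil cage window
print(pore_class(10))    # typical silica gel pore
print(pore_class(100))
```
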
When the silica is presented as a nanopowder, porosity can be both an intrinsic and an extrinsic characteristic: stacking of the elementary nanoparticles gives rise to an interparticle porosity, which often is difficult to distinguish from the intrinsic intraparticle porosity, especially when dealing with mesoporosity.

Hydrophilic-hydrophobic properties

The hydrophilicity of a silica material increases with the number of silanols, or silicon-bonded hydroxyl groups, capable of forming hydrogen bonds with physically adsorbed water molecules. The chemical formula of silica is represented as SiO 2 .xH 2 O, in which water represents chemical water contained in silanol groups present on the surface of the silica material. These water molecules are not to be confused with crystal water, such as that present in many inorganic salt crystals. The surface chemistry of silica is depicted in Figure 5 . Vicinal hydroxyl groups (one hydroxyl group per tetrahedron) located at mutual distances smaller than 0.3 nm are engaged in hydrogen bonding. Geminal hydroxyls (2 hydroxyl groups per tetrahedron) are considered to occur in minor concentrations. Isolated silanols are positioned too far apart to be engaged in hydrogen bonding. Because of the differing chemistry of these 3 types of silanol groups, they are not all equivalent in their adsorption behavior or chemical reactivity. Vicinal hydroxyls interact strongly with water molecules and are responsible for the excellent water adsorption properties of silica, which are exploited in industrial gas drying operations, for example. The reported concentration of hydroxyl groups on the surface of amorphous silica ranges from 4 to 5 OH/nm 2 [ 12 ]. Compared with amorphous silica, the crystalline forms of silica generally contain a lower concentration of surface hydroxyl groups [ 15 ]. Hydrogen-bonded water molecules are removed when silica is heated at 170°C under atmospheric pressure or at room temperature under vacuum.
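The 4-5 OH/nm² silanol density cited above allows a back-of-envelope estimate of the total silanol count on a single nanoparticle. A minimal sketch (Python, not from the source), assuming a smooth, dense sphere and taking 4.6 OH/nm² as a representative value within the cited range:

```python
import math

def silanols_per_particle(diameter_nm: float, oh_per_nm2: float = 4.6) -> float:
    """Estimate the number of surface silanols on a smooth, dense spherical
    amorphous silica particle. The 4-5 OH/nm^2 range is cited in the text
    [ 12 ]; 4.6 OH/nm^2 is an assumed representative value within it.
    Real particle surfaces (roughness, microporosity) will deviate."""
    surface_nm2 = math.pi * diameter_nm ** 2  # surface area of a sphere
    return oh_per_nm2 * surface_nm2

# A 10-nm particle carries on the order of 1,400 surface silanols:
print(round(silanols_per_particle(10)))
```
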
Colloidal silica, precipitated silica, ordered mesoporous silica and silica gel are hydrophilic because of their high concentration of silanols. Silica gel, for example, can adsorb water in quantities up to 100% of its own weight. Porosils typically are hydrophobic because they lack silanols in the pores of their framework. Silica produced at high temperature, such as pyrogenic and vitreous silica, or calcined at temperatures exceeding 800°C, is almost entirely dehydroxylated. In a dehydroxylation reaction, neighboring silanol groups are condensed into siloxane bonds (Figure 5, bottom) with release of water molecules. Some isolated silanol groups may persist on the surface [ 15 ]. Because hydrogen bonding on siloxanes is unfavorable, dehydroxylated silica is hydrophobic. Grinding of hydrophobic bulk materials such as quartz and vitreous silica generates silicon and oxygen radicals and surface charges. These charges render the surface more hydrophilic [ 19 , 30 ].

Solubility

The dissolution and precipitation of silica in water chemically involve hydrolysis and condensation reactions, respectively, catalyzed by OH - ions (Figure 1 ). For micrometer-sized nonporous amorphous silica, the equilibrium concentration of Si(OH) 4 at 25°C in water corresponds to 70 ppm at pH 7. The silica solubility depends on the surface curvature of the (nano)particles. SNPs and nanoporous silica show enhanced equilibrium solubility, of 100-130 ppm [ 12 ]. According to Vogelsberger et al. [ 31 ], the solubilization of amorphous SNPs in physiological buffer at 25°C is accelerated because of the large surface area exposed. The solubility equilibrium is reached only after 24 to 48 h. Crystalline silica such as quartz has a much lower equilibrium solubility, of 6 ppm [ 15 ]. In summary, when dealing with silica, physico-chemical properties such as amorphous versus crystalline nature, porosity, particle size and degree of hydroxylation must be specified.
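The equilibrium solubilities above can be combined with a simple kinetic picture. The sketch below (Python, not from the source) assumes first-order approach to equilibrium; both the model form and the rate constant are illustrative assumptions (the text reports only that equilibrium is reached after 24-48 h [ 31 ]), and only the ppm values are taken from the text:

```python
import math

# Equilibrium Si(OH)4 solubilities at 25 °C quoted in the text (ppm):
EQUILIBRIUM_PPM = {
    "amorphous, micron-sized nonporous (pH 7)": 70,       # [ 12 , 15 ]
    "amorphous nanoparticles / nanoporous": (100, 130),   # [ 12 ]
    "quartz": 6,                                          # [ 15 ]
}

def dissolved_ppm(c_eq_ppm: float, hours: float, k_per_h: float = 0.1) -> float:
    """First-order approach to the equilibrium concentration:
    C(t) = C_eq * (1 - exp(-k * t)). Both the first-order form and the
    assumed k = 0.1 h^-1 are illustrative; with this k, the concentration
    reaches about 95% of equilibrium after 30 h, broadly consistent with
    the 24-48 h equilibration reported for amorphous SNPs [ 31 ]."""
    return c_eq_ppm * (1.0 - math.exp(-k_per_h * hours))
```

For instance, `dissolved_ppm(70, 30)` gives roughly 66.5 ppm, i.e. about 95% of the 70-ppm equilibrium value for bulk amorphous silica.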
An overview of silica materials described in the scientific literature and in the research and development environment is provided in Table 1 .

Toxicity Of Silica

Background

Health effects of silica and epidemiological studies

Until recently, toxicological research into silica particles focused mainly on "natural" crystalline silica particles of 0.5 to 10 μm (coarse or fine particles). This research has been driven by the clear association between occupational inhalation exposure and severe health effects, mainly on the respiratory system. The typical lung reaction induced by chronic inhalation of crystalline silica is silicosis, a generally progressive fibrotic lung disease (pneumoconiosis), exemplified by the development of silicotic nodules composed of silica particles surrounded by whorled collagen in concentric layers, with macrophages, lymphocytes, and fibroblasts in the periphery. Epidemiologic studies have found that silicosis may develop or progress even after occupational exposure has ended; therefore, above a given lung burden of particles, silicosis was suggested to progress without further exposure [ 32 - 34 ]. Calvert et al. [ 35 ] recently reported an association of crystalline silica (mainly quartz) exposure with silicosis, as well as lung cancer, chronic obstructive pulmonary disease (COPD), and pulmonary tuberculosis. The carcinogenicity of quartz and cristobalite has been shown in several epidemiological studies [ 36 - 38 ]. In 1997, the International Agency for Research on Cancer (IARC) classified some crystalline silica polymorphs (quartz and cristobalite) in group 1 (sufficient evidence for carcinogenicity in experimental animals and in humans), whereas amorphous silica (silicon dioxide without crystalline structure) was classified in group 3 (inadequate evidence for carcinogenicity) [ 39 ]. This classification has recently been confirmed [ 40 ].
Checkoway and Franzblau [ 41 ] reviewed the occupational epidemiologic literature on the interrelations among silica exposure, silicosis and lung cancer and concluded that the appearance of silicosis is not necessarily required for the development of silica-associated lung cancer. Hnizdo and Vallyathan [ 42 ] suggested that chronic exposure to levels of crystalline silica dust that do not cause disabling silicosis may cause chronic bronchitis, emphysema, and/or small airway disease leading to airflow obstruction, even in the absence of radiological evidence of silicosis. Evidence has linked silica exposure to various autoimmune diseases (systemic sclerosis, rheumatoid arthritis, lupus, chronic renal disease), as reviewed by Steenland and Goldsmith [ 43 ]. A study by Haustein et al. [ 44 ] reported on silica dust-induced scleroderma. Amorphous silica has been far less studied than the crystalline form [ 39 ]. Warheit [ 45 ] briefly described the inhalation toxicity data related to amorphous silica particulates and concluded that some forms of amorphous silica are more potent in producing pulmonary effects than others. He also emphasized the great need for adequate toxicological testing of many of these amorphous silicates, given their importance in commerce and widespread potential for exposure. Workers exposed to precipitated or fumed silica did not exhibit pneumoconiosis [ 46 , 47 ], but evidence of pulmonary fibrosis was reported in workers exposed to amorphous silica dust produced as a byproduct of silicon metal production [ 48 ]. Merget et al. [ 49 ] reviewed the current knowledge of the health effects of a wide range of amorphous forms of silica in humans. The major problem in the assessment of health effects of biogenic amorphous silica is its contamination with crystalline silica. This problem applies particularly to the well-documented pneumoconiosis among diatomaceous-earth workers.
Although the data are limited, a risk of chronic obstructive bronchitis, COPD or emphysema cannot be excluded [ 49 ]. Animal inhalation studies involving synthetic amorphous silica (colloidal silica, fumed silica and precipitated silica) showed at least partially reversible inflammation [ 50 , 51 ], granuloma formation and emphysema, but no progressive fibrosis of the lungs [ 52 , 53 ]. However, high doses of amorphous silica may result in acute pulmonary inflammatory responses, which could conceivably trigger long-term effects, despite a low biopersistence of the particles [ 54 ]. The debate on the health effects of micron-sized crystalline or amorphous silica is beyond the scope of this article. Readers are referred to other publications [ 35 - 38 , 41 , 55 - 57 ].

Mechanisms of toxic action

As mentioned, most of the toxicological research into silica has focused on crystalline silica particles of 0.5 to 10 μm (coarse or fine particles). Despite the relatively large number of available studies, the mechanisms of crystalline silica toxicity at the cellular and molecular levels are still unclear, and whether any single mechanism underlies all the above-mentioned diseases induced by these particles is uncertain [ 43 ]. However, severe inflammation following exposure to silica particles appears to be a common initiating step [ 58 , 59 ]. The crucial role of reactive oxygen species (ROS) in the inflammatory, fibrogenic and carcinogenic activity of quartz is well established [ 60 , 61 ]. Oxidative membrane and DNA damage are considered the most important mechanisms involved in the health effects of micron-sized crystalline silica. A few of the numerous reports clearly demonstrate these findings: ROS generated by the silica surface can induce cell membrane damage via lipid peroxidation that may subsequently lead to increased cellular permeability [ 62 ], perturbation of intracellular calcium homeostasis [ 63 ] and alterations in signaling pathways. Schins et al.
and Fanizza et al. [ 64 , 65 ] demonstrated that respirable quartz particles induce oxidative DNA damage in human lung epithelial cells. Li et al. [ 66 , 67 ] demonstrated that micron-sized quartz particles induce •OH generation through an iron-dependent mechanism. A close association between •OH and iron ion concentration has been reported for amorphous silica particles [ 66 , 67 ]. The study of Ghiazza et al. [ 30 ] indicates that crystallinity might not be a necessary prerequisite to make a silica particle toxic; both quartz and vitreous silica showed stable surface radicals and sustained release of HO• radicals. When tested on macrophages, vitreous silica and pure quartz showed a remarkable potency in cytotoxicity, release of nitrite and tumor necrosis factor α (TNF-α) production, suggesting a common behavior in inducing oxidative stress [ 30 ]. Ding et al. [ 68 ] discuss the molecular mechanisms of silica-induced lung injuries with a focus on NF-kB activation, generation of cyclooxygenase II and tumor necrosis factor α (TNF-α). The review of Castranova [ 69 ] summarizes evidence that in vitro and in vivo exposure to crystalline silica results in activation of the NF-kB and AP-1 signaling pathways. In vitro and in vivo animal studies, as well as investigations in humans, strongly support the role of macrophage products in the development and progression of silicosis [ 70 ]. Such products include a large panel of cytokines [ 71 ], with TNF-α seeming to determine the development of silica-induced pulmonary fibrosis [ 72 ]. In addition, recent evidence implicates interleukin 1β (IL-1β) and its activation by the NALP-3 inflammasome [ 73 ]. A large body of experimental work in the past 20 years has shown that 2 main factors seem to govern the hazardous nature of crystalline silica: particle surface reactivity and the form of silica [ 74 ]. Fenoglio et al. [ 75 ] evaluated these factors systematically, studying synthetic quartz samples differing only in size and shape.
Cytotoxicity appeared to be primarily governed by the form of the particles and the extent of the exposed surface. Several studies indicate that the surface silanol groups are directly involved both in membranolysis [ 76 - 78 ] and in toxicity to alveolar cells [ 79 , 80 ]. Therefore, the distribution and abundance of silanols determine the degree of hydrophilicity (see "Physico-chemical properties" above) and seem to modulate cell toxicity [ 80 , 81 ]. Experimental work with respirable silica particles and the survey of published data by Bagchi [ 82 ] suggest that the toxicity of these particles is caused by the large amount of positive charges they carry. Ghiazza et al. [ 83 ] reported that the formation of a vitreous phase at the surface of some commercial diatomaceous earth prevents the onset of oxidative stress effects. Donaldson and Borm [ 84 ] emphasized that the ability of quartz to generate ROS can be modified by a range of substances that affect the quartz surface, such as substances originating from other minerals. The authors concluded that the toxicity of quartz is not a constant entity and may vary greatly depending on the origin/constitution of the sample. The origin/synthesis of SNPs plays a crucial role in determining the physico-chemical properties of these particles and, consequently, their potential interactions with biological systems. Surface area, surface morphology, surface energy, dissolution layer properties, adsorption and aggregation properties are relevant parameters. Depending on the manufacturing process, amorphous silica has a wide range of physico-chemical properties that determine its industrial application. Bye et al. [ 85 ] showed that the cytotoxic activity of different forms of amorphous silica does not depend on a crystalline silica component but, rather, is caused by surface charges and the morphologic features of the particles.
Synthetic amorphous silica has been the subject of dissolution testing with a simulated biological medium, and the silica dissolution rate was reported as being more rapid than the reverse precipitation rate [ 86 ]. Solubility has been defined as a key driver in the clearance mechanisms involved in amorphous silica removal from the lung [ 87 ]. Warheit [ 45 ] reviewed pulmonary responses to different forms of silica and reported that cristobalite produced the greatest lung injury, quartz produced intermediate effects, and amorphous silica produced minimal effects. Small differences in dissolution exist among these different forms of silica, and dissolution, in turn, influences pulmonary effects through the concept of persistence. In addition, components from the biological system may react with the surface of the particle. A systematic investigation of iron-containing SNPs as used in industrial fine-chemical synthesis demonstrated the presence of catalytic activity that could strongly alter the toxic action of nanoparticles [ 88 ]. On the whole, considering the great variety of silica forms, degrees of crystallinity, surface states and the presence of contaminants, there is a critical need for carefully characterized standard silica samples, both micron- and nano-sized, to unravel the relationships between physico-chemical factors and toxicity. The main goal of this review is to focus on the toxicity of nanosilica, which has never been properly reviewed. Moreover, nanosilica occurs mainly in amorphous forms, and the potential hazard posed by these nanomaterials cannot simply be inferred from studies of micron-sized crystalline materials, which have already been reviewed many times.

Silica nanoparticles

The growing abundance and industrial applications of nanotechnology have resulted in a recent shift of toxicological research towards nanoparticles [ 89 - 94 ].
Ultrafine particles (< 0.1 μm) have been demonstrated to cause greater inflammatory responses and particle-mediated lung diseases than fine particles (< 2.5 μm) per given mass [ 95 - 97 ]. Also, experiments involving silica have shown that nanoparticles, both ultrafine colloidal silica [ 98 , 99 ] and crystalline silica [ 99 ], have a greater ability to cause lung injury than fine particles. Thus, the unique properties (i.e., small size and corresponding large specific surface area; cell-penetrating ability) of nano-sized SiO 2 are likely to produce biological effects that differ greatly from those of micron-scale counterparts.

In vitro studies of nanosilica toxicity

A structured summary of in vitro studies of the toxicity of SNPs can be found in Table 2 . Chen and von Mikecz [ 100 ] investigated the effects of nanoparticles on structure, function, and proteasomal proteolysis in the cell nucleus by incubating different cell lines with unlabeled and fluorescently labeled amorphous silica particles of different sizes [ 100 ]. SiO 2 particles between 40 nm and 5 μm were applied to epithelial cells in culture and observed by confocal laser scanning microscopy with differential interference contrast. Particles of all tested sizes penetrated the cytoplasm; however, nuclear localization was observed exclusively in cells treated with SiO 2 nanoparticles between 40 and 70 nm. Fine and coarse SiO 2 particles (0.2-5 μm) were exclusively located in the cytoplasm and accumulated around the nucleus, forming nuclear indentations. The uptake of SNPs in the nucleus induced aberrant clusters of topoisomerase I and protein aggregates in the nucleoplasm -- the former inhibiting replication, transcription, and cell proliferation -- without altering cell viability. Cells treated with fine (0.5 μm) or coarse (5 μm) SiO 2 particles had the same replication and transcription activity as untreated control cells [ 100 ]. Jin et al.
[ 101 ] investigated the potential toxicity of luminescent amorphous SNPs (50 nm) in freshly isolated rat alveolar macrophage cells and human lung epithelial cells (A549 cells). The SNPs penetrated the cells but were not detected in the nuclear region and did not cause significant toxic effects at the molecular and cellular levels below a concentration of 0.1 mg/ml. Lin et al. [ 102 ] investigated the cytotoxicity of amorphous (colloidal) SNPs (15 and 46 nm) in cultured human alveolar epithelial cells (A549 cells). Cell viability decreased in a time- and dose-dependent manner (down to 100 μg/ml), and nanoparticles of both sizes were more cytotoxic than were fine quartz particles (Min-U-Sil 5). Exposure to 15-nm SNPs generated oxidative stress in A549 cells as reflected by reduced glutathione (GSH) levels, elevated production of malondialdehyde (MDA) and lactate dehydrogenase (LDH) leakage, which is indicative of lipid peroxidation and membrane damage, respectively [ 102 ]. In the study by Wottrich et al. [ 103 ], A549 cells and macrophages (THP-1, Mono Mac 6) exposed to 60 nm amorphous SNPs showed distinctly higher mortality than did larger silica particles (diameter 100 nm). Another study by Choi et al. [ 104 ], involving A549 cells and amorphous SNPs (14 nm), showed a pro-inflammatory response triggered by nanoparticles without blocking cell proliferation or causing cell death to any great extent. A recent work by Akhtar et al. [ 105 ] examined cytotoxicity (by MTT and LDH assay) and oxidative stress (ROS levels, membrane lipid peroxidation, GSH level and activity of GSH metabolizing enzymes) in A549 cells exposed for 48 h to amorphous SNPs of 10 and 80 nm. The SNPs were cytotoxic to studied cells through oxidant generation (ROS and membrane lipid peroxidation) rather than depletion of GSH. 
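The strong size dependence reported in these studies tracks, at least in part, the specific surface area of the particles, which for dense, smooth spheres scales inversely with diameter (SSA = 6/(ρd)). A minimal sketch (Python, not from the source), assuming a typical amorphous silica density of about 2.2 g/cm³ (an assumed value, not given in the text):

```python
def specific_surface_area(diameter_nm: float, density_g_cm3: float = 2.2) -> float:
    """Geometric specific surface area of dense, smooth spheres, in m^2/g:
    SSA = 6 / (rho * d). The 2.2 g/cm^3 density of amorphous silica is an
    assumed typical value; porosity and aggregation are ignored."""
    d_cm = diameter_nm * 1e-7           # nm -> cm
    ssa_cm2_per_g = 6.0 / (density_g_cm3 * d_cm)
    return ssa_cm2_per_g / 1e4          # cm^2/g -> m^2/g

# A 15-nm particle exposes ~100x more surface per gram than a 1.5-um one:
print(round(specific_surface_area(15)))    # ~182 m^2/g
print(round(specific_surface_area(1500)))  # ~2 m^2/g
```

This 1/d scaling is one simple reason why, per unit mass, nano-sized silica presents far more reactive surface to cells than fine or coarse particles.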
Eom and Choi [ 106 ] studied oxidative stress caused by amorphous SNPs (7 and 5-15 nm) in human bronchial epithelial cells (BEAS-2B) and observed the formation of ROS and induction of antioxidant enzymes. Shi et al. [ 107 ] exposed A549 cells to amorphous SNPs (10-20 nm) at concentrations up to 200 μg/ml and observed low cytotoxicity as measured by MTT and LDH assays. However, co-treatment with the same nanoparticles and lipopolysaccharide, a bacterial product that may contaminate (nano)materials, significantly enhanced the cytotoxicity. Yu et al. [ 108 ] examined the cytotoxic activity (by MTT and LDH assay) of well-dispersed amorphous silica particles (30-535 nm) in mouse keratinocytes. All sizes of particles were taken up into the cell cytoplasm; nuclear uptake was not studied. The toxicity was dose and size dependent, with 30- and 48-nm particles being more cytotoxic than 118- and 535-nm particles. The reduced GSH level significantly decreased only after exposure to 30-nm nanoparticles [ 108 ]. Nabeshi et al. [ 109 ] showed the size-dependent cytotoxic effects of amorphous silica particles (70, 300 and 1000 nm) on mouse epidermal Langerhans cells. The smallest particles induced greater cytotoxicity (by LDH assay) and inhibited cellular proliferation (by [ 3 H]-thymidine incorporation). The observed effects were associated with the quantity of particle uptake into the cells. Yang et al. [ 110 ] evaluated the effects of amorphous SNPs (15 and 30 nm) and micron-sized silica particles on cellular viability, cell cycle, apoptosis and protein expression in the human epidermal keratinocyte cell line HaCaT. Microscopy examination revealed morphological changes after 24-h exposure; cell growth also appeared to be significantly inhibited. The cellular viability of HaCaT cells was significantly decreased, and the amount of apoptotic cells was increased in a dose-dependent manner after treatment with nano- and micron-sized SiO 2 particles. 
Furthermore, smaller silica particles were more cytotoxic and induced a higher apoptotic rate. Proteomic analysis revealed differential induction of expression of 16 proteins by SiO 2 exposure; proteins were classified into 5 categories according to their functions: oxidative stress-associated proteins, cytoskeleton-associated proteins, molecular chaperones, energy metabolism-associated proteins, and apoptosis and tumor-associated proteins. The expression levels of the differentially expressed proteins were associated with particle size [ 110 ]. In a recently published study [ 111 ], the same research group used these SNPs to study the global DNA methylation profiles in HaCaT cells; the authors reported that nanosilica treatment can induce epigenetic changes. Cousins et al. [ 112 ] exposed murine fibroblasts to small amorphous (colloidal) silica particles (7, 14 and 21 nm) over a long incubation period (1, 3 and 7 days and up to 7 weeks) and observed a distinctive cellular response affecting the morphologic features, adhesion and proliferation of the fibroblasts but not cell viability. Chang et al. [ 113 ] exposed selected human fibroblast and cancer cell lines for 48 h to amorphous SNPs and assessed cellular viability by MTT and LDH assays. Cytotoxicity was seen at concentrations > 138 μg/ml and depended on the metabolic activity of the cell line. However, the average primary size of tested silica particles was 21 and 80 nm, but their average hydrodynamic particle size was 188 and 236 nm, respectively, so in media, aggregates/agglomerates were formed. In the study of Yang et al. [ 114 ], cell membrane injury induced by 20-nm amorphous silica nanoparticles in mouse macrophages was closely associated with increased intracellular oxidative stress, decreased membrane fluidity, and perturbation of intracellular calcium homeostasis. 
Besides inhalation, ingestion is considered a major uptake route of nanoparticles into the human body [ 3 ]; however, the possible harmful effects of engineered nanoparticles in the gastrointestinal tract are still largely unknown. Recently, Gerloff et al. [ 115 ] investigated the cytotoxic and DNA-damaging properties of amorphous fumed SiO 2 nanoparticles (14 nm) in the human colon epithelial cell line Caco-2. Exposure to SNPs for up to 24 h caused cell mortality, significant DNA damage and total glutathione depletion. The results of an in vivo study of mice fed nanosized silica are discussed in section 3.2.2. Ye et al. [ 116 ] reported induced apoptosis in a human hepatic cell line after exposure to amorphous (colloidal) SNPs (21, 48 and 86 nm). The viability of cells was assessed by LDH and MTT assays; oxidative stress was studied by measurement of ROS, lipid peroxidation and GSH concentration; and apoptosis was quantified by annexin V/propidium iodide staining and DNA ladder assays. Nano-SiO 2 caused cytotoxicity in a size-, dose- and time-dependent manner. Because nanoparticles are likely to be distributed by the blood stream (e.g., in medical applications), endothelial cells would also come in direct contact with these particles, raising the possibility of pathogenic particle-endothelial interactions. Peters et al. [ 117 ] evaluated the effects of 4- to 40-nm amorphous SiO 2 particles in vitro on human dermal microvascular endothelial cell function and viability. The particles were internalized but did not exert cytotoxic effects (MTS assay). However, cells showed impaired proliferative activity and pro-inflammatory stimulation. Napierska et al. [ 118 ] reported a dose-dependent cytotoxicity (by MTT and LDH assay) of monodisperse amorphous SNPs (16-335 nm) in a human endothelial cell line. The toxicity of the particles was strongly related to particle size; smaller particles showed significantly higher toxicity and also affected the exposed cells faster. Ye et al.
[ 119 ] evaluated the toxicity of amorphous SNPs (21 and 48 nm) towards rat myocardial cells. Exposure to the SNPs for up to 48 h resulted in size-, dose- and time-dependent cytotoxicity, smaller particles again showing higher toxicity. Barnes et al. [ 120 ] reported no detectable genotoxic activity (by Comet assay) of amorphous SNPs (20 nm to < 400 nm) in 3T3-L1 fibroblasts at 4 or 40 μg/ml silica for 24 h. The particle dispersions were carefully characterized, and the results were independently validated in 2 separate laboratories. In a recent literature review, Gonzalez et al. [ 121 ] compared 2 genotoxicity tests -- the alkaline Comet assay and the micronucleus test -- in terms of the chemical composition and size of engineered SNPs: engineered SNPs did not seem to induce DNA strand breakage. However, when monodisperse amorphous SNPs of 3 different sizes (16, 60 and 104 nm) were selected to assess the genotoxic potential of these particles in A549 lung carcinoma cells with a well-validated assay (the in vitro cytochalasin-B micronucleus assay), at non-cytotoxic doses, the smallest particles showed an apparently higher-fold induction of micronucleated binucleated (MNBN) cells [ 122 ]. When considering the 3 SNPs together, particle number and total surface area accounted for MNBN induction because they were significantly associated with the amplitude of the effect.

Crystalline nanosilica

Wang et al. [ 99 ] investigated the cytotoxicity (by MTT assay) and genotoxicity of ultrafine crystalline SiO 2 particulates (UF-SiO 2 ) in cultured human lymphoblastoid cells. A 24-h treatment with 120 μg/ml UF-SiO 2 produced a fourfold increase in MNBN cells, with no significant difference as measured by the Comet assay. However, the ultrafine crystalline silica used was extracted from commercially available crystalline silica, and the particle sizes were not uniform [ 99 ].
Mesoporous silica

The cytotoxicity of amorphous mesoporous SNPs (MSNs) has recently been studied intensively because they are promising materials for drug delivery systems and cell markers [ 8 , 123 , 124 ]. Several studies have demonstrated that efficient cellular uptake of MSNs could be achieved at concentrations < 50 μg/ml, with no cytotoxic effects observed up to 100 μg/ml in different mammalian cells [ 125 - 130 ]. Lu et al. [ 128 ] reported an optimal size of ~50 nm MSNs for cell uptake. Slowing et al. [ 131 ] reported that, contrary to the known cytotoxicity of amorphous SNPs toward red blood cells, mesoporous SNPs exhibit high biocompatibility at concentrations adequate for potential pharmacological applications. However, other studies have reported cytotoxicity of mesoporous silica nanomaterials. Tao et al. [ 132 ] investigated the effects of two types of MSNs (pore diameters of 31 and 55 Å) on cellular bioenergetics (cellular respiration and ATP content) in myeloid and lymphoid cells and isolated mitochondria. Only cells exposed to MSNs with larger size and larger pores showed concentration- and time-dependent inhibition of cellular respiration, and both nanoparticles were toxic to the isolated mitochondria. Di Pasqua et al. [ 133 ] reported that the toxicity of MSNs towards human neuroblastoma cells was related to the adsorptive surface area of the particle. However, a role of the nature of the functional groups could not be excluded. Vallhov et al. [ 134 ] investigated the effects of mesoporous SNPs of different sizes (270 nm and 2.5 μm) on human dendritic cells and found viability, uptake and immune regulatory markers affected by increasing size and dose. He et al. [ 135 ] evaluated the influence of size and concentration of mesoporous SNPs (190, 420 and 1220 nm) on cytotoxicity in human breast cancer cells and monkey kidney cells.
The cytotoxicity of the particles was associated with particle size: silica of 190 and 420 nm in diameter showed significant cytotoxicity at concentrations > 25 μg/ml, whereas particles of 1220 nm in diameter showed only slight cytotoxicity at 480 μg/ml. The smaller particles were suggested to be more easily endocytosed and consequently located within lysosomes [ 135 ].

Surface-modified/functionalized silica

Brown et al. [ 136 ] attempted to evaluate the role of shape in particle toxicity in the lung; the authors compared the response to rod-shaped and spherical amorphous silica particles (Stöber), uncoated or coated with fibronectin or polyethylene glycol (PEG), under stretched and static conditions. The dosimetric comparison of materials with different shapes (e.g., needle-shaped or acicular versus isotropic) was not straightforward. Non-coated particles induced an increase in IL-8 and LDH release, whereas surface modification with PEG mitigated this effect, which suggested the significance of adhesive interactions for membrane binding/signal transduction, for example [ 136 ]. Diaz et al. [ 137 ] described the interactions of two amorphous silica particles -- a pristine particle without any coating and PEGylated silica particles (average size 130 and 155 nm) -- as well as an iron oxide particle with a silica shell (80 nm), with different human peripheral blood cells, several human tumor cell lines and mouse peritoneal macrophages. The effects depended on the cell type analyzed: although all particles were phagocytosed and were able to induce ROS expression in mouse macrophages, they differentially affected the human cell lines and peripheral blood cells, both in terms of internalization and ROS induction. The availability of the particles to be internalized by the cells seemed to strongly depend on aggregation, especially on the size and morphology of the aggregates [ 137 ].
Almost all existing cytotoxicity studies of SNPs involved monocultures of organ-specific cells. An exception is the study by Wottrich et al. [ 103 ], in which co-cultures of epithelial cells (A549) and macrophages (THP-1, Mono Mac 6) exposed to 60- and 100-nm amorphous SNPs showed increased sensitivity in terms of cytokine release as compared with monocultures of each cell type. Enhanced responses to particles in different contact and non-contact co-cultures were also reported in studies by Herseth et al. [ 138 , 139 ] with micron-sized crystalline silica, showing that more realistic models should be applied to study interactions between nanoparticles and cells or organs of interest. A few recently published studies have systematically investigated nanomaterial properties in terms of the degree and pathways of cytotoxicity. Sohaebuddin et al. [ 140 ] selected nanomaterials of different composition, including silica, to analyze the effects of size and composition on 3 model cell lines: fibroblasts, macrophages and bronchiolar epithelial cells. The authors concluded that the physico-chemical properties of size and composition together determined the cellular responses and induced cell-specific responses. In another recent study, Rabolli et al. [ 141 ] studied the influence of size, surface area and microporosity on the in vitro cytotoxic activity of a set of 17 stable suspensions of monodisperse amorphous SNPs of different sizes (2-335 nm) in 4 different cell types (macrophages, fibroblasts, endothelial cells and erythrocytes).
The response to these nanoparticles was governed by different physico-chemical parameters that varied by cell type: in murine macrophages, the cytotoxic response increased with external surface area and decreased with micropore volume; in human endothelial cells and mouse embryo fibroblasts, the cytotoxicity increased with surface roughness and decreasing diameter; and in human erythrocytes, the hemolytic activity increased with the diameter of the SNP [ 141 ]. Overall, most of these in vitro studies involving different SNPs documented the cytotoxic effects of these nanomaterials. The determinants of the observed cytotoxicity seem to be complex and vary with the particles used and the cell type tested. Unfortunately, for many published studies, adequate material characterization is still missing. The mere cytotoxicity reported with some particles does not strictly imply hazard. However, this observation indicates that proactive development of nanomaterials should consider the physical, chemical and catalytic properties of nanoparticles.

In vivo studies of nanosilica toxicity

Along with particle size, surface area and particle number appear to be integral components contributing to the mechanisms of lung toxicity induced by nano-sized particles. The high deposition rate of ultrafine particulates is a result of their small aerodynamic diameter and is assumed to be important in the lung inflammatory process. Some evidence suggests that inhaled nanoparticles, after deposition in the lung, largely escape alveolar macrophage clearance and gain greater access to the pulmonary interstitium via translocation from alveolar spaces through the epithelium [ 3 , 142 ]. A summary of the in vivo responses to SNPs can be found in Table 3 . In 1991, Warheit et al.
[ 143 ] performed a rat inhalation study (nose-only) with an aerosol of colloidal silica (mass median aerodynamic diameter 2.9, 3.3 and 3.7 μm) for 2 or 4 weeks at concentrations up to 150 mg/m 3 , and some groups of rats were allowed to recover for 3 months. The inflammatory responses, seen mainly as increased numbers of neutrophils in bronchoalveolar lavage fluid (BALF), were evident at concentrations ≥ 50 mg/m 3 after the 2 and/or 4 weeks of exposure. Three months after exposure, most biochemical parameters had returned to control values [ 143 ]. Lee and Kelly [ 52 ] studied the effects of repeated inhalation (6 h/day, 5 days/week for 4 weeks) of an aerosol of colloidal silica (mass median aerodynamic diameter 2.9, 3.3 and 3.7 μm; concentration up to 150 mg/m 3 ) in rats. The authors reported a dose-dependent alveolar macrophage response, polymorphonuclear leukocytic infiltration, and type II pneumocyte hyperplasia in alveolar duct regions. Lung-deposited nanosilica was cleared rapidly from the lungs, with half-times of approximately 40 and 50 days for the 50 and 150 mg/m 3 treatment groups, respectively. The lungs did not show formation of fibrotic scar tissue or alveolar bronchiolarization [ 52 ]. Cho et al. [ 144 ] investigated inflammatory mediators (24 h, and 1, 4 or 14 weeks after exposure) induced by intratracheal instillation in mice of up to 50 mg/kg of ultrafine amorphous silica with a primary particle diameter of 14 nm. The authors observed significantly increased lung weights, total cell numbers and levels of total protein in BALF up to 1 week after treatment. Histopathological examination revealed acute neutrophilic inflammation and chronic granulomatous inflammation. The expression of cytokines (IL-1β, IL-6, IL-8, and TNF-α) and chemokines (monocyte chemoattractant protein 1 and macrophage inflammatory protein 2) was significantly increased during the early stages, with no changes after week 1 [ 144 ]. Chen et al.
[ 145 ] studied age-related differences in response to amorphous SNPs (average size 38 nm). Changes in serum biomarkers, pulmonary inflammation, heart injury and pathology were compared in young (3 weeks), adult (8 weeks) and old (20 months) rats that inhaled the test nanoparticles for 4 weeks (40 min/day). Old animals appeared to be more sensitive to nanoparticle exposure than were young and adult rats. The risk of pulmonary damage ranked old > young > adult, whereas a risk of cardiovascular disorder was observed only in old animals [ 145 ]. Kaewamatawong et al. [ 98 ] compared acute pulmonary toxicity induced in mice by ultrafine colloidal silica particles (UFCSs; average size 14 nm) or fine colloidal silica particles (FCSs; average size 213 nm) after intratracheal instillation of 3 mg of particles. Histopathological examination with both sizes revealed bronchiolar degeneration, necrosis, neutrophilic inflammation, alveolar type II cell swelling and alveolar macrophage accumulation. However, UFCSs induced extensive alveolar hemorrhage, more severe bronchiolar epithelial cell necrosis and greater neutrophil influx in alveoli as compared with FCSs. Electron microscopy showed UFCSs and FCSs on the bronchiolar and alveolar wall surface and in the cytoplasm of alveolar epithelial cells, alveolar macrophages and neutrophils. The findings suggest that UFCSs (possibly owing to their smaller size and/or larger surface area) have a greater ability to induce lung inflammation and tissue damage than do FCSs [ 98 ]. The same research group reported acute and subacute pulmonary toxicity of low-dose UFCS particles in mice after intratracheal instillation [ 146 ]. Exposure to up to 100 μg UFCSs produced moderate to severe pulmonary inflammation and tissue injury 3 days after exposure.
Mice instilled with 30 μg UFCSs and sacrificed at intervals from 1 to 30 days after exposure showed moderate pulmonary inflammation and injury on BALF indices in the acute period; however, these changes gradually regressed with time. Concomitant histopathological and laminin immunohistochemical results were similar to the BALF data. The authors reported a significant increase in the apoptotic index (TUNEL) in lung parenchyma at all observation times. The findings suggest that instillation of a small dose of UFCSs causes acute but transient lung inflammation and tissue damage, in which oxidative stress and apoptosis may be involved [ 146 ]. In a study of fibrogenesis, Wistar rats were intratracheally instilled with silica (of unknown composition) nano- (10 ± 5 nm) and microparticles (0.5-10 μm), and were sacrificed 1 and 2 months after dosing [ 147 ]. One month after instillation, cellular nodules (Stage I silicosis) were found in the nano-sized SiO 2 group, whereas more severe lesions were found in the micron-sized SiO 2 treatment group (Stage II and Stage II+ silicotic nodules). One month later, the nano-sized SiO 2 group still showed only Stage I silicotic nodules, whereas the micron-silica group showed disease progression to Stage II+ and III silicotic nodules. Therefore, in rats, the effect of nano-SiO 2 on fibrogenesis might be milder than that of micron-SiO 2 . Nanoparticles, because of their size, probably diffuse more easily to other pulmonary compartments than do microparticles [ 147 ]. Warheit et al. [ 148 ] (1) compared the toxicity of synthetic nanoquartz particles (12 and 50 nm) to mined Min-U-Sil quartz (500 nm) and synthetic fine-quartz particles (300 nm) and (2) evaluated the surface activity (hemolytic potential) of the different samples in terms of toxicity.
Rats were instilled with the different particle types (1 or 5 mg/kg), and pulmonary toxicity was assessed with BALF biomarkers, cell proliferation, and histopathological evaluation of lung tissue at 24 h, 1 week, 1 month, and 3 months after exposure. Exposure to the quartz particles of different sizes produced pulmonary inflammation and cytotoxicity, with nanoscale quartz of 12 nm and Min-U-Sil quartz being more toxic than fine quartz and nanoscale quartz of 50 nm. The pulmonary effects were not consistent with particle size but were associated with surface activity, particularly hemolytic potential [ 148 ]. In a recent work by Sayes et al. [ 149 ], rats inhaled freshly generated aerosolized amorphous SNPs of 37 and 83 nm for a short-term period. In contrast to the measurements in previous studies, particle number rather than particle mass was chosen as the dose metric (3.7 × 10 7 or 1.8 × 10 8 particles/cm 3 ) for 1- or 3-day exposure. Pulmonary toxicity (cell counts, differentials, enzymatic activity of LDH and alkaline phosphatase (ALP) in BALF) and genotoxicity endpoints (micronuclei induction) were assessed from 24 h up to 2 months after exposure. One- or 3-day aerosol exposure produced no significant pulmonary inflammatory, genotoxic or adverse lung histopathological effects in rats exposed to these very high particle numbers (corresponding to mass concentrations of 1.8 and 86 mg/m 3 ). Recently, airway irritants were suggested to facilitate allergic sensitization [ 150 - 152 ]. Arts et al. [ 153 ] examined the effect of pre-exposure to synthetic (fumed) amorphous SNPs (14 nm) on elicitation of airway hypersensitivity reactions by the low-molecular-weight allergen trimellitic anhydride (TMA). Brown Norway rats were topically sensitized with TMA, exposed (head or nose only) to SNPs for 6 h/day for 6 days and then challenged by inhalation with a minimally irritating concentration of TMA.
One day later, breathing parameters, cellular and biochemical changes in BALF, and histopathological airway changes were studied. Exposure to SNPs alone resulted in transient changes in breathing parameters during exposure and in nasal and alveolar inflammation with neutrophils and macrophages. Exposure to particles before a single TMA challenge resulted in only a slightly irregular breathing pattern during the TMA challenge. Interestingly, pre-exposure to particles diminished the effect of TMA on tidal volume, laryngeal ulceration, laryngeal inflammation, and the number of BALF eosinophils in most animals. When an additional group of animals was exposed to nanosilica before a second challenge with TMA, the pulmonary eosinophilic infiltrate and edema induced by a second TMA challenge in control animals were diminished by the preceding silica exposure, but the number of lymphocytes in the BALF was increased. The authors concluded that SNPs could reduce as well as aggravate certain aspects of TMA-induced respiratory allergy [ 153 ]. As mentioned, next to inhalation, ingestion is considered a major route for the uptake of nanoparticles into the human body. So et al. [ 154 ] studied the effects on mice of feeding nano- and micron-sized amorphous silica particles (30 nm and approximately 30 μm, respectively). After the animals were fed for 10 weeks (a total amount of 140 g silica/kg mouse), blood was tested biochemically and hematologically. The group fed SNPs showed higher serum values of alanine aminotransferase as compared with the other groups (both control and micron-silica treated). Although the Si content in the livers of the groups was almost the same, hematoxylin and eosin staining revealed a fatty liver pattern in the group treated with SNPs [ 154 ]. The successful use of nanoparticles in the clinic requires exhaustive studies on the behavior of these particles in vivo .
Unfortunately, biocompatibility, biodistribution and clearance studies of silica-based nanoparticles are sparse. Kumar et al. [ 155 ] used nanoparticles of organically modified amorphous silica (ORMOSIL; amino-terminated; 20-25 nm) to study biodistribution, clearance and toxicity in a mouse model. Fluorophore-conjugated, radiolabeled particles were injected systemically in mice. Biodistribution studies showed greater accumulation of nanoparticles in the liver, spleen and stomach than in the kidney, heart and lungs. Over 15 days, almost 100% of the injected nanoparticles were effectively cleared from the animals via hepatobiliary excretion, without any sign of organ toxicity. Hudson et al. [ 156 ] examined the biocompatibility of mesoporous silica particles (150 nm, 800 nm and 4 μm) after injection in rats and mice. When the particles were injected subcutaneously in rats, the amount of residual material decreased progressively over 3 months, with no significant injury to surrounding tissues. Subcutaneous injection of the same particles in mice produced no toxic effects. In contrast, intra-peritoneal and intra-venous injection in mice resulted in death; microscopic analysis of the lung tissue of the mice indicated that death might have been due to pulmonary thrombosis. Nishimori et al. [ 157 ] evaluated the acute toxicity of amorphous silica particles (70, 300 and 1000 nm) after a single intravenous injection in mice and reported that 70-nm silica injured the liver but not the spleen, lung or kidney. Moreover, chronic administration of the 70-nm nanoparticles (injections every 3 days for 4 weeks) caused liver fibrosis. Cho et al. [ 158 ] examined the impact of the size of amorphous SNPs on toxicity, tissue distribution and excretion. Fluorescence dye-labeled 50-, 100- and 200-nm silica particles were intravenously injected in mice.
The incidence and severity of inflammation with the 100- and 200-nm SNPs were significantly increased in the liver at 12 h; the 50-nm particles induced a slight but nonsignificant inflammatory response. The tissue distribution and excretion of the injected particles differed depending on particle size. With increasing particle size, more particles were trapped by macrophages in the liver and spleen. All particles were cleared via urine and bile; however, the 50-nm SNPs were excreted faster than were the other 2 particle sizes [ 158 ].

In vivo versus in vitro; amorphous versus crystalline

Park and Park [ 159 ] performed in vitro and in vivo studies to investigate oxidative stress and pro-inflammatory responses induced by amorphous SNPs (average primary size 12 nm). RAW 264.7 cells derived from mouse peritoneal macrophages were exposed to SNPs (5-40 ppm) in vitro and showed ROS generation and decreased intracellular GSH levels, as well as increased levels of nitric oxide released from the cultured macrophage cell line. In vivo , mice were treated with a single intraperitoneal dose of 50 mg/kg of nanosilica. The treatment produced activated peritoneal macrophages, increased blood levels of IL-1β and TNF-α, and an increased level of nitric oxide released from peritoneal macrophages. Ex vivo , cultured peritoneal macrophages harvested from the treated mice showed expression of inflammation-related genes (IL-1, IL-6, TNF-α, inducible nitric oxide synthase, cyclooxygenase 2). In the spleen, the relative distribution of natural killer cells and T cells was increased to 184.8% and 115.1% of control values, respectively, and that of B cells was decreased to 87.7% [ 159 ]. Kim et al. [ 160 ] addressed the toxicity of nano- and micron-sized silica particles (14 nm and 1-5 μm, respectively) in vitro and in vivo.
In vitro , RAW 264.7 cells were exposed to both particle sizes for 24 h, and cell viability decreased in a dose-dependent manner; however, apoptosis was observed only after treatment with nanoparticles. In vivo , mice received up to 5 mg/kg silica particles via oropharyngeal aspiration. Again, size-dependent toxicity of silica was observed; pulmonary injury and neutrophilic infiltration were greater after treatment with nano-sized SiO 2 particles than with micron-sized silica [ 160 ]. Sayes et al. [ 161 ] assessed the capacity of in vitro screening studies to predict the in vivo pulmonary toxicity of several fine or nanoscale particle types in rats. For the in vitro component of the study, rat lung epithelial cells, primary alveolar macrophages and alveolar macrophage-lung epithelial cell co-cultures were incubated with quartz particles and precipitated amorphous silica. In the in vivo component of the study, rats were exposed by intratracheal instillation to the same particles. The in vivo pulmonary toxicity studies demonstrated that crystalline silica particles produced sustained inflammation and cytotoxicity, whereas amorphous silica particles produced reversible and transient inflammatory responses. Ex vivo , pulmonary inflammation studies showed that rat lung epithelial cells exposed to either crystalline or amorphous silica did not produce MIP-2 cytokines, but alveolar macrophages and, to a lesser degree, co-cultures secreted this chemotactic factor into the culture media. In vitro cytotoxicity studies demonstrated a variety of responses to the different particle types, primarily at high doses. Across the range of toxicological endpoints, comparisons of in vivo and in vitro measurements revealed little correlation, particularly when considering the many variables assessed in this study, such as the cell types used, culture conditions and time course of exposure, as well as the measured endpoints.
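The studies discussed above express exposures in different dose metrics: mass concentration, particle number concentration and surface area. For idealized monodisperse spheres these metrics interconvert through the particle diameter and the material density. The sketch below illustrates the arithmetic; the density of 2.2 g/cm³ for amorphous silica and the monodisperse-sphere geometry are assumptions made here for illustration, not values taken from the cited studies.

```python
import math

RHO = 2200.0  # kg/m^3; density assumed here for amorphous silica

def particle_mass_kg(d_m, rho=RHO):
    """Mass of a single sphere of diameter d_m (metres): rho * pi/6 * d^3."""
    return rho * math.pi / 6.0 * d_m ** 3

def number_to_mass_conc(n_per_cm3, d_m, rho=RHO):
    """Particle-number concentration (cm^-3) -> mass concentration (mg/m^3)."""
    kg_per_m3 = n_per_cm3 * 1.0e6 * particle_mass_kg(d_m, rho)  # 1 m^3 = 1e6 cm^3
    return kg_per_m3 * 1.0e6                                     # kg -> mg

def specific_surface_area(d_m, rho=RHO):
    """Specific surface area (m^2/g) of monodisperse spheres: SSA = 6/(rho*d)."""
    return 6.0 / (rho * d_m) / 1000.0  # m^2/kg -> m^2/g

# The 37-nm aerosol of Sayes et al. at 3.7e7 particles/cm^3:
print(round(number_to_mass_conc(3.7e7, 37e-9), 1), "mg/m^3")
print(round(specific_surface_area(50e-9), 1), "m^2/g")  # 50-nm spheres
```

Under these assumptions the 37-nm aerosol at 3.7 × 10 7 particles/cm 3 works out to roughly 2 mg/m 3 , the same order as the 1.8 mg/m 3 reported by Sayes et al.; the residual gap reflects the idealized geometry and the assumed density.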
To summarize, extrapolating (or comparing) results obtained in vitro to the in vivo situation is difficult, and this applies not only to toxicity studies with nanoparticles -- any existing in vitro test system lacks the complexity of animal models or the human body. However, considering the number of particles and the number of possible properties of these particles that may vary (size, shape, coating, etc.), clearly not all can be evaluated in in vivo studies, and scientists have been striving to determine the correlation between the results obtained from in vitro and in vivo toxicity assessments. Although little correlation has been found in these studies with nanosilica [ 159 - 161 ], Lu et al. [ 162 ] tested a panel of metal oxide nanoparticles and could predict the inflammogenicity of the tested nanomaterials with a battery of simple in vitro tests. Similar conclusions were drawn in a recent study by Rushton et al. [ 163 ]; the authors could predict the acute in vivo inflammatory potential of nanoparticles with cell-free and cellular assays by using nanoparticle surface area-based dose and response metrics. The authors also found that a cellular component was required to achieve a higher degree of predictive power. Established and validated co-culture systems may provide a tool to better mimic the in vivo system. Using recently developed 3-D cell cultures and improving the exposure system (such as exposure at the air-liquid interface of a human epithelial airway model, reported by Brandenberger et al. [ 164 ]) could substantially improve the outcome of in vitro studies with nanomaterials.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

DN, LCJT and PHH drafted the manuscript. DN provided key input in the literature search. LCJT and JAM wrote the section on synthesis and characterization of silica materials. LCJT prepared all figures. DL contributed to drafting the paper.
All authors read and approved the final manuscript.
Acknowledgements

This work was supported by the Belgian Science Policy program "Science for a Sustainable Development" (SD/HE/02A). JAM acknowledges the Flemish government for long-term structural funding (Methusalem).
Part Fibre Toxicol. 2010 Dec 3; 7:39
Background

Rates of caesarean section are progressively increasing in many parts of the world, particularly in developing countries such as China [ 1 - 4 ]. In many Chinese hospitals the caesarean section rate was more than 40%, and in some cases it was up to 80% [ 2 - 4 ], much higher than the acceptable caesarean rate (5-15%) in WHO's guidelines [ 5 ]. Although the rate of caesarean section resulting in the best outcome for mothers and children continues to be a matter of debate, it is evident that a better outcome (including lower morbidity and mortality) does not necessarily result from a higher rate of caesarean sections. In recent years, there has been an increasing tendency for pregnant women without obstetric indications for caesarean section to ask for this procedure because they perceive it to be safe and more convenient than vaginal delivery [ 4 , 6 ]. This has become a significant factor in the increased rate of caesarean section in China [ 4 , 6 ]. Currently there is much debate as to whether this surgical procedure should be performed for women without clear clinically acceptable indications [ 7 - 11 ]. The focus of the debate is whether caesarean section has greater benefits than vaginal delivery and is an acceptable alternative to vaginal delivery in low-risk women. Some studies argued against, and some for, caesarean section [ 12 - 18 ]. Most previous studies were retrospective and could not identify detailed significant baseline differences between the caesarean section and vaginal delivery groups. Because of this, it is difficult to evaluate whether the short- and long-term abnormalities (e.g. post-partum haemorrhage and chronic abdominal pain) were directly due to the caesarean sections or to underlying conditions. All-cause caesareans might comprise women who needed a life-saving surgical intervention for the mother or baby as well as women whose need for the procedure was not clinically justified.
The interpretation of these crude caesarean rates is therefore difficult. Randomized clinical trials might be a good way to address the issue, but this is not feasible in most cases. A Cochrane Review that focused on this subject found no trials to help assess the risks and benefits of caesarean section when undertaken without a conventional medical indication. The authors of the review strongly recommended alternative research methods to gather data on the outcomes associated with different ways of giving birth [ 19 ]. It is our contention that a prospective parallel-group observational study matching or controlling for possible confounders could overcome this lack of trial evidence. We therefore undertook a study with an indication-matched cohort design to compare the medical outcomes of mothers who had caesarean section with those of mothers who delivered vaginally. This study focused on nulliparous pregnant women who were relatively healthy, did not have a history of any serious diseases and had no serious complications during pregnancy. In other words, these women were at low risk of complications at delivery.
Methods

This study was an indication-matched cohort study, comparing women who had caesarean section with a comparable low-risk group of women who had vaginal delivery (Figure 1 ). It involved pregnant women without obstetric indications (subgroups C and F in Figure 1 ) or with relative medical indications (subgroups B and E in Figure 1 ) for caesarean section, in other words low-risk pregnant women. Women with absolute medical indications for caesarean section were excluded (subgroups A and D in Figure 1 ). There are different classification systems recommended for use in high or low caesarean delivery rate settings [ 20 ]. The absolute and relative indications for caesarean section in this study, shown in Table 1 , were classified according to the Practice Guidelines for Gynaecology and Obstetrics in Shanghai [ 21 ] and the opinions of the expert team on this project.

Patient selection

This study was undertaken in three district-level Maternal and Children's Hospitals (MCHs) in Shanghai. The inclusion criteria were: 1) older than 20 and younger than 35 years; 2) more than 37 weeks gestation at delivery; 3) nulliparous; 4) no history of induced abortion (including medical abortion and surgical abortion); 5) no history of heart, liver, lung, kidney, endocrine or psychiatric diseases resulting in hospitalization; 6) planning to have the delivery at the present MCH and planning to live in Shanghai after delivery. The exclusion criteria were: 1) unmarried, divorced or widowed; 2) a history of spontaneous abortion; 3) multiple foetuses; 4) more than 42 weeks gestation at delivery; 5) low birth weight (less than 2500 g); 6) the presence of absolute indications for caesarean section. Nulliparous women were selected because of the 'One-Child' family planning policy in China. Moreover, this restriction should avoid possible confounding by parity in the analysis.
Women meeting these inclusion and exclusion criteria were enrolled in this study, and women with relative indications or without any obstetric indication for caesarean section were finally accepted following preliminary screening in the antenatal clinics and re-screening in the maternity wards after birth. The exposure group of eligible women included those with no identified risk and some with relative indications for caesarean section. The control group included some who had relative indications but did not proceed to caesarean section. The research team took no part in the clinical care of the women and did not participate in the decision to have a caesarean section. The women in the caesarean section group were matched with those who delivered vaginally (Figure 1 ). The matching criteria were: 1. the absence of any indication for caesarean section (subgroups C and F in Figure 1 ); 2. the presence of a relative indication (subgroups B and E in Figure 1 ) -- where possible the women were matched by the precise condition; if there could not be an exact match, a similar indication was accepted, or the pairs were matched according to the presence of any relative indication; 3. a delivery date within 20 days of each other, with the subject delivering first receiving precedence for matching; 4. delivery in the same hospital.

Data collection

Information collected was recorded on questionnaires (see Additional file 1 ). A baseline questionnaire was completed at the time of preliminary screening in the antenatal clinics to determine preliminary eligibility for the study (Figure 2 ). Details recorded were demographic characteristics, smoking habits, alcohol use and medical history. In addition, the weight and height before pregnancy and health events and medicine use during pregnancy were recorded. A post-partum questionnaire was completed by interview in conjunction with the obstetrical record.
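The matching procedure described above (same indication status, preferring the identical indication; delivery dates within 20 days; same hospital; first-delivered matched first) can be sketched as a simple greedy algorithm. The record layout and field names below are hypothetical, chosen only for illustration; they are not taken from the study's data dictionary.

```python
from datetime import date

def match_pairs(cs_women, vd_women, max_gap_days=20):
    """Greedily pair each caesarean-section (CS) woman with an unused
    vaginal-delivery (VD) control. Records are dicts with hypothetical
    keys: 'id', 'hospital', 'delivery_date' (datetime.date) and
    'indication' ('none' or a relative indication such as 'breech')."""
    pairs, used = [], set()
    # Women delivering first receive precedence, per the study protocol.
    for case in sorted(cs_women, key=lambda w: w["delivery_date"]):
        gap = lambda c: abs((c["delivery_date"] - case["delivery_date"]).days)
        candidates = [
            c for c in vd_women
            if c["id"] not in used
            and c["hospital"] == case["hospital"]
            and gap(c) <= max_gap_days
            # Match 'no indication' with 'no indication', relative with relative:
            and (c["indication"] == "none") == (case["indication"] == "none")
        ]
        if candidates:
            # Prefer an exact indication match, then the closest delivery date.
            candidates.sort(key=lambda c: (c["indication"] != case["indication"], gap(c)))
            best = candidates[0]
            used.add(best["id"])
            pairs.append((case["id"], best["id"]))
    return pairs

cs = [{"id": 1, "hospital": "MCH-A", "delivery_date": date(2002, 3, 1), "indication": "breech"}]
vd = [{"id": 10, "hospital": "MCH-A", "delivery_date": date(2002, 3, 8), "indication": "breech"},
      {"id": 11, "hospital": "MCH-A", "delivery_date": date(2002, 3, 2), "indication": "none"}]
print(match_pairs(cs, vd))  # case 1 is paired with control 10
```

A greedy matcher of this kind does not guarantee a globally optimal pairing, but it mirrors the study's rule of giving precedence to the woman who delivered first.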
The data recorded were method of delivery, gestational weeks at delivery, medical events during delivery, and the occurrence of complications post-partum. The volume of blood loss was recorded from the start of labour until two hours post-partum using a measuring cup (when a large amount of blood was lost) and by weighing gauzes. Post-partum haemorrhage (PPH) was defined as a total blood loss of more than 400 ml from the start of labour until 2 hours post-partum, according to the definition suggested by the Chinese Collaboration Group of Bleeding Post-partum [ 22 ]. This definition of PPH differs from the conventional one of blood loss of more than 500 ml from the genital tract within 24 hours of delivery. However, the definition used in this study was considered more practical and easier to apply by the health professionals in the MCHs, and similar definitions were also used in other clinical investigations with a focus on PPH [ 23 - 25 ]. Follow-up interviews were conducted at 1 month, 6 months and 12 months post-partum. Outcomes recorded were infection (reproductive tract infection, urinary infection and wound complications), anaemia (mainly iron-deficiency anaemia, with haemoglobin lower than 110 g/L post-partum), puerperal fever, chronic abdominal pain and rehospitalization. The temperature was taken at least four times daily, and puerperal fever was defined as a temperature equal to or greater than 38°C occurring at least twice within the first 10 post-partum days, exclusive of the first 24 hours. Chronic abdominal pain was defined as non-cyclic pain in the lower abdomen for at least 4 months, interfering with daily activities. All follow-up interviews were completed by trained health staff in the MCHs where the women were enrolled. The interviewers were blind to the method of delivery in the follow-up interviews at 1, 6 and 12 months post-partum.
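The PPH and puerperal fever definitions above are operational rules that translate directly into code. A minimal sketch follows; the function and argument names are our own, not from the study.

```python
def is_pph(blood_loss_ml):
    """Post-partum haemorrhage per the study definition: total blood loss
    > 400 ml from the start of labour to 2 hours post-partum."""
    return blood_loss_ml > 400

def is_puerperal_fever(temps_by_day):
    """Puerperal fever per the study definition: temperature >= 38.0 C
    recorded at least twice within the first 10 post-partum days, excluding
    the first 24 hours. `temps_by_day` maps the post-partum day number
    (day 1 = the first 24 hours) to a list of that day's readings in Celsius."""
    febrile_readings = sum(
        1
        for day, temps in temps_by_day.items()
        if 2 <= day <= 10  # day 1 (the first 24 h) is excluded
        for t in temps
        if t >= 38.0
    )
    return febrile_readings >= 2

print(is_pph(450))                                 # True
print(is_puerperal_fever({1: [38.6], 2: [37.2]}))  # False: the day-1 spike is excluded
print(is_puerperal_fever({2: [38.1], 5: [38.3]}))  # True
```

Spelling the rules out this way makes the exclusion of the first 24 hours explicit, which is the detail that distinguishes this definition from a naive "any two fevers in 10 days" rule.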
A clinician in each MCH was responsible for collecting the data and completed the questionnaires with the help of experienced midwives or health workers. Because of concern about misclassification of medical indications at the matching stage of the study, a quality control procedure was used, guided by the senior doctors and audited by a member of the expert team, to ensure the fundamental 'eligibility' matching criteria were met. The collection of follow-up data was completed by March 2004.

Routine practice

The MCHs in this study were government hospitals. They were similar in obstetric practice, which mainly followed the official guidelines on clinical practice [ 2 , 21 ]. Routine active management was given by medical care personnel to all women. Analgesia such as epidural analgesia would be offered if indicated or if women requested it. The caesarean sections and operative vaginal deliveries were undertaken by experienced obstetricians. Women having a caesarean section or those at high risk of infection were to be offered prophylactic antibiotics, such as a single dose of a first-generation cephalosporin (and metronidazole if necessary), to reduce the risk of post-operative infections. Active management included the following measures aimed at preventing PPH: routine intramuscular (i.m.) injection of a preventive single dose of oxytocin with delivery of the baby, controlled cord traction and uterine massage after delivery of the placenta. The preventive single i.m. dose of oxytocin was 10 or 20 IU; this varied between hospitals but was the same for caesarean section and vaginal delivery in the same hospital. In addition to the routine single i.m. dose of oxytocin, extra doses of oxytocin by continuous intravenous drip or other uterotonics (such as carbetocin, prostaglandin PGE 2a or carboprost) could be administered after delivery. This decision relied on the clinicians' assessment (e.g. atonic uterus).
Administration of all uterotonics and relevant details of active management of labour were recorded.

Sample size

We estimated the overall maternal morbidity would be 8% in the caesarean section group [ 26 , 27 ]. Thus, an equally divided sample of 540 was deemed sufficient for the detection of a risk ratio of 5.0 compared with the vaginal delivery group, with a type I error (two-sided) of 5% and a power of 90%. On the assumption of a 10% rate of loss to follow-up, we established an overall target sample size of 600.

Statistical methods

As this was a matched prospective comparative study, a stratification strategy was used for examining the matched data [ 28 , 29 ]. Previous studies have indicated that a number of factors contribute to caesarean section [ 2 - 4 ]. Data collected in the current study indicated that the factors related to caesarean section were not distributed in good balance. It was therefore decided to use a propensity score technique [ 30 , 31 ] in the analysis to balance the differences in these factors between the two study groups. The covariates were average monthly income per capita, body mass index (BMI), maternal confidence in vaginal delivery, preference for caesarean section, prepartum self-rating depression scale (SDS), birth weight and doula support, which were combined to yield a propensity score through stepwise logistic regression (data not shown). Comparisons between groups employed the t-test or Wilcoxon rank sum test for continuous variables, the Pearson Chi-square test for categorical data, and the Cochran-Mantel-Haenszel Chi-square test for ordered categorical data. When the incidence was smaller than 10%, a logistic regression model was employed, using the odds ratio (OR) to estimate the relative risk (RR); for more common outcomes, a binomial regression model was used to estimate the RR directly. Relative risks were calculated adjusting for propensity score and medical indications.
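The sample-size calculation above can be reproduced with the standard normal-approximation formula for comparing two proportions: 8% morbidity in the caesarean arm and a detectable risk ratio of 5.0 imply 1.6% in the vaginal arm. A sketch (the choice of formula is ours; the paper does not state which variant was used):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90, continuity=False):
    """Sample size per arm for a two-sided two-proportion comparison
    (normal approximation; optional Fleiss continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * pbar * (1 - pbar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    if continuity:
        n = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return ceil(n)

# 8% morbidity in the caesarean arm; risk ratio 5.0 => 1.6% in the vaginal arm
print(n_per_group(0.08, 0.016))                   # uncorrected
print(n_per_group(0.08, 0.016, continuity=True))  # with continuity correction
```

The uncorrected formula gives about 230 per group and the continuity-corrected version about 260; the corrected figure is close to the authors' 270 per group (540/2), with the exact number depending on the variant and rounding used.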
No additional adjustment was made for the routine clinical practices that accompanied each method of delivery. EpiData and SAS software version 8.2 (SAS Institute, Cary, NC, USA) were employed for data management and statistical analysis. Ethical issues Approval from the local research ethics committee of the Shanghai Institute of Planned Parenthood Research/WHO Collaborating Centre for Research in Human Reproduction was granted for the study protocol on 10 January 2001 (reference code: 20010101), and the study was also accepted by the heads of the hospitals involved. Individual informed consent was obtained during the initial screening.
Results A total of 1460 pregnant women were screened for eligibility at the antenatal clinics between 2001 and 2003. There were 933 women who met the inclusion criteria and had no exclusion criteria. Of these, 301 had caesarean section and were matched successfully with 301 women who delivered vaginally. A total of 13 cases (2.2%) were lost to follow-up (Figure 2 ), 6 (2.0%) in the caesarean section group and 7 (2.3%) in the vaginal delivery group. The proportions and reasons for loss to follow-up were similar in the exposure and control groups. In addition, the baseline characteristics were also similar between women followed up and those lost to follow-up in both the exposure and control groups. Demographic characteristics of participants The average age of women in both groups was about 25 years (Table 2 ). Women in both groups had a similar distribution of education level and years of marriage (see Additional file 2 : Supplementary Table S1). A majority of the subjects had an education level of high school or higher, i.e. 265 (88.0%) in the caesarean section group and 251 (83.4%) in the vaginal delivery group. More than half of the women in both groups were workers and employees in factories and companies (53.2% in the caesarean section group and 54.5% in the vaginal delivery group). The caesarean section group tended to include more women in professional occupations and with a higher income level, but the differences were not statistically significant (see Additional file 2 : Supplementary Table S1). There was one smoker in the vaginal delivery group and none in the caesarean section group; 4 and 3 women respectively drank alcohol. Medical history No significant differences were found between the two groups in medical history, either during the year before pregnancy or during pregnancy (Table 2 ). One woman in each of the two groups suffered preeclampsia.
The incidence of threatened abortion was 14.4% and 17.7% respectively in the caesarean section and vaginal delivery groups (χ 2 = 1.20, P = 0.27). Obstetric details Women in both groups had a similar duration of gestation at delivery (χ 2 = 0.46, P = 0.64), with most delivering at 39 weeks. In the vaginal delivery group, 215 (71.4%) delivered spontaneously, 86 (28.6%) needed forceps assistance and 297 (98.7%) women had mediolateral episiotomies. In the caesarean section group, 134 (44.5%) women had elective caesarean section and 167 (55.5%) had caesarean section decided during labour. Women in both delivery groups had a similar distribution of relative indications for caesarean section, as listed in Table 2 . The proportion with relative cephalopelvic disproportion was 16% and 15% respectively in the caesarean section and vaginal delivery groups. Among those with relative indications, foetal distress ranked first in both groups. Epidural analgesia or combined spinal-epidural analgesia was administered for labour and delivery to 141 (46.8%) women in the caesarean section group and 125 (41.5%) in the vaginal delivery group. Prophylactic antibiotics were used for all women in the caesarean section group and 145 (48.2%) in the vaginal delivery group. In addition to the routine single i.m. dose of oxytocin administered at delivery, extra uterotonics were given after delivery to 277 (95.9%) women in the caesarean section group and 230 (76.7%) in the vaginal delivery group (see Additional file 2 : Supplementary Table S2). Post-partum morbidity during hospitalization Complications during post-partum hospitalization were mainly haemorrhage, infection and fever. The incidence of total complications was 2.2 times higher in the caesarean section group (Table 3 ).
When individual complications were studied, it was found that the caesarean section group had a relative risk of 5.6 for post-partum haemorrhage compared with the control group, after adjusting for propensity score and the relative indications for caesarean section (Table 3 ). The medians (lower and upper quartiles) of total blood loss during labour and within 2 hours post-partum were 200 ml (200-300 ml) and 170 ml (110-200 ml) in the caesarean section group and control group respectively, and the difference was highly significant (Wilcoxon rank sum test, Z = 13.81, P < 0.0001). Five women, all in the caesarean section group, had PPH of more than 1000 ml, because of uterine atony in four cases and a tear of the uterine incision in one; all five were given blood transfusions. Rates of puerperal infection or post-partum fever in hospital did not show any statistically significant differences between the two groups (Table 3 ). The median (Q1-Q3) length of hospital stay after delivery was 8 (7-10) days and 6 (5-7) days for the caesarean section and vaginal delivery groups respectively. Telephone interviews were conducted after hospital discharge. Morbidity after discharge and within one year post-partum The clearing time of lochia was recorded in most cases at the first follow-up, at the end of the first month post-partum. Most women (60%) had a clearing period of 4 to 6 weeks post-partum. Women in the caesarean section group had a longer clearing time than the control group (χ 2 CMH = 6.41, P = 0.01). At one month post-partum, the total incidences of all problems measured were 8.0% and 7.3% in the caesarean section group and control group respectively. Longer term or delayed problems are listed in Table 4 . 'Wound complications' refers to persisting incisional pain without infection plus breakdown of abdominal wounds and episiotomies due to infection or abscess.
The most frequent problems within one year after discharge included anaemia, reproductive tract infection, wound complications and waist/back pain. No statistically significant differences were found for these events between the two groups (Table 4 ). The incidence of chronic abdominal pain was 4.4% and 1.7% respectively in the caesarean section group and vaginal delivery group (adjusted RR = 3.6, 95%CI 1.2-10.9) (Table 4 ). Five women (six admissions) in each group were rehospitalized for medical reasons within one year post-partum (χ 2 = 1.17, P = 0.28). Four women were rehospitalized within one month post-partum: one due to fever and one due to mastitis in the caesarean section group, and two due to mastitis in the vaginal delivery group. Four women were rehospitalized between one and 6 months post-partum: three due to fever, mastitis and gallstones respectively in the caesarean section group, and one due to an adenoma of the parathyroid in the vaginal delivery group. Three women were rehospitalized between 6 and 12 months post-partum, all in the vaginal delivery group: two due to ruptured corpus luteum cysts and one due to an ectopic pregnancy.
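For reference, a crude relative risk and its Wald-type 95% confidence interval can be computed from a 2×2 table as sketched below. The counts 13/295 and 5/294 are hypothetical back-calculations from the reported 4.4% and 1.7% chronic abdominal pain rates, not the study's actual cell counts; the paper's RR of 3.6 was additionally adjusted for propensity score and indications, which a crude calculation cannot reproduce.

```python
import math

def rr_wald_ci(a, n1, b, n2, z=1.959964):
    """Crude relative risk with a 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts consistent with ~4.4% vs ~1.7% event rates
rr, lo, hi = rr_wald_ci(13, 295, 5, 294)   # crude RR about 2.6
```

Note how wide the interval is at these event counts; this is why the authors pre-specified a large detectable risk ratio in their power calculation.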
Discussion Caesarean section is now the most frequently performed major obstetric operation in China. In some areas, caesarean delivery on maternal request accounts for half of all caesarean births [ 2 - 4 , 32 ]. Investigation of the maternal medical outcomes of women with non-medically indicated caesarean section is therefore very important. Our indication-matched prospective cohort study should help to minimize confounding by indication and should add important insights on the effects of caesarean section on maternal health. Incidence of haemorrhage post-partum It was found that the women in the caesarean section group had a relative risk of haemorrhage of 5.6 compared with the control group. Although routine active management was offered by skilled medical care personnel to all women, the health workers attending vaginal deliveries had fewer years of clinical experience. It was also found that more women in the caesarean section group were given extra uterotonics after delivery compared with the vaginal delivery group (see Additional file 2 : Supplementary Table S2). Both of these factors would tend to bias the comparison towards the null, and thus the observed excess of PPH in the caesarean group is strengthened. Cheng reported that the incidence of haemorrhage was 3.5% and 1.8% respectively (definition of post-partum haemorrhage not specified) in two such groups between 2000 and 2002 [ 26 ], and Zhu reported incidences of 15.3% and 7.5% respectively (using the traditional definition of haemorrhage as blood loss of more than 500 ml from the genital tract within 24 hours of delivery) in two groups between 1998 and 2001 [ 27 ]. Although these figures all come from recent clinical studies in Shanghai, the incidences varied. This might be due to differences in the definitions of post-partum haemorrhage and in the methods of assessing the volume of blood loss. However, all studies indicated an increased risk of haemorrhage in the caesarean section group.
Incidence of infection The common short-term complications following caesarean section included post-operative infection (endometritis, wound infection and urinary infection) and fever. No statistically significant differences in these were found between the two groups. This is thought to be a result of improved perinatal medical practice, such as strict aseptic technique and the prophylactic use of antibiotics, reducing the frequency of infection. At present clinical staff comply strictly with the new official national practice guidelines on the use of antibiotics: 'the Practice Guidelines for Standardized Use of Antibiotics' [ 33 ]. In our study prophylactic antibiotics were used for all of the caesarean section group and nearly half of the vaginal delivery group (see Additional file 2 : Supplementary Table S2). Prophylactic antibiotics can reduce the incidence of endometritis following caesarean section by two thirds to three quarters [ 34 ]. It has been reported that there was an increase in wound infection (RR = 3.5; 95% CI, 1.8-6.7) with caesarean delivery without labour compared with spontaneous vaginal delivery [ 12 ]. Another study found that women who had caesarean delivery were more likely to be rehospitalized with obstetrical surgical wound complications (RR = 30.2; 95% CI, 18.8-47.4) than women who had spontaneous vaginal delivery [ 15 ]. The difference between our study and theirs may be due to different study populations and the very high proportion of Chinese women who had an episiotomy in our study. Incidence of chronic abdominal pain In this study, it was found that women with caesarean section had a relative risk of 3.6 for chronic abdominal pain compared with those having vaginal delivery, confirming the finding of a Brazilian study [ 16 ].
Although the causal mechanism of chronic abdominal pain is not completely understood, common causes may be abdominal adhesions after surgical operations and pelvic inflammatory adhesions [ 35 ]. Women should be counselled about this when requesting caesarean section. Rehospitalization due to illness post-partum Recent studies have shown that rehospitalization in the first one or two months after giving birth is more likely after planned caesarean than after planned vaginal birth [ 15 ]. The present study found that the total incidence of rehospitalization within one year post-partum was 1.7%, with no difference between the two groups. This low rate might be attributed to the fact that all the participants in this study were relatively healthy women with a low risk of complications. Limitations of this study Caesarean sections and vaginal deliveries have different effects on women's health, as shown in this study. The findings are consistent with the WHO global surveys in Africa and Latin America in 2004-05 and in Asia in 2007-08, which were mainly ecological studies at the institutional level [ 36 , 37 ]. However, some obstetric practices in China, such as the routine use of episiotomy during vaginal delivery, may limit the validity of comparisons with other countries where clinical practice is substantially different [ 2 , 3 ]. If anything, this may lead to an under-estimate of the risks of caesarean section in China. Owing to the limited sample size, we were not able to subdivide the two study groups into categories such as spontaneous vaginal delivery, operative vaginal delivery, elective caesarean section or caesarean section decided during labour, as defined by other surveys [ 36 , 37 ]. We were also not able to differentiate the effects of the different modes of delivery on the risk of some uncommon complications such as maternal mortality, venous thromboembolism and hysterectomy.
In addition, the conclusions of this study should not be extended to those women with absolute indications for caesarean section.
Conclusion Caesarean section in low risk nulliparous Chinese women carries increased risks over vaginal delivery. Those requesting caesarean section without conventional obstetric indications or medical indications for mother or foetus, should be advised of these potential risks.
Background Rates of caesarean section are progressively increasing in many parts of the world. As a result of psychosocial factors, there has been an increasing tendency for pregnant women without justifiable medical indications for caesarean section to ask for this procedure in China. A critical examination of this issue in relation to maternal outcomes is important. At present there are no clinical trials to help assess the risks and benefits of caesarean section in low risk women. To fill the gap left by trials, this indication-matched cohort study was carried out to examine prospectively the outcomes of caesarean section on women with no absolute obstetric indication compared with similar women who had vaginal delivery. Methods An indication-matched cohort study was undertaken to compare maternal outcomes following caesarean section with those following vaginal delivery, in which the two groups were matched for non-absolute indications. 301 nulliparous women with caesarean section were matched successfully with 301 women who delivered vaginally in the Maternal and Children's Hospitals (MCHs) in Shanghai, China. Logistic or binomial regression models were used to estimate the relative risk (RR). Adjusted RRs were calculated adjusting for propensity score and medical indications. Results The incidence of total complications was 2.2 times higher in the caesarean section group during hospitalization post-partum, compared with the vaginal delivery group (RR = 2.2; 95% CI: 1.1-4.4). The risk of haemorrhage from the start of labour until 2 hours post-partum was significantly higher in the caesarean group (RR = 5.6; 95% CI: 1.2-26.9). The risk of chronic abdominal pain was significantly higher for the caesarean section group (RR = 3.6; 95% CI: 1.2-10.9) than for the vaginal delivery group within 12 months post-partum.
The two groups had similar incidences of anaemia and of infective complications such as wound complications or urinary tract infection. Conclusions In nulliparous women who were at low risk, caesarean section was associated with a higher rate of post-partum morbidity. Those requesting the surgical procedure with no conventional medical indication should be advised of the potential risks.
Competing interests The authors declare that they have no competing interests. Authors' contributions BW undertook the field work, completed the data analyses and draft. LZ designed and supervised the study and was the study guarantor. DC critically reviewed the draft and modified the text significantly. HL drafted the paper under supervision of LZ. YZ, LPZ and XG helped to design the study. YG assisted with managing the project. WY and EG provided expert knowledge during the design and made comments on the draft. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2393/10/78/prepub Supplementary Material
Acknowledgements This study was supported by Grant Number 30070667 from the National Natural Science Foundation of China. The authors' work was independent of the funder. We thank all the medical doctors and nurses in Putuo MCH, Jiading MCH and Yangpu MCH in Shanghai for their great work in enrolling the participants and follow-up. We also thank Ms Zheng Biyu and Ms Tang Guici for coordinating aspects in the study fields.
BMC Pregnancy Childbirth. 2010 Dec 2; 10:78
Background Transdifferentiation of the liver epithelial cells (hepatocytes and biliary cells) into each other provides a rescue mechanism in liver disease in situations where either cell compartment fails to regenerate by itself. We have previously reported transdifferentiation of hepatocytes into biliary epithelial cells (BEC) both in an in vivo rat model using the biliary toxicant 4,4'-methylenedianiline [diaminodiphenyl methane, (DAPM)] followed by biliary obstruction induced by bile duct ligation (BDL) [ 1 ] and in vitro using hepatocyte organoid cultures treated with hepatocyte growth factor (HGF) and epidermal growth factor (EGF) [ 2 - 4 ]. Other investigators have also demonstrated hepatocyte-to-BEC transdifferentiation in hepatocyte cultures [ 5 ] and following hepatocyte transplantation in the spleen [ 6 ]. In humans, chronic biliary liver diseases (CBLD) characterized by progressive biliary epithelial degeneration are also known to be associated with the formation of intermediate hepatobiliary cells expressing both hepatocytic and biliary specific markers [ 7 - 9 ]. However, the mechanisms promoting such hepatocyte-to-BEC transdifferentiation (or vice versa) are not completely understood. In the current study, by repeatedly injuring biliary cells with a minimally toxic dose of DAPM administered to rats, we established a novel rodent model resembling CBLD [ 10 ]. DAPM selectively injures biliary cells because toxic metabolites of DAPM are excreted in bile [ 10 , 11 ]. An orchestrated network of liver-enriched transcription factors is known to play an important role in pre- and postnatal liver development as well as in lineage specification of hepatoblasts into hepatocytes and BECs [ 12 , 13 ]. Studies with knockout mice have shown that hepatocyte nuclear factor (HNF) 1α and HNF4α regulate transcription of genes essential for the hepatocytic lineage [ 14 - 16 ] whereas HNF1β and HNF6 are involved in development of the gallbladder and bile ducts [ 17 - 19 ].
In the present study, the expression of hepatocyte- and biliary-specific HNFs was examined during reprogramming of cell lineage during transdifferentiation, using the DAPM + BDL and repeated DAPM treatment models. A gradient of TGFβ expression, regulated by the Onecut transcription factor HNF6 in ductal plate hepatoblasts during embryonic liver development, is crucial for biliary differentiation [ 20 ]. In the present study, the TGFβ1 and HNF6 expression pattern was studied in order to determine whether a similar mechanism is recapitulated during hepatocyte-to-BEC transdifferentiation in the adult liver. The likely source of hepatocytes capable of functioning as progenitor cells in the event of compromised biliary regeneration was investigated by assessing expression of the biliary-specific keratin CK19. To examine whether hepatocytes transdifferentiate into biliary epithelium after repeated administration of DAPM, dipeptidyl peptidase IV (DPPIV) chimeric rats were utilized, which normally carry a DPPIV-positive population of only hepatocytes, derived from donor DPPIV-positive rats [ 21 , 1 - 3 ]. Neither the hepatocytes nor the BECs express DPPIV in the recipient DPPIV-negative rats. Thus, the appearance of biliary epithelial cell clusters positive for the hepatocyte marker DPPIV provides strong evidence that the BEC are derived from hepatocytes.
Methods Materials Collagenase for hepatocyte isolation was obtained from Boehringer Mannheim (Mannheim, Germany). General reagents and 4,4'-methylenedianiline (DAPM) were obtained from Sigma Chemical Co. (St. Louis, MO). The primary antibodies used were: CK19 (Dako Corp; 1:100), HNF4α (Santa Cruz; 1:50), HNF6 (Santa Cruz; 1:50), HNF1β (Santa Cruz; 1:100), and TGFβ1 (Santa Cruz; 1:200). Biotinylated secondary antibodies were obtained from Jackson Laboratories. Target retrieval solution was obtained from Dako Corp. The ABC kit and diaminobenzidine (DAB) kit were from Vector Laboratories. Animals DPPIV-positive Fisher 344 male rats were obtained from Charles River Laboratories (Frederick, MD). DPPIV-negative Fisher 344 male rats were obtained from Harlan (Indianapolis, IN). The animal husbandry and all procedures performed on the rats employed for these studies were approved under IACUC protocol #0507596B-2 and conducted according to National Institutes of Health guidelines. Generation of rats with chimeric livers DPPIV chimeric livers were generated as previously described [ 3 , 21 ]. Briefly, male DPPIV-negative Fisher rats (200 g) were given two intraperitoneal injections of retrorsine (30 mg/kg), dissolved in water. The injections were given 15 days apart. A month after the last injection, the rats were subjected to PHx. During the PHx operation, the rats were also injected directly into the portal circulation (via a peripheral branch of the superior mesenteric vein) with 3.5 million hepatocytes isolated from DPPIV-positive male Fisher rats (200 g). The animals were left to recover and were not subjected to any other experimental procedures for the next 3 months. Assessment of the degree of engraftment was made under direct microscopic observation of sections from the chimeric livers, stained for DPPIV. The percentage of DPPIV-positive and -negative cells was estimated at 40× magnification in optic fields that included at least one portal triad and one central vein.
The percentage of DPPIV-positive cells varied from one lobule to another. The range of engraftment per optic field (as defined above) within each animal varied from 30% to 60%. Treatment with DAPM The biliary toxicant DAPM (50 mg/kg, dissolved in DMSO at a concentration of 50 mg/ml) was injected intraperitoneally into either DPPIV chimeric or DPPIV-positive male Fisher 344 rats every 2 days. In the pilot study, bile duct injury after a single injection of DAPM was at its peak at 24 and 48 h after treatment (Figure 1A, B ) while PCNA analysis indicated that the biliary cells begin cell division at 48 h (Figure 1C ). Based on these findings, we chose to administer DAPM (50 mg/kg, ip) every 2 days. This treatment was continued for a total of 3 injections and the rats were sacrificed at day 30 after the last DAPM injection (Figure 2A ). The livers were harvested and utilized for DPPIV histochemistry. Two additional groups of normal rats were given either an intraperitoneal injection of 50 mg DAPM/kg every two days for 3 doses (DAPM × 3) or a single DAPM injection (50 mg DAPM/kg) two days before bile duct ligation (DAPM + BDL). At the end of 30 days after the last treatment, the rats were sacrificed. Blood was collected for serum analysis. Livers were harvested for further analysis. Bile duct ligation Bile duct ligation was performed as previously described [ 3 ]. Briefly, the animals were subjected to a mid-abdominal incision 3 cm long, under general anesthesia. The common bile duct was ligated in two adjacent positions approximately 1 cm from the porta hepatis. The duct was then severed by incision between the two sites of ligation. Immunohistochemistry Paraffin-embedded liver sections (4 μm thick) were used for immunohistochemical staining. For HNF4α and HNF6 staining, antigen retrieval was achieved by steaming the slides for 60 minutes in preheated target retrieval solution (Dako Corporation).
For CK19 staining the slides were steamed for 20 minutes in high pH target retrieval solution (Dako Corporation) before blocking. For TGFβ1 staining no antigen retrieval was necessary. The tissue sections were blocked in blue blocker for 20 minutes followed by incubation with pertinent primary antibody overnight at 4°C. The primary antibody was then linked to biotinylated secondary antibody followed by routine avidin-biotin complex method. Diaminobenzidine was used as the chromogen, which resulted in a brown reaction product.
Results Histological and functional bile duct damage after DAPM administration Biliary toxicity induced by a single administration of DAPM (50 mg/kg, ip) was monitored by elevations of serum bilirubin and by histopathological observations over a time course. Maximum biliary injury in terms of serum bilirubin was apparent by 24 h, and serum bilirubin remained consistently high until 48 h after DAPM (Figure 1A ). By day 7, the rats appeared to recover from toxicity, as indicated by regressing serum bilirubin levels (Figure 1A ). Histopathological observations revealed biliary cell necrosis as early as 12 h after DAPM. Necrosis was accompanied by ductular swelling and inflammation. Some damage to the hepatocytes was also observed in the form of bile infarcts. However, serum ALT elevations were minimal, suggesting that hepatocyte injury by DAPM was secondary (Additional File 1 , Figure S1). Based on the quantitative analysis, 70% of bile ducts were injured at 24 h after DAPM. At 48 h, the bile ducts appeared to be repairing from injury (Figure 1B ). The PCNA analysis indicated that the biliary cells begin cell division at 48 h and continue until day 7 (Figure 1C ). Based on these findings, we chose to administer DAPM (50 mg/kg, ip) every 2 days for a total of 3 doses in order to inflict repeated biliary injury while simultaneously impairing the biliary cells' ability to regenerate. It should be noted that this is the same dose of DAPM that was used in our previous study using the DAPM + BDL injury model [ 1 ]. Appearance of DPPIV-positive bile ducts after repeated administration of DAPM The DPPIV chimeric rats were injected with DAPM on day 0, day 2, and day 4 (Figure 2A ). On day 30 after the last injection of DAPM the rats were sacrificed and liver sections from various lobes were examined for DPPIV positivity. Before DAPM administration, there was 40%-50% engraftment of the DPPIV-positive hepatocytes, as reported before, and none of the biliary cells were DPPIV-positive (Figure 2B ).
After repeated DAPM administration, ~20% of the bile ducts turned DPPIV-positive, indicating that they were derived from DPPIV-positive hepatocytes (Figure 2C ). Periportal hepatocyte expression of CK19 CK19 was expressed only in BEC in the normal liver (Figure 3A ). However, after the DAPM treatment protocol, selected periportal hepatocytes were also strongly positive for CK19 in addition to the BEC (Figure 3B and 3C ). Periportal hepatocytic CK19 staining was not uniform across the liver lobule. These findings indicate that only the periportal hepatocytes in the proximity of the affected biliary cells offer a pool of facultative stem cells capable of transdifferentiation to biliary cells. Hepatocyte-associated transcription factor HNF4α expression in newly formed biliary ductules Figure 4 depicts HNF4α (Figure 4A, B , and 4C ) and CK19 (Figure 4D, E , and 4F ) staining on serial liver sections. In the normal rat liver, nuclear HNF4α expression is observed only in the hepatocytes (Figure 4A ). However, the biliary ductules undergoing repair after repeated DAPM administration or DAPM + BDL showed incorporation of cells resembling hepatocyte morphology that also stained positive for HNF4α (Figure 4B and 4C , respectively). In Figure 4C and 4F there is a panel of ductules in which only some of the cells in a duct are HNF4α-positive and only some are CK19-positive (with overlap between some of the cells). Appearance of the biliary-specific transcription factor HNF1β in hepatocytes intercalated within biliary ductules HNF1β staining is observed only in the biliary nuclei of the normal rat liver (Figure 5A ), not in the hepatocytes. After DAPM + BDL injury (Figure 5B ) and repeated DAPM toxicity (Figure 5C ), many cells that morphologically appear to be hepatocytes and coexpress HNF4α are seen intercalated within biliary ductules, indicating their hepatocytic origin. Many (but not all) of these cells stain positive for HNF1β (Figure 5B and 5C ).
Notice that the ductules marked with a thin arrow, shown as an example, show HNF1β staining but are HNF4α-negative (Figure 5C and 5D ). The cells coexpressing HNF1β and HNF4α appear bigger than normal liver biliary cells, a characteristic of ductular reaction. Transforming growth factor beta 1 (TGFβ1) induction in the periductular region with no change in HNF6 staining Compared to controls (Figure 6A ), TGFβ1 induction was observed in the region surrounding the biliary ductules after DAPM treatment in both models under study (Figure 6B and 6C ). TGFβ1 Western blot data indicated an increasing trend in both treatment protocols compared to the controls (Figure 6D ), although the DAPM + BDL treatment did not differ significantly from the normal rat liver (NRL) by densitometry. In the control liver (NRL), nuclear HNF6 staining was noticed in hepatocytes and biliary cells (Additional File 2 , Figure S2, A). However, after DAPM toxicity, no significant change in HNF6 expression was observed (Additional File 2 , Figure S2, B and C).
Discussion Mature hepatocytes and BECs contribute to normal cell turnover and respond to various types of liver injury through self-renewal [ 22 , 23 ]. However, when their own capacity to proliferate is compromised, both hepatocytes and BECs can act as facultative stem cells for each other and compensate for the lost liver tissue mass [ 1 , 23 , 24 ]. The presence of full-time uncommitted stem cells in the liver has long been debated. Studies have shown that under compromised hepatocyte proliferation, biliary cells transdifferentiate into mature hepatocytes via the "oval cell" (also known as the progenitor cell) pathway [ 25 , 26 ]. When biliary cells are destroyed by DAPM under compromised hepatocyte proliferation, oval cells do not emerge, indicating that biliary cells are the primary source of oval cells [ 27 , 28 ]. Supporting this notion, hepatocyte-associated transcription factor expression by bile duct epithelium and emerging oval cells is observed in experimental oval cell activation induced by the 2-acetylaminofluorene (2AAF) + partial hepatectomy (PHx) model [ 29 ] and also in cirrhotic human liver [ 9 , 26 ]. Previously, we demonstrated that hepatocytes can also transdifferentiate into biliary cells under compromised biliary proliferation [ 1 - 4 , 9 ]. Periportal hepatocytes can transform into BEC when the latter are destroyed by DAPM and proliferation of the biliary epithelium is triggered by bile duct ligation. Under this compromised biliary proliferation, biliary ducts still appeared, and the newly emerging ductules carried the hepatocyte marker DPPIV in the chimeric liver [ 1 ]. These findings demonstrate that hepatocytes serve as facultative stem cells for the biliary epithelium upon need. In the present study, a novel rodent model of repeated biliary injury was established by administering repeated low doses of DAPM to rats.
Using this novel repeated DAPM treatment regimen, we demonstrate that hepatocytes also undergo transdifferentiation into biliary epithelium during progressive biliary damage. DAPM produces specific injury to the biliary cells because its toxic metabolites are excreted in bile [ 10 , 11 ]. In the DPPIV chimeric rats, bile ducts do not express DPPIV before DAPM administration; however, after repeated DAPM treatment ~20% of the biliary ductules express DPPIV, indicating that they are derived from hepatocytes. In the chimeric liver, 50% of the hepatocytes are derived from the DPPIV-positive donor liver. It is therefore possible that DPPIV-negative hepatocytes also transform into BEC but cannot be detected owing to the lack of the DPPIV tag. Under this assumption, ~40-50% of the ducts are derived by transdifferentiation (~20% from DPPIV-positive hepatocytes plus a similar ~20% from DPPIV-negative hepatocytes). The remaining ducts did not require repair because they were not injured, while part of the restoration may be due to regeneration by biliary cells that escaped the repeated DAPM injury. After a single DAPM injection, ~70% of the ducts were injured. DPPIV is expressed only in the hepatocytes of the chimeric rats before DAPM treatment, which provides strong evidence that the DPPIV-positive biliary cells originated from hepatocytes after DAPM treatment. The longest time point studied here is 30 days after the DAPM treatment, when biliary restoration is still underway. It is possible that the biliary cells derived from hepatocytes will suspend DPPIV expression as the restoration process comes to an end. It could be argued that biliary cells from the donor liver are the source of the new biliary cells observed in the chimeric liver. However, after collagenase perfusion of the donor liver, only a small (<5%) admixture of nonparenchymal cells, including biliary, stellate, endothelial, and other cell types, was noticed, as in routine hepatocyte preparations.
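The back-of-envelope extrapolation in this paragraph can be written out explicitly. This is a sketch of the reasoning only, under the text's own assumption that untagged (DPPIV-negative) hepatocytes transdifferentiate at the same per-cell rate as tagged ones:

```python
engraftment = 0.50        # ~50% of hepatocytes carry the DPPIV tag
dppiv_pos_ducts = 0.20    # ~20% of ducts observed to be DPPIV-positive

# If DPPIV-negative hepatocytes contribute at the same per-cell rate,
# the total hepatocyte-derived fraction scales by 1 / engraftment:
total_transdifferentiated = dppiv_pos_ducts / engraftment
print(total_transdifferentiated)   # 0.4, i.e. ~40% of ducts
```

With engraftment varying from 30% to 60% per optic field, the same arithmetic spans roughly the 40-50% range quoted in the text.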
In addition, the chimeric rats are treated with DAPM, which targets biliary cells specifically. Therefore, it is unlikely that the newly appearing biliary cells originate from the very small, if any, biliary contamination engrafted in the chimeric liver. In the chimeric rats, after a thorough examination, not a single DPPIV-positive bile duct epithelial cell was observed in a total of 45 portal triads examined in randomly selected sections. DPPIV-positive biliary cells are observed in the chimeric liver only after the DAPM treatment regimen. During liver development, both hepatocytes and BECs differentiate from hepatoblasts. This lineage-specific differentiation is regulated by cell-specific gene expression, in turn controlled primarily by distinct sets of transcription factors [ 30 , 31 ]. Altered patterns of cell specificity in the expression of these transcription factors between hepatocytes and BECs have been observed under severe hepatic necrosis and chronic biliary disease in human patients [ 9 , 26 ] as well as under the experimental conditions of 2AAF + PHx treatment [ 29 ]. In the present study, expression of the hepatocyte-specific transcription factor HNF4α was observed in the newly repairing ductules after DAPM + BDL and repeated DAPM injury. The newly repaired biliary ductules contained hepatocyte-like cells expressing HNF4α. It is interesting to note that the level of HNF4α expression in the repairing ductular cells was lower than in normal hepatocytes, suggesting its gradual loss during reprogramming towards the biliary phenotype. Consistent with that notion, HNF4α-expressing ductular cells also expressed HNF1β, a BEC-specific transcription factor. Specific inactivation of the Hnf1β gene in hepatocytes and bile duct cells using the Cre/loxP system results in abnormalities of the gallbladder and intrahepatic bile ducts, suggesting an essential function of Hnf1β in bile duct morphogenesis [ 17 ].
Gain of HNF1β expression by hepatocytes normally expressing HNF4α indicates a switch to the biliary specification of these cells. To examine whether the mechanisms that govern the differentiation of hepatoblasts into BECs are recapitulated during transdifferentiation of mature hepatocytes into BECs, expression of TGFβ1 and the Onecut factor HNF6 was assessed. During liver embryogenesis, a gradient of TGFβ signaling has been shown to control ductal plate hepatoblast differentiation [ 20 ]. High TGFβ1 signaling is observed near the portal vein and is considered responsible for differentiation of hepatoblasts into biliary cells. The Onecut transcription factor HNF6, which is not expressed in the immediate periportal hepatoblasts, inhibits TGFβ signaling in the parenchyma, allowing normal hepatocyte differentiation. In the present study, induction of TGFβ1 was observed in the hepatocytes in the area surrounding the repairing biliary ductules, reminiscent of the changes seen in embryonic development. However, HNF6 immunohistochemistry did not reveal significant changes after DAPM treatment in either of the models under study. TGFβ1 induction was also observed in in vitro hepatocyte organoid cultures undergoing biliary transdifferentiation [ 4 ]. Recently, TGFβ1-treated fetal hepatocytes were found to behave as liver progenitors and to gain expression of CK19 [ 24 ]. The data from our study suggest that TGFβ1 signaling can lead to transdifferentiation in the adult liver upon need without any change in HNF6 expression. It is possible that other transcription factors, such as OC-2, known to share target genes with HNF6 [ 32 ], may be responsible for the TGFβ1 increase in the periportal hepatocytes. The periportal hepatocytes expressed CK19 after DAPM challenge with or without BDL, pointing to the likely pool of hepatocytes capable of undergoing transdifferentiation.
These results are also consistent with our previous findings indicating that a subpopulation of periportal hepatocytes represents the progenitor pool from which biliary cells may emerge in situations of compromised biliary proliferation [ 1 ]. Taken together, the findings from this study indicate that hepatocytes constitute facultative stem cells for the biliary cells, capable of repairing liver histology when classic biliary regeneration fails. The findings also suggest that subpopulations of hepatocytes in the periportal region may have a higher tendency to function as facultative stem cells than other cells of their kind, even though they function as hepatocytes under normal circumstances. The exact molecular mechanisms that govern the interchange in expression of cell-specific HNFs remain to be elucidated. Our earlier study with hepatocyte organoid cultures points to a role for HGF and EGF in hepatobiliary transdifferentiation [ 4 ]. Via an AKT-independent PI3 kinase pathway, HGF and EGF promote hepatocyte-to-BEC transdifferentiation [ 4 ]. It is also known that Foxo transcription factors are regulated by the PI3 kinase/AKT pathway [ 33 ]. It is possible that similar signaling occurs through HGF and/or EGF via PI3 kinase, regulating expression of HNF transcription factors that in turn lead to transdifferentiation. Overall, understanding transdifferentiation between native hepatocytes and BECs may prove pivotal for cellular therapy against liver diseases.
Conclusions Under compromised biliary regeneration, transdifferentiation of hepatocytes into biliary cells provides a rescue mechanism. Periportal hepatocytes undergoing transdifferentiation gradually lose expression of the hepatocyte master regulator HNF4α and acquire HNF1β, which shifts the cellular profile towards the biliary lineage. An increase in TGFβ1 expression in the periportal region also appears to be important for the shift from the hepatocytic to the biliary cellular profile.
Background Under compromised biliary regeneration, transdifferentiation of hepatocytes into biliary epithelial cells (BEC) has previously been observed in rats upon exposure to the BEC-specific toxicant methylene dianiline (DAPM) followed by bile duct ligation (BDL), and in patients with chronic biliary liver disease. However, the mechanisms promoting such transdifferentiation are not fully understood. In the present study, acquisition of biliary-specific transcription factors by hepatocytes, leading to reprogramming towards a BEC-specific cellular profile, was investigated as a potential mechanism of transdifferentiation in two different models of compromised biliary regeneration in rats. Results In addition to the previously examined DAPM + BDL model, an experimental model resembling chronic biliary damage was established by repeated administration of DAPM. Hepatocyte-to-BEC transdifferentiation was tracked using dipeptidyl peptidase IV (DPPIV) chimeric rats that normally carry DPPIV only in hepatocytes. Following DAPM treatment, ~20% of the BEC population turned DPPIV-positive, indicating that they were derived from DPPIV-positive hepatocytes. New ductules emerging after DAPM + BDL and repeated DAPM exposure expressed the hepatocyte-associated transcription factor hepatocyte nuclear factor (HNF) 4α and the biliary-specific transcription factor HNF1β. In addition, periportal hepatocytes expressed the biliary marker CK19, suggesting periportal hepatocytes as a potential source of transdifferentiating cells. Although TGFβ1 was induced, there was no considerable reduction in periportal HNF6 expression, as observed during embryonic biliary development. Conclusions Taken together, these findings indicate that gradual loss of HNF4α and acquisition of HNF1β by hepatocytes, as well as an increase in TGFβ1 expression in the periportal region, appear to be the underlying mechanisms of hepatocyte-to-BEC transdifferentiation.
Competing interests The authors declare that they have no competing interests. Authors' contributions PL and WB conducted the animal studies, PL and AO performed the immunohistochemical stainings, PL and UA collected tissues and performed Western blotting, PL wrote the manuscript, UA reviewed the manuscript, GM designed the study, examined histological and immunohistochemical stainings, and reviewed the manuscript. All the authors have read and approved the final manuscript. Supplementary Material
CC BY
no
2022-01-12 15:21:36
Comp Hepatol. 2010 Dec 2; 9:9
oa_package/fe/e5/PMC3014870.tar.gz
PMC3014871
21122091
Background Health indicators are tools designed to measure the health status of people and the functioning of health services through the various factors that influence them (demographic, economic, social) [ 1 , 2 ]. They provide the basic information for system analysis and decision-making in health policy, planning and management. The area of mental health presents added difficulties for the development of a useful list of health indicators for a variety of reasons. Firstly, this is a complex area in which health, social, educational, and criminal justice services coexist, where the care teams are multidisciplinary, and in which an integral care focus should be adopted [ 3 ]. Secondly, there are no reliable biological indicators for either the disorders assessed or the results, which complicates epidemiological and outcome research. Thirdly, mental health was incorporated late into the general health system (in Spain from 1986); it suffers from under-financing and lacks the national databases that exist in other disciplines (e.g. oncology or AIDS) [ 4 ]. The instruments which compile indicators are rarely organised as a knowledge base and lack adequate semantic interoperability, as similar names may be used with different meanings and vice versa, even in indicator sets developed and used in the same country. Furthermore, there is no international consensus regarding basic indicators for the evaluation and follow-up of mental health systems, and multiple sources of information are available at the international, national, regional and local levels, including health administration registers and large databases, health surveys, health statistics, commissioned reports, and key contacts or demographic censuses. Although the available international instruments do provide a useful source of indicators (e.g.
WHO-AIMS [ 5 ] or the Mental Health Country Profile [ 6 ]), these listings have not been developed as knowledge bases, and their taxonomy and hierarchy have not been formalised in an explicit way. An indicator base allows indicators to be selected for specific uses in studies, projects and plan monitoring, as well as in specific services, programmes or target populations. In 2008, the Clinical Management Working Group of the Spanish Society of Psychiatry (known by its Spanish acronym GClin-SEP) started the development of a preliminary taxonomy and a related knowledge base of mental health indicators intended to facilitate a future standard indicator set that could be used for inter-regional comparison in Spain, related to the National Health System Mental Health Strategy [ 7 ], taking into account the challenges and problems previously described [ 8 , 9 ]. As a first step, a preliminary taxonomy and a related knowledge base for mental health indicators in Spain was planned. A taxonomy may be defined as a particular classification arranged in a hierarchical structure providing supra- and subtype relationships. Within the health care technology field, a health knowledge base is 'a system of storage, classification and presentation of relevant health information which includes databases, glossaries, articles, presentations and other documents regarding a specific health area or subject' [ 10 ]. This should assist the development of a list of basic indicators which would, in turn, facilitate informed evidence in mental health planning.
Methods This project aimed to develop a conceptual map and a knowledge base of mental health indicators suitable for mental health planning, permitting inter-regional comparison, follow-up and evaluation of the health systems that currently exist in Spain. A mixed qualitative method was followed, using frame analysis and nominal groups. Frame analysis is a broadly defined method of enumerating and defining ideas and themes within a larger topic that is particularly useful for formalising concepts [ 11 ]. Of the four components of frame analysis, we focused on "frame bridging," which manifested as collaborating with experts who are interested in the topic but do not commonly interact due to different training backgrounds or other reasons, and on "frame amplification," the clarifying and elaborating of a framework from which to think about the issue under discussion [ 12 ]. Frame analysis has previously been applied to a wide range of social and health-related topics, such as consensus-building in online special-interest advocacy groups and understanding the culture of nurse managers [ 13 ]. Two members of the core group, with backgrounds in mental health system research (LS-C) and in mental health geography and data management (JAS), searched the relevant literature in PubMed and Google Scholar using the key words: 1) "Mental Health", 2) "Care", "System", "Policy", "Planning" and 3) "Indicator(s)"; they also reviewed other available technical documents, such as lists of general health indicators relevant to mental health and mental health lists from international, European and national organisations. Various plans and health reports from the Autonomous Communities or regions in Spain, and lists developed by scientific associations, were also considered. As the aim of this project was to develop a taxonomy usable in Spain within the European context, indicator lists from the US were not included in the analysis.
The two researchers arranged this content according to key topics and prepared a framing document and a list of key areas and questions to be debated by the nominal groups. The nominal group technique helps to deal with ill-structured domains while allowing a more structured approach than focus groups, as well as the use of prior information and knowledge. Once ideas and related questions are listed, their relevance to the central problem can be discussed following a question posed by the facilitator; ideas can be reformulated and clustered into coherent groups. All members are encouraged to participate in the discussion following a sequential order, and every round is followed by a final debate [ 14 , 15 ]. In the health sector, nominal groups have previously been used to develop a preliminary taxonomy of health-related habits and lifestyle [ 16 ] and its integration into primary care [ 17 ]. An iterative process was followed to develop the preliminary taxonomy and the related knowledge base. In all, 14 experts in mental health service research and indicator analysis with very different backgrounds participated in two nominal groups: a core working group and an external group. The core working group comprised seven members: four psychiatrists with experience in the evaluation and management of services, one expert in data analysis (Knowledge Discovery from Data - KDD) [ 3 ], a health geographer, and an expert in health and social management in the field of mental health. The core group held three face-to-face meetings in 2009 and 2010, combined with three conference calls and periodic contact by e-mail. Additionally, a panel of experts from the Scientific Association PSICOST provided external support to this core group. This external panel had seven members, a coordinator (LS) and a moderator (JAS).
The panel also followed a nominal group methodology and comprised two psychiatrists (LS and JCG), one psychologist (CR) with experience in services evaluation, a public health expert in epidemiology (JA), an expert in health-indicator data analysis (CG), and a public administration manager with experience in mental health and disability (FA). For the development of this taxonomy, the model and terminology used in the International Classification of Functioning (ICF) [ 18 ] were adopted for defining health constructs, domains and dimensions. For the definition of entities, their hierarchy and type, we used a basic formal terminology: <it is a>, <it is comprised of>, <it is part of>. A conceptual map was drafted using a tree structure for coding and organising the indicators. This approach had been used previously for the description of resource indicators and the use of mental health services in Spain and in other European countries [ 19 ]. This diagram allows the organisation of indicators into classes (domains), subclasses (subdomains) and additional types. This structure allows the addition of new indicators, or the subdivision of previously defined indicators where necessary, without altering the hierarchical structure of the taxonomic system. Subsequently, the two reviewers developed a list of relevant databases, a wide-ranging list of mental health indicators, and a glossary. With respect to the database, and bearing in mind all the information available, the following question was formulated for the nominal expert panel: "Is this a relevant indicator for the evaluation of the mental health system in the various Autonomous Communities?". 'Relevant' was defined here as 'closely connected with the subject and valuable and useful to mental health planners and stakeholders', based on the definition provided by the Oxford Advanced Learner's Dictionary http://www.oxfordadvancedlearnersdictionary.com .
The responses were organised on a 4-level Likert scale according to relevance (none, doubtful, moderate and high). The results were reviewed by the members of the working group, and the information gathered was used to develop a definitive list, which was added to the preliminary knowledge base and can be seen at the Spanish Society of Psychiatry (SEP) website [ 20 ]. The external nominal panel provided an evaluation of the relevance of the various indicators, which was reviewed by the core working group.
Results Document basis Fourteen bases of relevant indicators were identified for the evaluation of mental health systems in Spain. These are shown in Table 1 . Preliminary taxonomy For the hierarchical organisation of the classes, a tree structure was used with four main branches corresponding to 'Context', 'Resources', 'Utilization' and 'Results'. Given the possibility of the taxonomy being used internationally, the decision was made to label them using their English initials (C: Context, R: Resources, U: Utilization, O: Outputs). The conceptual map is represented in Figure 1 . Table 2 details the indicators organised hierarchically into Domains, Subdomains, Types and Subtypes, along with their corresponding codes. The Mental Health System Context domain contains three subdomains: Generic Context is, in turn, comprised of eleven types, General Health Context of three types, and Mental Health Context of twelve types. The Mental Health Resources domain contains two subdomains: Mental Health Services with thirteen types and Human Resources (personnel/staff) with eight types. The Utilization domain has three subdomains: Activity with four types, Medication treatment , and Costs . Finally, the Results domain is comprised of four subdomains: Health Status (containing, in turn, three types), Mortality , Prevalence (with two types), and Quality . A detailed description of the typology of the mental health system indicators can be seen in the database at the Spanish Psychiatric Society website [ 20 ]. Knowledge base components The knowledge base developed by the working group consists of the list of the relevant indicator bases with their links, as well as a database of indicators and a glossary appendix. The mental health system indicator base is composed of 661 indicators organised according to the proposed taxonomy.
The definition of each indicator was developed using cards containing the name, the unit of measurement and calculation, the source, and availability for the geographical area. The evaluation of the relevance of the various indicators by the nominal panel, reviewed by the core working group, can be seen in Table 2 . This evaluation allowed identification, according to their relevance for the mental health system in Spain, of 200 high-relevance indicators, 159 of moderate relevance, 192 of doubtful relevance, and 110 of no relevance to the aim of this list.
Discussion The present work is framed in the context of informed evidence for health policy and planning [ 21 ]. The concept of informed evidence is replacing that of evidence-based care and highlights the need for quality registers and the greatest possible number of information sources available for decision-making in health policy, including local provision and organisation at different levels (micro, meso and macro) [ 22 ]. To our knowledge, this is the first preliminary taxonomy of mental health system indicators and its related knowledge base. Other preliminary taxonomies have recently been developed to formally organise other areas of knowledge, such as health indicators [ 23 ], patient safety and medical errors [ 24 ], and health-related habits [ 16 ]. This preliminary taxonomy does not attempt to develop a conceptual map completely different from what is currently used in the field, but rather to formally organise the available information and provide a hierarchical order using common terminology as far as possible. The definitions selected were also those most commonly accepted. The extent of the health indicator field is such that it hinders a complete review of the material, especially for a restricted group with a limited budget. This knowledge base is incomplete by nature and has several limitations. First, it is country-specific, and its generalisability and transferability outside Spain are limited. In any case, it is important to note that country-level information is very relevant for international health system research [ 22 ]. Furthermore, the heterogeneity of the Spanish mental health system makes it a unique case for studying different care models under quasi-universal or universal health care coverage. The existence of 17 different publicly funded mental health systems, each with its own policy and practices, may provide useful insight for many countries.
They range from a practically do-nothing approach until very recently in some regions, to the transformation of the old psychiatric hospitals, with complete separation of funding and provision, market competition, and high participation of the private sector working under agreements set by the public health system (e.g. Catalonia). They may have one single public system (comprising both funding and provision) without closure of psychiatric hospitals (e.g. Basque Country), or a public system with full deinstitutionalisation and closure of psychiatric hospitals (e.g. Andalusia) [ 25 ]. In addition, the conceptual map included in this preliminary taxonomy has been designed to facilitate the incorporation of new domains and subdomains as the system is expanded, so it may be refined when information from other sources is incorporated (e.g. user-oriented mental health report cards) or when it is used in other European countries. Second, this knowledge base is expert-oriented and excludes international indicator lists not developed or used in Spain, such as the Mental Health Country Profile [ 6 ], the NF-10 and its related instruments in the US [ 26 ], and the 'State Report Cards' by the National Alliance on Mental Illness (NAMI) [ 27 ]. As noted, this knowledge base should be complemented by user-oriented indicator lists based on concerns reported by consumers, which are not currently available in Spain. Third, there are great differences regarding the degree to which this information can be accessed. The majority of indicators are available on a national and regional scale, but they are limited in small health areas. The limitations of scale, periodicity and sources mean that some indicators cannot be selected despite their potential relevance. The sources of information for the calculation of the indicators are highly heterogeneous, with the institutes of statistics and information from health administrations and social welfare being the principal sources of data.
Furthermore, the reference year of these sources varies across the 17 regions or Autonomous Communities in Spain, and even within the same Autonomous Community. This is related to the fact that, after the devolution process started in 1986, the Spanish Health System actually comprises 17 different health systems with wide variation in mental health care organisation and policy [ 25 ]. Fourth, the extended list included important indicators that have not been incorporated into the 200-indicator list due to usability problems in the Spanish case. These comprise patient-reported outcomes, stigma and sensitisation, suicide prevention, prevention of depression, training, and human rights. To date, human rights have been specifically assessed in a single region (Asturias), and the results have not yet been published. In any case, a list of 200 indicators is too large to be practical for decision-making, even though other main lists and instruments contain a similar number of indicators (e.g. WHO-AIMS [ 5 ]). The OECD list comprises 12 indicators, which are included in the expanded GClin-SEP list; unfortunately, just one is currently collected in Spain [ 28 ]. GClin-SEP is conducting a Delphi panel on the relevance and usability of these indicators to produce a brief list of 50 indicators usable for comparing mental health systems across the 17 regions and for standard monitoring of the Spanish National Mental Health Strategy. This Delphi study will provide data on the feasibility and face validity of the indicators registered in this listing. In addition, there is scant information on the psychometric properties of indicators in the care system [ 2 ]. The development of a preliminary taxonomy is complementary to the psychometric analysis of the indicator set.
Health system indicators are very basic health technology tools, and hence their feasibility, consistency, validity, reliability, redundancy, sensitivity to change, level of generalisability, and impact should be evaluated following standard procedures [ 29 ]. The existing gap between the literature on the psychometric properties of indicators and their broad use in health service and health system research may be partly related to a lack of awareness among researchers, planners and funding agencies of the relevance of this topic and the need for additional funds in this field.
Conclusion This preliminary taxonomy and its related knowledge base should serve those embarking on a study of the Spanish Mental Health System, and it may also be valuable to researchers looking for selected indicator lists in specific areas within mental health system research in Spain. It may also be relevant as a contextual case for those analysing indicator lists in other countries, particularly in Europe. On the other hand, the preliminary taxonomy and its related conceptual map and hierarchy would require comparison with other related international initiatives and further analysis following a formal ontology approach [ 23 ]. These results should be tested in other European countries to improve mental health system indicators in this world region.
Background There are many sources of information for mental health indicators but we lack a comprehensive classification and hierarchy to improve their use in mental health planning. This study aims at developing a preliminary taxonomy and its related knowledge base of mental health indicators usable in Spain. Methods A qualitative method with two experts panels was used to develop a framing document, a preliminary taxonomy with a conceptual map of health indicators, and a knowledge base consisting of key documents, glossary and database of indicators with an evaluation of their relevance for Spain. Results A total of 661 indicators were identified and organised hierarchically in 4 domains (Context, Resources, Use and Results), 12 subdomains and 56 types. Among these the expert panels identified 200 indicators of relevance for the Spanish system. Conclusions The classification and hierarchical ordering of the mental health indicators, the evaluation according to their level of relevance and their incorporation into a knowledge base are crucial for the development of a basic list of indicators for use in mental health planning.
Competing interests The authors declare that they have no competing interests. Authors' contributions AB coordinated the project. LSC and JAS prepared the framing document, managed the nominal groups and wrote the draft. MM, MG, KG and MR participated in the core group and reviewed the draft and related documents. All authors read and approved the final manuscript.
Acknowledgements This work has been financed by the Spanish Psychiatry and Mental Health Foundation (FEPSM), the Preventive Activities and Health Promotion Research Network (RedIAPP), the Spanish Research Institute Carlos III and the European Regional Development Fund (ERDF) (PI08/90101). This project has been coordinated by the Spanish Psychiatric Society Clinical Management Working Group (Gclin-SEP) (Chair: Professor A. Bulbena) and the research association PSICOST. Other experts who participated in this GClin-SEP study where: José Almenara, Federico Alonso, Carlos García-Alonso, Juan Carlos García-Gutiérrez and Cristina Romero.
CC BY
no
2022-01-12 15:21:36
Int J Ment Health Syst. 2010 Dec 1; 4:29
oa_package/94/aa/PMC3014871.tar.gz
PMC3014872
21122161
Background In studies of the general population, the incidence of mental disorders among people with substance use disorders (SUD) varies according to catchment area and methodology. It is assumed that 30 - 40% of people with alcohol-related disorders, and 40 - 50% of people with other SUD, also have a psychiatric disorder [ 1 - 5 ]. The incidence of psychiatric disorders among individuals with SUD in treatment is even higher [ 6 , 7 ]. Psychiatric disorders have repeatedly been shown to influence treatment outcomes in different treatment-seeking SUD samples [ 8 - 10 ]. The psychiatric disorders most commonly associated with SUD are anxiety and depression [ 11 - 13 ]. Symptoms of anxiety and depression have been found to influence the course of treatment [ 10 ] and to predict relapse in SUD [ 14 - 16 ]. On the other hand, several studies have found that anxiety and depressive symptoms among SUD patients are often transient, representing toxic or withdrawal effects that resolve in response to abstinence or to entry into SUD treatment [ 17 - 21 ]. Longitudinal studies in representative population samples suggest that causal relationships can operate in various directions between SUD and symptoms of anxiety and depression [ 22 ]. Anxiety and depression could increase the likelihood of developing SUD; the development of SUD among those with anxiety and depressive symptoms could worsen their course; and symptoms of anxiety and depression could reflect substance-induced conditions [ 23 ]. Previous studies of SUD treatment have shown that patients who stay in treatment longer are more likely to achieve the best outcomes, regardless of the outcome measures used [ 20 , 24 ]. In screening for psychiatric disorders, the concept of mental distress is widely used. One screening instrument used in population studies is the HSCL-10 [ 25 ]. The HSCL-10 was developed from the original HSCL-90 [ 26 ], using two (i.e. anxiety and depression) of the nine original dimensions [ 27 ].
In the general population, it has been found that 11.4% of the population meets the criteria for further assessment and treatment of psychiatric disorders [ 25 ]. In one study, it was concluded that 50 - 60% of the "cases" identified by instruments like the HSCL-10 meet the criteria for a diagnosis of a psychiatric disorder [ 28 ]. Prior studies have described a close relation between substance use and mental distress [ 29 , 30 ]. Epidemiological surveys have consistently documented higher rates of anxiety and depression among women than men [ 31 , 32 ], and it is a common finding that women report a higher level of mental distress [ 33 ]. In a treatment-seeking sample of patients with SUD in Norway, women suffered from depression more often than men [ 6 ]. In a follow-up of the same sample six years after treatment, Bakken et al. [ 34 ] found that mental distress remained high at the time of follow-up, and that abstainers, especially female abstainers, had a significantly lower level of mental distress. This finding is consistent with other studies showing that women in SUD treatment generally report a higher level of mental distress [ 35 ]. In the general population, higher education is known to be a protective factor with respect to physical and mental health [ 36 , 37 ]. A Norwegian population study concluded that a higher level of education seemed to have a protective effect against anxiety and depression [ 38 ]. In Norway, more than 80% of the population hold an education beyond the 10-year compulsory school [ 39 ]. There is a consistent relationship between dropping out of high school and substance use [ 40 ]. In one study, it was found that drop-outs used substances at elevated levels compared with in-school peers, and the drop-outs were more likely to develop alcohol-related disorders [ 41 ]. In another study, drop-out from school in Norway was shown to be related to frequent alcohol intoxication [ 42 ].
Typically, those in lower socio-economic groups have worse health and higher mortality than those in higher socio-economic groups [ 43 , 44 ]. SUD and other psychiatric disorders are generally associated with a variety of psychosocial risk factors [ 45 - 48 ]. In a recent study [ 30 ], we examined predictors of mental distress at admission to in-patient SUD treatment. This study indicated that gender had a significant impact on the level of mental distress, as women scored higher on mental distress at admission to treatment. A more severe use of substances, as reported on the AUDIT [ 49 ] and the DUDIT [ 50 ], as well as having previously received psychiatric treatment, also predicted a higher level of mental distress. In this study, 83% of the patients scored above the established cut-off for the HSCL-10 at admission. The aim of the present study was to determine predictors of change in the level of mental distress among SUD patients in inpatient treatment. First, we examined to what extent mental distress changed during treatment. Second, we examined possible determinants of the change in mental distress.
Material and methods This is the first study on substance use, mental distress and treatment outcomes in Northern Norway including patients from several units. The project was based on a naturalistic design with measurements taken before treatment and at discharge. All patients admitted to the units and considered competent to consent during the period September 2007 to May 2009 were given written and oral information about the study by a research collaborator working in each unit. Material Data were collected from 164 patients admitted to one of the five inpatient treatment units for SUD within the catchment area of the University Hospital of Northern Norway. In the study period, 574 patients were admitted to the units. Patients who were considered not able to give an informed consent ( N = 21) or whose hospital stay was too short to be included ( N = 41) were not asked to participate. Of the patients considered relevant for the study ( N = 512), 296 patients (58%) agreed to participate and signed an informed consent. Of these, 273 patients filled out the questionnaire at admission, and of these 172 filled out the questionnaire at discharge. Patients ( N = 8) who had failed to complete the HSCL-10 in both of the questionnaires were excluded from the analyses. No patient's responses were extreme statistical outliers. The final sample thus consisted of 164 respondents (74% men, mean age 40, range 18 - 67 years). More than 90% of the sample were of Norwegian origin. As only two percent of patients from the catchment area were referred to outpatient treatment in 2005 [ 51 ], the inpatient group was highly representative of all patients. The units covered a population of 500 000 inhabitants in the counties of Nordland, Troms and Finnmark. Unit 1 offered specialized assessment and treatment of dual diagnoses and provided treatment up to six months ( N = 12). Unit 2 provided treatment up to 18 months according to a therapeutic community model ( N = 9).
Unit 3 was a detoxification unit that provided treatment up to six weeks ( N = 54). Units 4 and 5 provided general SUD inpatient treatment up to six months ( N = 43 and N = 48, respectively). All units treated both sexes, used a combination of group and individual therapy, and managed detoxification directly (Units 2 and 3) or in collaboration with a detoxification unit (Units 1, 4 and 5). Treatment was composed of a combination of network-based approaches and psychotherapeutic and pharmacological treatments, but these components were given different emphasis in the various units. Instruments Outcome Mental distress Mental distress was measured twice, before admission and at discharge, using a 10 item version of the Hopkins Symptom Check-List (HSCL-10) [ 25 ]. The HSCL-10 is a self-report questionnaire with a four-point Likert scale, ranging from 1 (not at all) to 4 (extremely). The HSCL-10 is based on the SCL-90-R [ 26 ], and is composed of two out of the original nine factors (anxiety and depression) [ 27 ]. A mean item score was calculated and used as an index of general distress severity. Missing data were replaced with the mean value if no more than two item scores were missing [ 52 ]. An average score of 1.85 or higher indicates a need for further assessment and possibly a need for psychiatric treatment [ 25 ]. Improvement in mental distress during treatment Improvement was measured as a reduction in the mean HSCL-10 scores from pre to post. Change scores were calculated by subtracting the mean pre score from the mean post score, so that negative values indicate improvement. As the purpose of using change scores in the present study was to identify predictors of change, rather than evaluating absolute change, the reliability of change scores may be estimated with Cronbach's method [ 53 ]. The internal consistency of the change scores was very good (α = .84), while the reliability of the pre- and post-test scores was comparable (α = .91).
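The HSCL-10 scoring rules described above (mean item score, mean imputation for up to two missing items, the 1.85 cut-off, and change scores) can be sketched as follows. This is a minimal illustration, not the study's actual scoring code; the function names are ours, and change is computed as post minus pre so that negative values indicate improvement, matching the signs reported in the Results.

```python
CUTOFF = 1.85  # scores at or above this level suggest further assessment

def hscl10_mean(item_scores, max_missing=2):
    """Mean item score on the HSCL-10 (10 items rated 1-4).

    Missing items (None) are imputed with the mean of the answered items,
    provided no more than `max_missing` items are missing; otherwise the
    score is treated as invalid.
    """
    answered = [s for s in item_scores if s is not None]
    n_missing = len(item_scores) - len(answered)
    if n_missing > max_missing:
        return None  # too much missing data to impute
    # Imputing each missing item with the mean of the answered items
    # leaves the overall mean equal to the mean of the answered items.
    return sum(answered) / len(answered)

def change_score(pre_items, post_items):
    """Change in mean HSCL-10 score; negative values indicate improvement."""
    pre, post = hscl10_mean(pre_items), hscl10_mean(post_items)
    if pre is None or post is None:
        return None
    return post - pre
```

For example, a patient scoring 3 on every item at admission and 2 on every item at discharge has a change score of -1.0, a clear improvement, while a questionnaire with three or more missing items yields no valid score.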
Predictors Alcohol and drug use Substance use was measured by the Alcohol Use Disorders Identification Test (AUDIT) [ 54 ] and the Drug Use Disorders Identification Test (DUDIT) [ 55 ]. The AUDIT is a widely used instrument measuring the severity of alcohol use over the past 12 months. It has 10 items with a scoring range from 0 to 40. The DUDIT is a parallel instrument to the AUDIT and is designed to identify persons with drug use problems over the past 12 months. It has 11 items with a scoring range from 0 to 44. Current use of specific substances was measured by the self-report Drug Use Disorders Identification Test - Extended (DUDIT-E) [ 56 ]. Responses to the DUDIT-E were coded 0 if a substance was used less than twice a month, and 1 if used twice a month or more often [ 57 ]. The number of substances used was calculated from the number of substances used twice a month or more as reported on the DUDIT-E, together with alcohol used twice a week or more as reported on the AUDIT. Variables related to admission and discharge The patients' socio-demographic and treatment histories were assessed using the Norwegian National Client Assessment Form [ 58 ]. Variables in this form include age, sex, occupation, housing, and previous treatment. This form is routinely completed for all patients admitted to SUD treatment in Norway. In addition, the patients' clinicians provided extended information on the patients' socio-demographic history and diagnostic assessment through a form developed especially for this study. The patients stayed in treatment for an average of 56 days (range 3 - 396). The variable defining the treatment setting was dichotomized into the detoxification unit ( N = 54) and all other units ( N = 110). The kind of treatment offered to the patients was registered on a form listing 18 possible treatments, e.g. individual treatment by a clinical psychologist, psychotropic medication, group treatment and family counseling.
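The frequency coding and substance count described above can be sketched as follows. The helper names are illustrative and not part of the instruments themselves; the thresholds (twice a month for DUDIT-E substances, twice a week for alcohol) are taken from the text.

```python
def code_dudit_e(times_per_month):
    """DUDIT-E frequency coding: 0 if a substance is used less than
    twice a month, 1 if used twice a month or more often."""
    return 1 if times_per_month >= 2 else 0

def n_substances(dudit_e_times_per_month, alcohol_times_per_week):
    """Number of substances used regularly: DUDIT-E substances used
    twice a month or more, plus alcohol if used twice a week or more
    (as reported on the AUDIT)."""
    n = sum(code_dudit_e(f) for f in dudit_e_times_per_month)
    if alcohol_times_per_week >= 2:
        n += 1
    return n
```

A patient reporting monthly frequencies of [4, 1, 0, 8] on four DUDIT-E substances and drinking alcohol three times a week would thus count as using three substances regularly.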
Procedures Shortly after consenting to participate, patients responded to the first questionnaire. Before discharge, patients responded to a second questionnaire including corresponding questions, as well as questions about treatment satisfaction. Patients were paid compensation in the form of a cinema ticket or two lottery tickets (worth $8). The study was approved by the Regional Committee for Medical Research Ethics (P REK Nord 12/2006) and the Norwegian Social Science Data Services (NSD). Statistical Methods All analyses were conducted using SPSS 16.0. Internal consistency of test scores was assessed with Cronbach's alpha [ 53 ]. A hierarchical regression analysis was conducted to identify predictors of HSCL-10 change scores [ 59 ]. The predictors were entered into the regression model in three steps: 1) demographic variables (age, gender, employment, marital status, living conditions, and education), 2) substance use (scores on the AUDIT and DUDIT, number of substances used), and 3) treatment variables (previous substance use or psychiatric treatment, number of days in treatment, treatment in the detoxification unit (Unit 3), psychotropic medical treatment, and treatment by a clinical psychologist). As we did not have a clear theory or other empirical studies done in Norway to guide the selection of variables within each cluster (block) of predictors, a stepwise procedure was used. Regression diagnostics were performed to test for collinearity, normality, outliers, and leverage. Effect size statistics were reported as Cohen's d (paired t -tests). According to Cohen [ 60 ], effect sizes of .2, .5 and .8 represent weak, moderate and strong effects, respectively.
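The blockwise logic of hierarchical regression (the incremental variance ΔR² explained by each block of predictors) and Cohen's d for paired scores can be sketched on simulated data. This is an illustration only: the variable names, block contents, and simulated effect sizes are ours and not the study's data, and the study itself used SPSS rather than Python.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def cohens_d_paired(pre, post):
    """Cohen's d for paired scores: mean difference / SD of the differences."""
    diffs = np.asarray(post, float) - np.asarray(pre, float)
    return diffs.mean() / diffs.std(ddof=1)

rng = np.random.default_rng(0)
n = 164                                     # sample size matching the study
demographics = rng.normal(size=(n, 2))      # block 1, e.g. gender, education
substance_use = rng.normal(size=(n, 2))     # block 2, e.g. AUDIT, DUDIT scores
y = 0.4 * demographics[:, 0] + 0.5 * substance_use[:, 0] + rng.normal(size=n)

r2_block1 = r_squared(demographics, y)
r2_block2 = r_squared(np.column_stack([demographics, substance_use]), y)
delta_r2 = r2_block2 - r2_block1  # additional variance explained by block 2
```

Because the blocks are nested, ΔR² is never negative; in an actual analysis, an F-change test (as reported in the Results) is used to judge whether each block's increment is statistically significant.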
Results The most frequently occurring substance use problems, according to diagnoses reported by the clinicians, were alcohol dependence (60%), opiate dependence (29%), dependence on hypnotics (20%), and amphetamine dependence (19%). Concurrent polydrug use was common, as 64% reported using two or more substances, and 42% three or more substances. Reduction in Mental Distress On average, the participating patients reported a high mean score of mental distress at admission ( M = 2.54, SD = .75), which decreased during treatment, being significantly lower ( M = 1.86, SD = .67) at discharge ( t 163 = 13.24, p < 0.001). The statistical effect size of this reduction was very strong ( d = 1.14). The proportion of patients scoring above the established cut-off level of 1.85 on the HSCL-10 fell from 82% at admission to 44% at discharge. Predictors of change in Mental Distress The mean and the standard deviation of the mental distress change scores were -.68 and .66, respectively. Among the 164 patients, 141 reported a reduction in distress (largest improvement: -2.80), while 4 patients reported no change and 19 patients reported a negative change, experiencing more mental distress at discharge (largest deterioration: 1.30). In order to identify which factors were related to improvement during treatment, a hierarchical regression analysis was conducted on the change scores in mental distress. The standardized regression coefficients are presented in Table 1 . Demographic variables were entered into the first step. Two of the six variables contributed significantly. Education explained 3.1% of the variance ( Adj R 2 ) ( F 1, 162 = 6.21, p = 0.014), indicating that having no education beyond 10 year compulsory school was related to a larger reduction in HSCL-10 scores. Gender explained an additional 2.6% of the variance ( F- change 1, 161 = 4.48, p = 0.036), indicating that women experienced a larger reduction in HSCL-10 scores.
In the second step, three substance use variables were entered, and two of these contributed significantly. Variation in the AUDIT scores explained an additional 3.0% of the variance ( F -change 1, 160 = 5.47, p = 0.021), and variation in the DUDIT scores explained an additional 4.3% of the variance ( F- change 1, 159 = 7.75, p = 0.006), indicating that a more severe use of alcohol and other substances was related to a greater reduction in HSCL-10 scores. In the third step, none of the variables related to type of treatment contributed significantly. The final regression coefficient parameters with HSCL-10 change scores as the criterion variable are shown in Table 1 .
Discussion The present study demonstrated that mental distress changed significantly during treatment for patients admitted to inpatient SUD treatment. A lower level of education, being female, and having a more severe use of substances all predicted a greater change in mental distress during treatment. Being female and having a more severe use of substances were also connected to a higher level of mental distress at admission to treatment [ 30 ]. Even though the change in mental distress from admission to discharge was substantial in this sample, the level of mental distress was still high at discharge. Almost half of the sample scored above cut-off on the HSCL-10, as opposed to 11.4% in the general population in Norway [ 25 ]. Regression to the mean may in part explain some of the findings. It is well known that those with the highest scores (i.e., the most problems) also have the greatest potential for change. Nevertheless, change scores like the one used in this study have been shown to be highly reliable [ 61 ]. A more severe use of substances predicted a larger reduction in mental distress during treatment. This could mean that mental distress is connected to the use of substances, and that these symptoms decrease as the use of substances decreases and symptoms of intoxication and withdrawal vanish. This assumption is supported by previous studies, which have shown that psychiatric symptoms such as depression and anxiety change to a great extent after some time in treatment [ 17 , 21 ]. Moreover, some depressive symptoms resolve rapidly after brief periods of abstinence or entry into SUD treatment [ 18 - 20 ]. To some extent, symptoms of depression and anxiety among SUD patients can be seen as toxic or withdrawal symptoms, and can therefore be expected to vanish during treatment. Still, the proportion of patients with a mental distress score above cut-off at discharge is high - 44%.
These findings are consistent with previous studies of SUD treatment, which have shown that patients who stay in treatment longer are more likely to achieve the best outcomes, regardless of the outcome measure [ 20 , 24 ]. This finding could be an argument for offering treatment that extends beyond the withdrawal phase. Women reported a significantly larger reduction in mental distress during treatment than men. As we found in a prior study, there was also a significant difference in the level of mental distress between men and women at admission to treatment [ 30 ]. It is well known from population studies that women report a higher level of mental distress and more often qualify for diagnoses of anxiety and depression [ 32 , 33 ]. The finding from our study is consistent with prior results [ 6 , 34 ], suggesting that women experience a higher degree of mental distress and also a greater reduction in mental distress if abstinent. This result could also reflect the possibility that SUD treatment in our area is better adapted to the needs of women. A possibly surprising result in this study was that a lower level of education (i.e. 10 year compulsory school at the most) was associated with a greater change in mental distress during treatment. Population studies have found that those in lower socio-economic groups in general have worse health and higher mortality than those in higher socio-economic groups [ 43 ], and in the general population the prevalence of mental distress has been found to increase with decreasing social status [ 44 ]. A higher education level has been found to have a protective effect against anxiety and depression [ 38 ]. Education is known as a factor that impacts physical health, in that education may support positive lifestyle choices and the development of habits that maintain physical health over time [ 37 ]. A number of psychosocial factors have been shown to be related to the development of problematic substance use [ 46 ].
In our study, only the difference between no education beyond compulsory school and any further education was significant. Forty-three percent of the sample had no education beyond compulsory school. This is a higher percentage than in the general Norwegian population, where only 21% have no education beyond compulsory school [ 39 ]. An early onset of SUD is related to school drop-out [ 40 - 42 ]. The fact that patients with less education experienced significantly more reduction in mental distress than those with more education could reflect that developing a SUD causes a downward shift in socio-economic group [ 48 ], and in this sense implies a greater loss of function, a higher sense of loss, and a greater degree of stigma for people with more education than for those with less. Further research is necessary to understand this phenomenon, for instance by conducting qualitative interviews of patients, focusing on the content of treatment. Strengths and limitations The present study is subject to a number of limitations. The study sample was selected from five different units for inpatient SUD treatment. The units differed substantially - one unit was primarily concerned with detoxification, one unit focused on the assessment of dual diagnosis patients, and three units offered a more goal directed SUD treatment. There is, therefore, some heterogeneity within the sample. The participation rate was also relatively low. A proportion of the original participants did not complete the survey at discharge. A potential drawback of calculating change scores is the risk of a marked reduction in test score reliability, as measurement errors at two points in time are added together, increasing total measurement error. The consequence is a loss of statistical power.
However, substantially reduced reliability is not inevitable, as was demonstrated in a study using generalizability theory to estimate the absolute and relative reliability of change scores [ 61 ]. If the variance in change scores and the number of items are large enough, the reliability of change scores may be adequately high. A further limitation is that the study lacks an untreated control condition, although such a condition is extremely difficult to construct in this study setting. On the other hand, multisite, prospective studies like the present one can investigate treatment outcomes in existing services and under actual clinical circumstances, and thereby show a high external validity and allow for a generalization of the findings to clinical settings [ 20 ].
Conclusions Patients with SUD admitted to inpatient treatment reported a significant reduction in mental distress during treatment. A more severe use of substances, as well as being female, predicted both a higher level of mental distress at admission and a greater change in mental distress during treatment. Holding no education beyond compulsory school predicted only the reduction in mental distress during treatment. The toxic and withdrawal effects of substances, level of education, and gender probably explain most of the differences in change. Some of these changes may in part be explained by regression to the mean.
Background Substance users admitted to inpatient treatment experience a high level of mental distress. In this study we explored changes in mental distress during treatment. Methods Mental distress, as measured by the HSCL-10, was registered at admission and at discharge among 164 substance users in inpatient treatment in Northern Norway. Predictors of reduction in mental distress were examined utilizing hierarchical regression analysis. Results We found a significant reduction in mental distress in the sample, but the number of patients scoring above cut-off on the HSCL-10 at discharge was still much higher than in the general population. A more severe use of substances as measured by the AUDIT and the DUDIT, and being female, predicted a higher level of mental distress at admission to treatment as well as a greater reduction in mental distress during treatment. Holding no education beyond 10 year compulsory school predicted only the reduction in mental distress. Conclusions The toxic and withdrawal effects of substances, level of education, as well as gender, contributed to the differences in change in mental distress during treatment. Regression to the mean may in part explain some of the findings.
Competing interests The authors declare that they have no competing interests. Authors' contributions EH participated in designing the study, collecting the data, analysing and interpreting the data, and in drafting and revising the manuscript. VB participated in designing the study, interpreting the data, and drafting and revising the manuscript. OF participated in interpreting the data, and drafting and revising the manuscript. RW participated in designing the study, analysing and interpreting the data, and in drafting and revising the manuscript. All authors read and approved the final manuscript.
Acknowledgements We thank the participating patients and staff. The study was supported financially by the Northern Norway Regional Health Authority (Helse Nord RHF).
Int J Ment Health Syst. 2010 Dec 2; 4:30
Background In a recent comprehensive review [ 1 ], we suggested that tea and its bioactive components might reduce bone fracture risk by benefiting bone mineral density (BMD) and supporting osteoblastic activities while suppressing osteoclastic activities, possibly due to their antioxidant and/or anti-inflammatory functions. Among different types of tea, green tea polyphenols (GTP, an extract of green tea) have shown osteo-protective effects by decreasing oxidative stress [ 2 , 3 ], increasing the activity of antioxidant enzymes [ 2 ], and decreasing the expression of proinflammatory mediators in rodent models [ 3 ]. However, limited information is available on the protective effect of consumption of tea or its bioactive components (e.g., GTP) on bone health in postmenopausal women. On the other hand, Tai Chi (TC), a form of mind-body, moderate-intensity, aerobic and muscular fitness exercise, has also been shown to potentially benefit bone health [ 4 - 7 ]. However, there is limited information based on systematic study of TC's effect on bone health in postmenopausal women with low bone mass. Therefore, the long-term goal of the study is to investigate the effect of GTP and TC exercise on bone health in the targeted population. This paper focuses on the safety and the impact on quality of life associated with this combined intervention. Results of bone, inflammation and oxidative stress parameters will be reported in a separate paper. Legislation on the use of complementary and alternative medicine (e.g., herbal/dietary supplements) is not uniform, and is even lacking in many countries. In the US, green tea extract is labeled as a dietary supplement, which does not appear to require pre-clinical testing as long as its traditional use has not proven harmful under the specified conditions of use [ 8 ].
Although green tea has been a popular beverage for centuries, a recent systematic review by the United States Pharmacopeia (USP) of 216 case reports on green tea products revealed 34 reports concerning liver damage [ 9 ]. Among them, 27 reports were categorized as possible causality and 7 reports as probable causality. Based on this review, the USP Dietary Supplement Information Expert Committee determined that, when dietary supplement products containing green tea extract are used and formulated appropriately, it was unaware of significant safety issues that would prohibit monograph development, although a caution statement needs to be included in the labeling section [ 9 ]. On the other hand, based on published hepatotoxicity episodes, Mazzanti et al. [ 10 ] concluded that there can no longer be reasonable doubt that ingestion of concentrated extracts of green tea, and infusions of green tea itself, poses a real and growing risk to liver health. The hepatotoxicity is probably due to (-)-epigallocatechin gallate or its metabolites which, under particular conditions related to the patient's metabolism, can induce oxidative stress in the liver. In a few cases, toxicity related to concomitant medications could also be involved [ 10 ]. The above evidence suggests that it is important to assess safety issues when conducting a long-term clinical study involving green tea extract as a treatment. However, most published green tea clinical studies were either short-term (≤ 12 weeks) [ 11 - 13 ], had longer study periods but little or limited safety data related to liver function [ 14 , 15 ], or had relatively small sample sizes [ 11 - 15 ]. Detailed safety information is important because, despite the interest in clinical studies using green tea as a study agent, the lack of such information hinders research development.
The present work is the first GTP safety report on liver and kidney functions based on a larger sample size in a 24-week placebo-controlled, randomized clinical trial. Tai Chi has been investigated in many clinical studies, and is generally considered a safe intervention/treatment in populations with various health issues [ 16 ]. However, no study has evaluated the effect of TC in conjunction with GTP supplementation on liver and kidney function in any study population. It is not clear whether Tai Chi exercise would interact with GTP to attenuate green tea related toxicity in our study subjects. Such safety data are important to future clinical studies using GTP and/or Tai Chi as study treatments. Therefore, the objective of this paper is to evaluate the safety of 24 weeks of GTP supplementation combined with TC exercise in postmenopausal osteopenic women. In addition to safety, the effects of the treatment arms on quality of life (as assessed by SF-36 questionnaires) are also reported.
Methods Study participants Postmenopausal women were recruited primarily through flyers, local TV, radio, newspapers, municipal community centers and clinics to participate in this study. The complete study protocol has been reported in detail previously [ 17 ] and only a brief description is provided here. Inclusion criteria were (i) postmenopausal women (at least 2 years after menopause) with osteopenia (mean lumbar spine and/or hip bone mineral density (BMD) T-score between 1 and 2.5 standard deviations (SD) below the young normal sex-matched areal BMD of the reference database) [ 12 ], (ii) normal function of the thyroid (thyroid-stimulating hormone (TSH) > 0.3 and < 5.0 mU/L), liver (bilirubin ≤ 2.0 mg/dL, aspartate aminotransferase (AST)/alanine aminotransferase (ALT) < 3 × upper limit of normal), and kidney (serum creatinine (Crt) ≤ 2.0 mg/dL, blood urea nitrogen (BUN) < 1.5 times), (iii) serum alkaline phosphatase (ALP) (33 - 130 U/L), calcium (Ca) (8.6 - 10.2 mg/dL), and inorganic phosphorus (Pi) (2.5 - 4.5 mg/dL) within normal ranges, and (iv) serum 25-hydroxy vitamin D (25(OH)D) ≥ 20 ng/mL. Women were excluded if they (i) had a disease condition or were on medication known to affect bone metabolism, (ii) had a history of cancer except for treated superficial basal or squamous cell carcinoma of the skin, (iii) had an uncontrolled intercurrent illness or physical condition that would be a contraindication to exercise, (iv) had depression or cognitive impairment, or (v) were unwilling to accept randomization. Written informed consent was obtained from all the participants before enrollment. The study was approved by the Texas Tech University Health Sciences Center Institutional Review Board. Study design and intervention This was a 24-week, placebo-controlled, randomized intervention trial to investigate the effects of GTP and TC on bone parameters.
Participants were randomly assigned to one of the four treatment groups: ▪ Placebo group: medicinal starch 500 mg daily ▪ GTP group: GTP 500 mg daily ▪ Placebo + TC group: medicinal starch 500 mg daily and 24-move simplified Yang-style TC training (60 minutes per session, 3 sessions per week) ▪ GTP + TC group: GTP 500 mg daily and 24-move simplified Yang-style TC training (60 minutes per session, 3 sessions per week) Medicinal starch and GTP study agents were supplied by Zhejiang Yuxin Pharmaceutical Co., Ltd., China (US FDA IND number 77,470). The GTP study agent was 99.25% pure, with 46.5% epigallocatechin-3-gallate (EGCG), 21.25% epigallocatechin (EGC), 10% epicatechin (EC), 7.5% epicatechin-3-gallate (ECG), 9.5% gallocatechin gallate (GCG), and 4.5% catechin. The daily dose of GTP or placebo material was divided into two capsules (250 mg each). During the 24-week intervention, all participants were provided with 500 mg elemental calcium and 200 IU vitamin D (as cholecalciferol) daily. Randomization and blinding To ensure comparable distribution across treatment arms, eligible participants were stratified before randomization by a fixed randomization scheme based on age (≥ 65 or < 65 years old), history of green tea consumption, and history of mind-body exercise. Both the study participants and the investigators responsible for the day-to-day operation and data analyses were blinded to the GTP/placebo group status. Measurements Medical history, physical activity level, depression (mood), and cognitive impairment assessments were collected at the time of enrollment. The depression (mood) assessment was measured by the Yesavage self-rated Geriatric Depression Scale [ 18 ]. BMD was determined at baseline for screening purposes by dual energy X-ray absorptiometry (DEXA) (Norland Excel X-Ray Bone Densitometer).
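The GTP dosing and composition stated above can be checked arithmetically. This is a sketch of the stated numbers only, assuming the standard catechin abbreviations (EGC = epigallocatechin, ECG = epicatechin gallate).

```python
# Catechin composition of the GTP study agent, percent by weight (from the text)
composition = {
    "EGCG": 46.5,     # epigallocatechin-3-gallate
    "EGC": 21.25,     # epigallocatechin
    "EC": 10.0,       # epicatechin
    "ECG": 7.5,       # epicatechin-3-gallate
    "GCG": 9.5,       # gallocatechin gallate
    "catechin": 4.5,
}
total_purity = sum(composition.values())  # should match the stated 99.25% purity

daily_dose_mg = 500
capsule_mg = daily_dose_mg / 2            # two capsules of 250 mg each
egcg_per_day_mg = daily_dose_mg * composition["EGCG"] / 100
```

The six catechin percentages sum exactly to the stated 99.25% purity, and the 500 mg daily dose corresponds to about 232.5 mg of EGCG per day.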
Also at baseline, for screening purposes only, overnight fasting blood and urine samples were collected for the measurement of serum 25(OH)D and TSH concentrations by a certified diagnostic laboratory (Quest Diagnostics, Dallas, TX). Laboratory blood chemistry parameters, including ALP, BUN, bilirubin (Bil), AST, ALT, Ca, Pi, and Crt, were assessed in overnight fasting blood samples taken at baseline and every 4 weeks throughout the study period. All samples were processed and analyzed in a certified diagnostic laboratory (Quest Diagnostic Laboratory, Dallas, TX). General health status was measured with the Medical Outcomes Study 36-item short form Health Survey (SF-36, version 2) at baseline and at 12 and 24 weeks of the study. The SF-36 has been reported to have good validity, internal consistency, and reliability in the assessment of the physical and mental health status of subjects and their progression [ 19 , 20 ]. The SF-36 covers eight dimensions of health (physical function, role-physical, bodily pain, general health, vitality, social function, role-emotional, and mental health) in the conduct of daily activity [ 21 ]. Adverse event monitoring In the course of the 24-week clinical trial, adverse events associated with the study agents were identified through self-report by the participants and by monitoring liver enzyme activities, AST and ALT in particular, through blood analysis. Participants in the TC exercise groups (the Placebo + TC and GTP + TC groups) were also queried about any adverse events due to TC during TC training sessions. They were also encouraged to self-report any adverse events by telephone. All observed and self-reported adverse events, regardless of suspected causal relationship to the study treatments, were recorded on the adverse event form throughout the study. Compliance Adherence/compliance with GTP or placebo study agents was determined as the percentage of GTP or placebo capsules ingested throughout the study period.
Compliance with TC classes was assessed by the attendance record for each TC session. Statistical analysis For this longitudinal study, a model of repeated measurements with random effect error terms was used, with intention-to-treat analysis for missing data, if applicable. Statistical software SPSS 16.0 (Chicago, IL, USA) was employed to conduct the analyses, controlling for within-subject correlation. First, participant characteristics were compared to detect any differences among the four groups at baseline. Second, changes in the measurements between baseline and the follow-ups were analyzed. For between-group differences over time, a repeated measures ANOVA was conducted, controlling for within-subject correlation. The two treatment factors were GTP (vs. placebo) and TC (vs. no TC). Third, the characteristics of participants who dropped out were compared with those of the participants who stayed for the entire study period in order to detect potential biases.
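The 2 × 2 factorial logic behind the two treatment factors (GTP vs. placebo crossed with TC vs. no TC) can be sketched with cell means. The numbers below are hypothetical, invented purely for demonstration, and are not study results; the actual analysis was a repeated measures ANOVA in SPSS.

```python
# Hypothetical mean outcome change per cell of the 2x2 design (illustrative only)
cells = {
    ("placebo", "no_tc"): -0.10,
    ("gtp", "no_tc"): -0.25,
    ("placebo", "tc"): -0.20,
    ("gtp", "tc"): -0.45,
}

def factorial_effects(cells):
    """Main effects of GTP and TC, and their interaction
    (difference of differences), from a 2x2 table of cell means."""
    gtp = ((cells[("gtp", "tc")] + cells[("gtp", "no_tc")]) / 2
           - (cells[("placebo", "tc")] + cells[("placebo", "no_tc")]) / 2)
    tc = ((cells[("gtp", "tc")] + cells[("placebo", "tc")]) / 2
          - (cells[("gtp", "no_tc")] + cells[("placebo", "no_tc")]) / 2)
    interaction = ((cells[("gtp", "tc")] - cells[("placebo", "tc")])
                   - (cells[("gtp", "no_tc")] - cells[("placebo", "no_tc")]))
    return gtp, tc, interaction

gtp_effect, tc_effect, interaction = factorial_effects(cells)
```

With these made-up cell means, the GTP main effect is -0.20, the TC main effect is -0.15, and the interaction is -0.10; in the actual analysis, each of these contrasts is tested against within-subject variability over the repeated measurements.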
Results Participants A total of 1065 women were prescreened. Among them, 171 were qualified and randomized, and 150 completed the 24-week study (Figure 1 ). Seven (16%) participants in the Placebo arm, 8 (17%) in the GTP arm, 5 (12%) in the Placebo + TC arm, and 1 (3%) in the GTP + TC arm withdrew before the end of the study, due to an accidental fall (1 subject), relocation (2 subjects), time conflicts (6 subjects), loss to follow-up (5 subjects), and loss of interest (7 subjects). Baseline characteristics were similar among the treatment groups (Table 1 ). No statistically significant differences between the subjects who withdrew from the study and those who completed the study were observed in any parameter listed in Table 1 . All subjects were instructed to maintain their pre-existing physical activity, dietary habits, and medications, if any, throughout the study. Based on the results of the pill count, the compliance rate was 89% for both GTP and placebo capsules. The compliance rate for TC classes was 83%. Safety At baseline, there was no significant difference in any of the blood chemistry parameters among the treatment groups (Table 2 ). Based on the results of the ANOVA, the levels of serum AST and ALT (indicators of liver function) were not affected by either the GTP or the TC intervention during the 24-week study period (Table 2 ). Similarly, neither GTP supplementation nor TC exercise influenced serum BUN (Table 2 ). On the other hand, throughout the course of the 24-week intervention, there were significant decreasing trends in the levels of serum Bil, ALP, Crt, Ca, and Pi over time, with different magnitudes in each treatment arm. However, when the interaction between the time factor and the two treatment factors (GTP and TC) was analyzed, these parameters were not statistically different over time across the treatment arms (Table 2 ). Four participants reported side/adverse effects during the study.
One subject in the Placebo arm experienced nausea and diarrhea several times. One subject in the GTP arm had elevated AST and ALT levels, possibly due to concomitant medications for cold symptoms (Ibuprofen 400 mg daily for 9 days), cholesterol lowering (Lipitor 20 mg daily), and hypertension (Metoprolol 25 mg daily). After discontinuation of the medication for cold symptoms, this patient's serum AST and ALT fell back to the normal range. One subject in the Placebo + TC arm reported retinal bleeding on a non-exercise day, probably due to her uncontrolled high blood pressure and blood glucose, along with a family history of retinal bleeding. Another subject in the Placebo + TC arm reported a broken wrist on a non-exercise day, due to an accidental fall. These four reports were judged by the safety monitoring team as unlikely to be related to the study protocol. No adverse event due to TC was observed or reported in this study. There were only sporadic complaints about muscle soreness during the first two weeks. Quality of life Data demonstrating the effects of GTP and TC on quality of life, including all 8 domains, in postmenopausal osteopenic women are presented in Table 3 . At baseline, there was no significant difference in any domain of quality of life among the 4 treatment groups. Throughout the course of the 24-week intervention, there was no statistically significant change in any domain over time in any treatment group, except that scores for physical function decreased with time ( P < 0.001). However, when the interaction between time and the two treatment factors (GTP and TC) was taken into account, scores for physical function were not statistically different. Compared to those in the non-TC (Placebo and GTP) groups, subjects in the TC (Placebo + TC and GTP + TC) groups showed significant improvement in their scores for role-emotional ( P = 0.036) and mental health ( P = 0.003) after the 24-week intervention (Table 3 ).
There was no significant difference in other domains of quality of life, including role-physical, bodily pain, general health, vitality, and social function ( P > 0.05) (Table 3 ).
Discussion There is generally very little clinical information on the safety of long-term consumption of green tea extract supplements. The few published studies were either short-term or had small sample sizes, and most were not randomized controlled trials. This is the first placebo-controlled randomized study to evaluate the safety of long-term ingestion of green tea extract in postmenopausal women. This study demonstrated that supplementation of 500-mg GTP daily for 24 weeks did not cause any safety concern (Table 2 ) with regard to liver function (in terms of AST, ALT, Bil, and ALP levels) or kidney function (in terms of Crt and BUN levels). Considering that a typical commercial decaffeinated green tea bag contains approximately 80-100 mg green tea flavanols per serving [ 22 ], the GTP daily dose (500 mg with 99.25% purity) used in this study was approximately equivalent to the beverage prepared from 5-6 commercial decaffeinated tea bags. On the other hand, our previous animal study showed that GTP supplementation through 0.5% GTP in drinking water benefited bone remodeling in ovariectomized middle-aged rats [ 2 ]. The dose of GTP consumed by rats in that study was comparable to the dosage employed in the present study. GTP dosages similar to ours have been adopted in study populations with different health issues. However, the study periods were generally short (up to 12 weeks), with the following two exceptions. Matsuyama et al. [ 14 ] reported that 24 weeks of ingestion of a beverage containing catechin (576 mg daily) ameliorated serious obesity and cardiovascular disease risk factors without raising any safety concerns in obese Japanese children (aged 6-16 years). Janjua et al.
[ 15 ] reported that GTP supplementation (500 mg with 70% catechin daily) for two years did not demonstrate a significant benefit superior to placebo in improving clinical or histological photoaging parameters of women's skin (aged 25 to 75 years). However, none of these studies investigated GTP's safety in terms of possible liver and kidney damage through monthly blood tests. Further, the sample sizes of these published studies were small. In this study, we observed decreasing trends in the levels of serum Bil, ALP, Crt, Ca, and Pi over the study period (Table 2 ). However, such trends disappeared when the interaction between the time factor and the two treatment factors (GTP and TC) was analyzed, suggesting possible adaptation of the body to the intervention stimuli over time. In the present study, the four adverse events observed in different treatment arms were judged as unlikely to be related to the study protocol. Previous reports of adverse events with green tea extract supplementation, including acute liver failure, in a few isolated case reports [ 23 - 26 ], in controlled human intervention trials [ 27 , 28 ], and in epidemiological studies suggested that possible medication contamination and other unknown factors may have contributed to hepatotoxicity [ 29 ]. Hepatotoxicity might also be due to unusual dosing protocols, such as fasting, or to genetic variation (single nucleotide polymorphisms) in phase I and phase II enzymes in some affected individuals [ 30 , 31 ]. No adverse event attributed to TC was observed or reported in this study. This is in agreement with previous studies reported by us and others [ 16 ]. TC, featuring gentle, slow and flowing movements, has been considered a safe exercise with very low risk of injury. As expected, TC did not influence any parameters related to liver and kidney function, except for a decreasing trend of serum Pi with time, which was no longer significant when the interaction between time and TC was considered (Table 2 ).
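As a quick check, the tea-bag equivalence cited in the Discussion (500 mg GTP at 99.25% purity vs. 80-100 mg flavanols per decaffeinated tea bag [ 22 ]) works out as follows:

```python
dose_mg = 500 * 0.9925  # daily GTP dose, adjusted for 99.25% purity
per_bag = (80, 100)     # mg flavanols per decaffeinated tea bag, per [22]
bags = (dose_mg / per_bag[1], dose_mg / per_bag[0])
print(f"{bags[0]:.1f} to {bags[1]:.1f} tea bags")  # 5.0 to 6.2 tea bags
```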
In addition, there was no interaction between GTP supplementation and TC exercise on liver and kidney function in the present study. The present results show that 24 weeks of TC exercise confers beneficial effects on postmenopausal women in terms of improving their role-emotional and mental health scores (Table 3 ). The favorable profile of TC on mental health in the present study is consistent with those reported by Ko et al. [ 32 ] in healthy women, and by Abbott et al. [ 33 ] in patients with tension headaches. The positive impact of TC on the role-emotional domain also agrees with findings by Abbott et al. [ 33 ]. On the other hand, when GTP treatment was included, the interaction among time, GTP and TC was not significant ( P > 0.05) in the domain of either role-emotional or mental health. Although time × TC reached statistical significance, time × GTP did not, and the time × GTP × TC interaction was therefore not significant. This is the first study investigating the effect of GTP supplementation on quality of life, and the result showed no effect. Similarly, a double-blind, placebo-controlled intervention found no evidence that selenium supplementation benefited quality of life in apparently healthy elderly people (aged 60-74) [ 34 ]. Another study found that vitamin E intake did not change quality of life in patients with amyotrophic lateral sclerosis [ 35 ]. Although all these supplements (GTP, selenium, vitamin E) are considered to be functional in protecting cells from oxidative stress, these published studies, along with the present study, seem to suggest no benefit of these supplements for quality of life.
Conclusion Supplementation of 500-mg GTP daily to postmenopausal osteopenic women for 24 weeks did not cause any adverse effects on liver and kidney function, as determined by blood test parameters, and had no influence on quality of life (as assessed by SF-36 questionnaires). TC exercise for 24 weeks (3 hr/wk) significantly improved quality of life in terms of role-emotional and mental health in these subjects. Based on our findings, GTP at a dose of 500 mg per day and/or TC exercise at 3 hr/week for 24 weeks appear to be safe in postmenopausal osteopenic women.
Background Evidence suggests that both green tea polyphenols (GTP) and Tai Chi (TC) exercise may benefit bone health in osteopenic women. However, their safety in this population has never been systematically investigated. In particular, there have been hepatotoxicity concerns related to green tea extract. This study was designed to evaluate the safety of 24 weeks of GTP supplementation combined with TC exercise in postmenopausal osteopenic women, along with effects on quality of life in this population. Methods 171 postmenopausal women with osteopenia were randomly assigned to 4 treatment arms for 24 weeks: (1) Placebo (500 mg starch/day), (2) GTP (500 mg GTP/day), (3) Placebo + TC (placebo plus TC training at 60 min/session, 3 sessions/week), and (4) GTP + TC (GTP plus TC training). Safety was examined by assessing liver enzymes (aspartate aminotransferase, alanine aminotransferase), alkaline phosphatase, and total bilirubin at baseline and every 4 weeks. Kidney function (urea nitrogen and creatinine), calcium, and inorganic phosphorus were also assessed at the same time points. Quality of life was evaluated with the SF-36 questionnaire at baseline, 12, and 24 weeks. A mixed model of repeated measures ANOVA was applied for analysis. Results 150 subjects completed the study (12% attrition rate). The compliance rates for study agents and TC exercise were 89% and 83%, respectively. Neither GTP supplementation nor TC exercise affected liver or kidney function parameters throughout the study. No adverse event due to study treatment was reported by the participants. TC exercise significantly improved the scores for role-emotional and mental health of subjects, while no effect on quality of life was observed due to GTP supplementation. Conclusions GTP at a dose of 500 mg/day and/or TC exercise at 3 hr/week for 24 weeks appear to be safe in postmenopausal osteopenic women, particularly in terms of liver and kidney functions.
TC exercise for 24 weeks (3 hr/wk) significantly improved quality of life in terms of role-emotional and mental health in these subjects. ClinicalTrials.gov identifier: NCT00625391.
Competing interests The authors declare that they have no competing interests. Authors' contributions CLS received the research funding, led the entire study, and drafted the manuscript. MCC participated in the design of this study protocol and recruitment, implemented the exercise program, and drafted the manuscript. BCP, JKY, and JSW contributed to the design of this study protocol. CKF participated in the study design and oversaw participants' medical affairs. YZ participated in the design of the study and performed the statistical analysis. SD coordinated the study including blood/urine sample collection. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6882/10/76/prepub
Acknowledgements We gratefully acknowledge the study participants; without them this study would not have been possible. We thank Mary J. Flores, Raul Y. Dagda, and Marisela Dagda for their assistance with data collection. This study was supported by the National Center for Complementary and Alternative Medicine (NCCAM) of the National Institutes of Health, under grant 1R21AT003735. The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official views of the NCCAM or the National Institutes of Health.
BMC Complement Altern Med. 2010 Dec 9; 10:76
Background In 2008, there were approximately two million deaths worldwide from AIDS, including 280,000 deaths among children under 15 years of age. Sub-Saharan Africa remains the most heavily affected region in the world with 1.4 million deaths [ 1 ]. In South Africa, which has the largest number of people living with HIV/AIDS in the world, AIDS continues to be the leading cause of death. In 2007, there were approximately 5.7 million people living with HIV in South Africa, and 350,000 people died of AIDS in that year alone [ 2 ]. We do not know the cumulative number of children who have died of AIDS in South Africa or even how many children died of AIDS within the past year. In the 2010 country progress report on HIV/AIDS, the South African government acknowledged that the lack of reliable data on infant mortality was a problem [ 1 ]. Recent estimates indicate that 2.5% of children aged two to 14 years old are living with HIV in South Africa [ 3 ]. For 2009, the total number of new HIV infections in South Africa was estimated to be 413,000, including 59,000 among children [ 4 ]. KwaZulu-Natal continues to be the most affected province in South Africa with 37.4% of pregnant women living with HIV [ 3 ]. This article describes a study that explored the experiences of mothers in KwaZulu-Natal who had lost a young child to AIDS. The unwillingness of successive governments in South Africa to deal effectively with the AIDS epidemic - most notably the failure to provide access to life-prolonging antiretroviral treatment (ART) to people with HIV/AIDS - has resulted in the needless deaths of many children and adults to AIDS [ 5 ]. For the period, 2002-2005, it was calculated that more than 330,000 premature deaths could have been prevented if the government had provided antiretroviral drugs to people with AIDS, and 35,000 babies were born with HIV because nevirapine was not administered to prevent pregnant women from infecting their babies [ 6 ]. 
Since this time, ART has become more widely available to people with HIV/AIDS in South Africa, but there continue to be problems with implementation. For instance, recent estimates indicate that only 70,000 children were receiving ART in 2009 out of 106,000 who needed it [ 4 ]. Providing ART to children presents special challenges, including the difficulty of diagnosing HIV in children, faster progression to AIDS and death, and challenges in developing appropriate and affordable ART regimens for children [ 1 ]. Furthermore, a significant number of children still have trouble adhering to ART regimens [ 7 ]. Considering the high number of AIDS deaths in South Africa, it is disappointing that the issue of AIDS-related bereavement in the South African context has not been adequately addressed [ 8 , 9 ]. Part of the reason may be that there is little discussion in South African society about AIDS deaths or acknowledgement that AIDS was the cause of death when someone has died [ 10 ]. Only a handful of studies on AIDS-related bereavement have been conducted in the South African context and they demonstrate the substantial impact that AIDS deaths have had on surviving adults and children [ 11 - 13 ].
Methods Participants In this qualitative study, two distinct groups were targeted for interviews. The first group consisted of 10 women in KwaZulu-Natal who had lost a child to AIDS. The second group consisted of 12 professionals in KwaZulu-Natal who had experience working with children and families affected by HIV/AIDS. Purposive sampling was used for each group to select participants with a range of experiences [ 14 ]. Several local community-based organizations served as gatekeepers for locating potential participants. The use of community gatekeepers is particularly recommended when conducting research with vulnerable families [ 15 ]. To be eligible for the first group, participants had to be women who had lost a biological child under the age of 18 years to AIDS. Participants in the second group were selected based on their knowledge and experience of working with children and families impacted by HIV/AIDS in the region. They were professionals employed in local non-governmental organizations, clinics and hospitals in and around the city of Durban and nearby urban area of Pinetown. Procedure Data collection took place from June 2008 to May 2009. Participants in both groups were recruited and interviewed until it was determined that no new themes emerged from the analyses (i.e., a state of theoretical saturation was reached) [ 16 ]. To put it another way, the consistency and breadth of themes identified in these interviews suggests that a sufficient number of interviews were conducted to give the analysis depth and relevance. Semi-structured interviews were conducted with participants from both groups. Interviews typically ran for an hour to an hour and a half. Participants in the first group were asked to describe their experiences of losing a young child to AIDS and how their present lives had been affected by the death of the child. Participants in the second group were asked to describe their experiences of working with mothers who had lost a child to AIDS. 
All interviews were scheduled at a time and place convenient for participants from both groups and were conducted by trained Zulu-speaking social workers. The importance of establishing trust with participants was reinforced among the trained interviewers, as was the need to let participants' voices be heard and for interviewers to be aware of their own feelings and prejudices [ 17 ]. All interviews were conducted in isiZulu and recorded. They were transcribed verbatim and translated into English by the interviewers. The suggestions of Horowitz, Ladden and Moriarity [ 15 ] were followed by communicating the relevance of the study to potential participants, making the data collection process as user-friendly as possible, stressing that all views and perspectives were welcomed, and providing appropriate reimbursement to participants. In acknowledgement of their contribution to the study, participants in the first group were paid R70 (approximately $10) after completing the interview. Professionals in the second group received no compensation for their participation in this study. Being mindful that the interview process could be stressful or even traumatic for participants (in both study groups), the interviewers solicited feedback from participants about how they were feeling as a result of the interview, and referrals to local mental health resources were made when needed. Data analysis A thematic analysis was employed in this study. Transcripts of the interviews were carefully read, and patterns and themes were identified. Following the coding procedures outlined by Strauss and Corbin [ 16 ], phenomena were grouped into categories of like meaning and the contents of the categories were compared between and within interviews. There was a continuous process of collecting data and comparing new data with previously coded data. The qualitative software programme NVivo8 was used to code and to facilitate the analysis of the transcripts of the in-depth interviews.
Software programmes such as this facilitate hierarchical or "tree-like" coding and analysis of large amounts of text across multiple themes [ 18 ]. To ensure trustworthiness [ 19 ], the research team discussed coding, themes and key findings until consensus was reached. Ethical considerations This study was approved by the institutional review boards of the University of KwaZulu-Natal in South Africa and Lehman College in the United States. Bearing in mind that some participants in the first group (mothers) could be illiterate or have poor reading comprehension, as well as the fact that there was a risk of these participants' names being linked to consent forms in a country with high HIV stigma, oral informed consent was obtained from these participants, instead of written informed consent. Prior to each interview, the interviewer followed a prepared script (written in isiZulu), which they read to each participant advising them of the purpose of the study and requesting their permission to audiotape the interview. Participants were advised that whatever they said would be kept confidential and that they were free to stop participating at any time. Written, informed consent was obtained before each interview from participants in the second group.
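The hierarchical, "tree-like" coding described above can be sketched with a plain data structure. The six parent themes below are the ones reported in the Results; the sub-theme names and excerpts are hypothetical illustrations, not NVivo's internal format:

```python
from collections import defaultdict

# Parent themes (from the Results) with illustrative child sub-themes.
tree = {
    "caring for a sick child": ["delayed testing", "stigma and secrecy"],
    "the moment of death": [],
    "relationships with health professionals": ["quality of care", "communication"],
    "daily stresses": ["poverty", "partner problems"],
    "coping": [],
    "support": [],
}

# Coded interview excerpts are attached to (parent, sub-theme) nodes.
coded = defaultdict(list)
coded[("daily stresses", "poverty")].append("We sometimes sleep without food.")
coded[("caring for a sick child", "stigma and secrecy")].append(
    "They just pray for miracles to happen.")

# Excerpt counts roll up from sub-theme nodes to their parent theme,
# supporting comparison of themes between and within interviews.
totals = {parent: sum(len(v) for node, v in coded.items() if node[0] == parent)
          for parent in tree}
print(totals["daily stresses"])  # 1
```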
Results Participants from the first group (mothers) ranged in age from 25 to 60 years and the mean age was 30. All were HIV-positive, black women who lived in the province of KwaZulu-Natal. Two were married, two were widowed, four had partners, and two were single. One-half of participants were receiving ART. One participant had completed high school and the rest had completed only a few years of primary schooling. One participant had a temporary full-time job and one had a part-time job; the other participants were unemployed and had not been able to find formal employment in several years. Two participants lived in their own homes and eight stayed with relatives. The number of people living in each household ranged from four to 13 with an average of six people in each household. Most participants depended on child support grants or old age pensions from grandmothers. They sometimes received financial support from other family members or partners, but this was sporadic. Participants had, on average, two surviving children. The deceased children were, on average, six years old at the time of death, and the average time since the death of the child was two years prior to the study. All participants had lost one child to AIDS, except one participant who had lost two children: a one year old and a four year old. In the second group (key informants), two participants were male and 10 were female, and their average age was 44. Six participants were counsellors, four were nurses, one was a social worker and one was an AIDS project coordinator. They had, on average, six years of professional experience working with children and families impacted by HIV/AIDS. Six main themes emerged from the data analysis and are described in the pages that follow: caring for a sick child; the moment of death; relationships with health professionals; daily stresses; coping; and support. 
Caring for a sick child Some women reported that their child was ill for a very short period of time and then died. This mostly occurred among infants. A few weeks or months after giving birth, the woman would notice that her child was not looking well, the child would be hospitalized, and then die a few days later: There were no signs at all that she was going to die. She was growing up very well like a normal baby. She was just attacked by the flu. I took her to the hospital and it was the end of her. She died. [Participant 3] Not all women in this study were aware of their own HIV status before they gave birth. Some had not returned for their HIV results after being tested at the antenatal clinic out of fear that they could be HIV positive. Some women minimized or overlooked symptoms in their child and only sought medical care for the child when the disease was in its advanced stages. Among the key informants interviewed there was a great deal of frustration that the stigma and secrecy surrounding AIDS in South African society continued to cause many needless deaths among children: They just pretend as if everything is normal. They don't test their children after giving birth. They just pray for miracles to happen, no matter how you educated them on the importance of testing their children at birth. They take action when the child is sick. In most cases, that is too late. [Key informant 4] Several key informants advocated mandatory HIV testing of all children as a solution to the problem of parents not testing their child or testing the child too late due to stigma. The other consequence of stigma mentioned was parents' failure to ensure that their children adhered to treatment, sometimes resulting in the death of the child. Not all family members are made part of the caring of the child ...
Some parents or caregivers are too preoccupied with hiding the fact that the child is sick and cannot therefore fully comply with support services and other available resources of a sick child. [Key informant 3] But there were women who knew their HIV status and the cause of their child's illness, and they did everything they could to get their child appropriate medical care, as challenging as it was. Frequent hospitalizations and watching the child's health deteriorate took its toll on these women: What stressed me so much is that I did not know how to help her. To see her in pain was the most painful thing to me. [Participant 5] Their sense of helplessness was aggravated by guilt for causing the child's sickness, as well as feeling unsupported during this stressful time. Typically, the only person they could rely on for any kind of help was their own mother. The child's father was usually absent from the picture, either because he was dead or he had abandoned the mother and child. This caused anger and resentment among the women: The father of these babies are not supportive ... they have children all over but they do not bother about caring for their children. Their children are like mushrooms, I am telling you ... I am not sure whether they even realize the pain that they cause us and the suffering they cause to the children. [Participant 1] Having little to no support from the child's father, the women struggled to make ends meet each day and worries about money created another layer of stress and impacted their ability to care for the child, especially during long hospital stays: When my daughter was sick, I was the breadwinner. Even though she was sick, I had to leave her alone and go and sell vegetables so that we can have food on the table. I did not have anyone to help her. I used to think that if there someone at home working and bringing in income, I was going to provide better care to my daughter. I was going to stay at home and look after her. 
That is what makes me feel guilty and have lots of regrets. I did not provide better care. [Participant 1] A few women had no-one to help them either emotionally or financially during this period, and talking about it brought back strong memories of being overwhelmed. Some were philosophical about going it alone and had no expectations of receiving support from anyone, but for others, the feeling of rejection still hurt: Ay sister, I suffered. No-one from my own family and from my child's family supported me (crying). [Participant 2] The moment of death Dredging up memories of the last moments with their child was very painful, yet the women proceeded to describe in detail the moment their child died. All of the children died in the hospital and all women, except one, were present at the time of death. In most cases, the woman was alone with no other family members present. The initial reaction to the death of the child was one of confusion, shock and disbelief. The following statement illustrates a mother's final moments with her child: She passed away in front of my eyes; then they quickly asked me to leave the ward. I was not prepared to deal with it. My mind was still telling me that maybe the doctors were still going to do something to revive her. I was confused. She managed to calm me down. She died in front of my eyes in hospital ... That is the day I will always remember. It is still fresh as yesterday. [Participant 1] None of the women talked about being allowed to spend time in the room with their deceased child. The goal of the medical staff seemed to be on removing the mother from the room and taking her elsewhere to calm her down. While some women recalled a social worker or counsellor speaking to them briefly after their child died, this was not usually the case. Typically, the woman called a family member to have them pick her up or she left the hospital alone shortly after the child died. 
Relationships with health professionals The women reported mixed experiences dealing with doctors and nurses at medical appointments and when their child was in the hospital. Sometimes the quality of care they received depended on a particular shift or the hospital they went to. Some women remembered incidents that still made them angry. A common complaint was being required to sit with their child for long periods of time at the hospital while waiting to be seen by a doctor and being ignored even when the child was clearly in distress: We were sitting with the child in the hospital benches ... He was not offered even a bed ... The child could not even sit up straight. He was vomiting and had diarrhoea at the same time. He also had stroke in his one side. They only put him on a drip. We called for their attention when the drip was finished. No one cared or responded. The child was bleeding. I said, "You see, this is a waste of time." At 12 we left for home. We took the drip off the child on our way home. [Participant 8] After the child was admitted to the hospital, the quality of care was a problem for some women. Nurses did not change feeding tubes or bathe the child, and it was often left to the mother to perform these tasks. Several women exclaimed that nurses "just sit there" and that the only time the nurses responded was when the mothers complained. Not being told what was happening to their child caused frustration and heightened their anxiety. Several women took their child out of the hospital and returned home because they were so dissatisfied with the treatment and the attitude of the medical staff. One woman related this incident: The doctor did not treat my child's situation as an emergency. She was supposed to hurry up. When she arrived, she ... was supposed to be caring and supportive. Do you know that during the time when my baby was vomiting blood, the doctors ordered me to carry her? I carried her and blood was coming from everywhere.
Ay, no one helped me. When I was calling nurses, they were ignoring me. No one helped me. [Participant 5] Conversely, some women were satisfied and grateful for the way the medical staff treated them and their child. There was recognition of how overworked the medical staff was, and it meant a lot to them when nurses kept them updated on the progress of their child or found time to comfort them. But overall, communication, or rather a lack thereof, by medical staff was a common complaint; it made the women feel both powerless and disrespected. In addition, little to no support or counselling was provided by social workers or counsellors at the hospital, either before the child died or after the death. Key informants in this study expressed a deep commitment to their work and clearly understood the context in which these mothers lived and the challenges facing them. The issue of scarce institutional resources was a common concern among key informants, as well as the lack of a coordinated response in addressing the needs of these mothers. The issue of grief was frequently not addressed by service providers because these mothers were primarily concerned about meeting urgent daily needs, such as food and shelter, and also because service providers sometimes felt ill prepared to provide this type of counselling. Key informants acknowledged the importance of helping these mothers to open up and to talk about their loss while simultaneously helping them with their daily needs. They expressed some frustration that these mothers were not aware that they needed to take care of their psychological needs as well: These parents need to acknowledge their pains. They need to talk about their loss. When their children are sick, they need both material and emotional support. [Key informant 6] Daily stresses Besides the trauma of losing a child, these women were confronted with circumstances that compounded and sometimes overshadowed their grief. 
Most of the women had experienced periods of being very ill and a few had nearly died. The death of a child reminded them of their own mortality. They worried less about dying and more about what would happen to their surviving children if they died. But most of them tried not to think about it and preferred to stay focused on the present and caring for their surviving children: If I think about death now, I won't reach where I want to be. [Participant 4] Some women had given birth to another child since their loss and now worried about this child's health, while some were dealing with other children who were HIV positive and sick. Some women had not had their other children tested for HIV or were too afraid to return for the results. It was difficult for the women to talk to their children about the loss of their sibling, especially that their sibling died of AIDS. Most women were waiting for the right time to talk to their older children about this, but admitted that they were procrastinating and felt that they did not know how to do it. While their children were aware that their mother and sibling had been sick, they were not told it was HIV related. AIDS-related deaths, both within the family and among people they knew in their local community, were common, yet the nature of the death was rarely acknowledged or discussed because of stigma. Most women had trouble estimating the number of funerals they had attended in recent years for people who were known or suspected to have had AIDS. They made it a point to no longer attend funerals, except for family funerals, and tried to put funerals and death out of their minds. When asked if they knew of other mothers who had lost a child to AIDS, most replied that they did not because it was not talked about. Problems relating to other family members were a source of great concern. 
Most women had a partner who was usually the father of one of their children, but most of the time the partner lived elsewhere and they complained about him having multiple girlfriends and neglecting them. Besides not providing them with enough money to care for themselves and their children, these women also had to deal with such issues as alcoholism, domestic violence, and sexual coercion by their partners. But the most dominant daily worry of these bereaved mothers involved being impoverished: finding money for food, shelter, school fees and so on. Typically, they had few sources of income and they relied on their husbands or boyfriends, as well as their mothers, for money. A couple of women earned a little money selling beads or produce or working part-time as a domestic worker. Jobs were scarce and a lack of education and skills, together with health problems, meant that most women had been unemployed for many years. The only regular source of income was a small monthly government grant that some women received for a sick child, or they relied on a grandmother's old age pension. Households typically consisted of five to 13 people, and the woman and her children lived in a relative's house. Food was scarce and households were simply unable to support all their members. The following statements illustrate the burden of poverty: In my family we are very poor ... Food is not always available. We sometimes sleep without food. Even my baby goes without food if I have no money to help her. [Participant 3] Ay, my life is just full of problems ... Whatever money I get, I have to try to meet all my kid's needs and food. There is nowhere to take a cent from. When I buy a two litre cool drink for my kids, they become happy for that day and I become happy with them. I tell them that things will be better in future and I will be a good parent ... I have faith because my partner still provides us with porridge. 
[Participant 8] Coping The loss of a child was devastating and some women did not want to go on living. It was only the fact that they had other children to worry about that prevented them from giving up. After the child's death, the women assumed responsibility for the burial of the child, sometimes relying on their mothers or another family member to help with the expenses. In only a few cases, the father of the child helped with funeral arrangements or expenses. In several cases, the father did not even attend their child's funeral. Almost without exception, the women kept their grief to themselves, not because of an unwillingness or inability to confront their loss, but more as a matter of survival. Because of the context in which they lived, the women had no choice but to contain their grief and focus their emotional and physical energy on coping with the hardships of daily life. When asked how they were feeling during the interview, the reaction was typically that talking about their loss reminded them of what happened in the past. They acknowledged that past efforts to suppress their grief were not always successful. No matter how hard they tried to forget and no matter how long ago their child died, certain things would remind them of their loss, and the pain would return. Despite their best efforts to carry on, the women felt overwhelmed by sadness and despair and they no longer felt they were the same person. They felt that they were more irritable and short-tempered, had problems sleeping, and except for their children, they derived little happiness from life. They were worn down by negative attitudes in society toward people with HIV/AIDS and they felt unloved. Life had not turned out the way they had hoped and there was little chance things would improve for them in the future. One woman carried her daughter's death certificate around in her bag, even though her daughter had died two years previously. She did so to remind herself of her pain.
The following statement illustrates the way many of these bereaved mothers felt: I cannot focus on the past. I have to find a way to manage these feelings when they come ... Even now her space is still there. Her death is still fresh in my mind and heart. It left an empty hole (sobbing). [Participant 1] African cultural tradition prescribes that when a child dies, the family and community rally around the mother for a few weeks, sitting with her, bringing food, cleaning her house, in addition to talking about the pain of the loss. And supposedly by the time of the funeral, family members have talked enough about the death and are ready to let go of the deceased child. Several key informants mentioned the urge of mothers to move on after the death. Times have changed, and key informants acknowledged that communities were no longer tight-knit and members often did not know one another. Consequently, there was a greater need for mothers to turn to outsiders for assistance with their grief. Key informants stressed the importance of providing opportunities for the mother to talk about her loss and to not hold it in. Support The women in this study perceived there to be few sources of support available to assist them with their loss. Some could not identify a single person that they could talk to about their loss. They harboured no expectations of their families in this regard. In most cases, they could only talk to their mothers, but even then, the subject of AIDS was often avoided. Some women derived strength from prayer and obtained some measure of support from being with other members of their church, but they seldom disclosed their status or revealed the cause of their child's death to church members for fear of being ostracized. The value of talking about one's feelings was recognized by some women and when asked if they would be interested in joining a support group for mothers who had lost a child to AIDS if it were to be created, most expressed interest.
They looked forward to the opportunity of talking to other women who were in similar situations and they felt it would make them feel less alone. The idea of making friends was appealing, but some reservations were expressed about confidentiality and there were concerns about the cost of transport. As one mother explained, "I sometimes can't even afford a loaf of bread." From the mothers' perspective, financial concerns took priority over the need for counselling. They desperately needed help with food, clothing, school fees and caring for their children. Key informants acknowledged that interventions needed to incorporate both financial and emotional elements and that a suitable option would be an income-generation project that provided mothers with the opportunity to share their loss with other mothers while working on something like beadwork or a vegetable garden to earn income: Most of our clients are unemployed, come from poor families. We felt that we cannot exclusively meet their emotional needs without material assistance. We have created an environment where we can talk about their problems and at the same time attending to their pressing bread and butter needs. [Key informant 6] Key informants believed that it was important for these mothers to receive emotional or psychological support, but that mothers were frequently unaware of the services available. They suggested that mothers be provided with money for transport costs, as well as offered food, since mothers often travelled long distances to the organization on empty stomachs. Key informants felt that they needed to do a better job of educating mothers about the benefits of receiving counselling and support, as well as making these services more accessible, but a lack of organizational funds and a shortage of mental health professionals made it difficult to do so.
The following statements highlight the views of key informants with regard to providing counselling to these mothers: People like me and you understand the importance of counselling but the people that our organization deals with ... the majority ... are poor and uneducated people who worry more about the physiological needs than counselling. [Key informant 4] They don't recognize the importance of attending to the emotional self. They place most emphasis on the physical self and neglect the emotional self ... They normally come to our agency for poverty-related conditions. During the conversation you then learn about a sick child or a deceased child ... I think people need to realize that they need to tell us how they want to be helped. We cannot help them if we don't know how they want to be helped. [Key informant 6]
Discussion This study represents the first known study of the bereavement experiences of women who have lost a young child to AIDS in South Africa. The issue of HIV stigma had a profound influence on the bereavement experiences and daily lives of the women in this study. People with HIV/AIDS have been stigmatized worldwide since the beginning of the epidemic. HIV stigma remains a major obstacle to prevention, treatment and support efforts for people affected by HIV/AIDS in South Africa [ 20 ]. In the present study, societal hostility toward people with HIV/AIDS caused the women to delay getting themselves or their babies tested, to not seek out medical treatment for their child in a timely manner, and to not get the support they needed to cope with their child's illness and subsequent death. Based on these women's experiences, it is evident that, as a society, South Africa still has a long way to go to effectively address HIV stigma and to reduce its impact on all aspects of the daily lives of people affected by HIV/AIDS. A study of women living with HIV/AIDS in the Western Cape Province of South Africa showed that those who experienced more HIV stigma also reported more severe post-traumatic stress, depression, a lower quality of life and greater fear of disclosure [ 21 ]. A study of older adults in the Eastern Cape Province of South Africa reported high levels of grief associated with the death of children and/or grandchildren to AIDS, with stigma being the most important predictor of grief [ 22 ]. Getting through each day was a challenge for the women in this study. They had to fend for themselves for the most part, since there were few people they could count on for support, whether financial or emotional. Families impacted by AIDS deaths in South Africa could benefit from making greater use of mental health services.
At the same time, there are barriers to seeking this type of help, such as stigma, lack of awareness of the resources available, and the perception that physiological needs and issues of daily survival take priority over mental health needs. In a study among residents of a South African township, additional barriers cited to using mental health resources were mistrust of mental health professionals, doubts about the nature and value of psychotherapy, and concerns about the ability of professionals to be culturally sensitive [ 23 ]. The emotional and physical burden on these bereaved mothers warrants further investigation, especially in light of their HIV status and associated health issues. The evidence suggests that bereavement distress is likely to be greater among HIV-infected individuals than non-infected individuals [ 24 , 25 ]. Furthermore, grief reactions may be more severe among HIV-infected bereaved individuals who are sicker [ 26 ]. In the absence of social support, and faced with urgent problems related to their socio-economic circumstances, the women in this study spent most of their time repressing their grief. This did not mean that they did not have moments of extreme sadness, but they felt that they had no choice but to put aside their grief so they could have the strength to deal with the hardships of daily life. There has been growing support in recent years for the view that most people are resilient and are naturally able to cope with the most adverse circumstances without experiencing a significant disruption in daily functioning or requiring psychotherapy [ 27 ]. Resilience is defined as "the ability of adults in otherwise normal circumstances who are exposed to an isolated and potentially highly disruptive event, such as the death of a close relation or a violent or life-threatening situation, to maintain relatively stable, healthy levels of psychological and physical functioning" [ 28 ].
It would be useful for future studies to measure the extent of resilience among these women and, more importantly, to examine which factors promote resilience among them. An important gap in our knowledge is how resilience varies across cultures. Using the concept of resilience to examine and understand the experiences of women in Africa who have lost a child to AIDS would also help us learn how cultures other than those in western countries effectively (and perhaps ineffectively) cope with extreme adversity. We are still learning about what factors promote resilience, but preliminary evidence shows that resilience tends to occur in people who have the personality traits of hardiness and self-enhancement, those who are repressive copers, and those who are able to express positive emotion and laughter [ 29 ]. In the present study, these women certainly displayed enormous strength and courage in confronting very challenging circumstances; they were able to express positive emotions and possess hope for the future, and it would appear that they possessed qualities consistent with those who have hardy personalities. There was evidence of repressive coping, but it is unclear to what extent this was a natural emotional response and to what extent it was the only way these women could cope in light of the absence of support and the need to address more urgent daily needs of survival. These women did not appear to display extraordinarily high levels of self-esteem, narcissism or an unrealistic sense of their strengths, which have been associated with the self-enhancement trait [ 28 ]. But they also did not dwell on their personal limitations, and in a sense, they assumed the role of superwoman because there was no alternative. No one was coming to their rescue and they had to rely on themselves to deal with their loss and to meet the needs of their surviving children.
Future research on resilience needs to more accurately assess the pathways to resilience, especially in contexts that are vastly different from those in western countries. In this study, mothers assumed total responsibility for their sick child while the father was absolved of all responsibility by virtue of having abandoned his family. This is a problem that is endemic in South African society. Throughout the epidemic in South Africa, the burden has fallen on women to be involved in HIV prevention, treatment and support efforts. Men are very difficult to reach and to get involved in community initiatives around HIV/AIDS. Many men have eschewed responsibility both for transmitting the virus and for caring for loved ones who are sick. To a large extent, cultural norms have encouraged this. It is essential that men begin assuming more responsibility in the care and support of people with HIV/AIDS, and interventions need to be developed that clearly involve men in these activities [ 30 ]. The women in this study reported both positive and negative experiences with healthcare professionals during the time their child was sick in the hospital. A study of patient and provider perceptions in public health clinics in the KwaZulu-Natal and Gauteng provinces in South Africa revealed that there were gaps in HIV/AIDS knowledge among healthcare providers, and patients reported mixed experiences with the quality of care [ 31 ]. What was especially troubling in the current study was the minimal amount of support given to these women by hospital staff at the time of the child's death or after the death. Referrals were seldom made to community resources to help these women with their grief or with practical matters, such as funeral arrangements. In essence, many of the mothers felt abandoned by the hospital.
Conclusions With the recent push to increase access to ART for people with HIV/AIDS in South Africa, the outlook for HIV-positive mothers and their children is more promising than ever, and hopefully, we will soon see a sharp decline in AIDS deaths. In the meantime, the needs of these women and children deserve top priority and medical treatment needs to be combined with appropriate psychosocial support.
Background AIDS continues to be the leading cause of death in South Africa. Little is known about the experiences of mothers who have lost a young child to AIDS. The purpose of this qualitative study was to explore the attitudes and experiences of women who had lost a young child to HIV/AIDS in KwaZulu-Natal Province, South Africa. Methods In-depth interviews were conducted with 10 women who had lost a child to AIDS. The average age of the deceased children was six years. Interviews were also conducted with 12 key informants to obtain their perspectives on working with women who had lost a child to AIDS. A thematic analysis of the transcripts was performed. Results In addition to the pain of losing a child, the women in this study had to endure multiple stresses within a harsh and sometimes hostile environment. Confronted with pervasive stigma and extreme poverty, they had few people they could rely on during their child's sickness and death. They were forced to keep their emotions to themselves since they were not likely to obtain much support from family members or people in the community. Throughout the period of caring for a sick child and watching the child die, they were essentially alone. The demands of caring for their child and subsequent grief, together with daily subsistence worries, took their toll. Key informants struggled to address the needs of these women due to several factors, including scarce resources, lack of training around bereavement issues, reluctance by people in the community to seek help with emotional issues, and poverty. Conclusions The present study offers one of the first perspectives on the experiences of mothers who have lost a young child to AIDS. Interventions that are tailored to the local context and address bereavement issues, as well as other issues that affect the daily lives of these mothers, are urgently needed. Further studies are needed to identify factors that promote resilience among these women.
Competing interests The authors declare that they have no competing interests. Authors' contributions This manuscript was conceived, drafted and authored by CD.
Acknowledgements The project described was supported by Grant Number 1R21NR010423A from the National Institute of Nursing Research from 8/29/2007 to 5/31/2010. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institute of Nursing Research and the National Institutes of Health. The author is grateful to the South African team led by Dr Vishanthie Sewpaul and Ms Thora Mansfield and, most of all, to the people who agreed to participate in this study.
CC BY
J Int AIDS Soc. 2010 Dec 9; 13:50
PMC3014875
21122160
Background A continuous increase in the overall level of drug use, especially among the elderly, has been noted in several countries. Moreover, an increase in the number of individuals experiencing polypharmacy, i.e. the concurrent use of several different drugs, has also been reported [ 1 - 3 ]. Whilst the use of a number of different drugs appears to be rational drug therapy for many individuals, and polypharmacy is assumed to provide major health benefits for the well being of large groups of individuals suffering from different diseases, polypharmacy is also a well-known risk factor for adverse drug reactions, drug-drug interactions, and low adherence to drug therapy [ 3 - 5 ]. In addition, polypharmacy is assumed to cause unnecessary health expenditure [ 5 ], directly due to redundant drug sales and indirectly due to the increased level of hospitalization caused by drug-related problems [ 6 ]. Drug-related problems are reported to cause a substantial proportion of all emergency treatment and hospital admissions among elderly patients [ 4 , 7 ]. Consequently, there have been many attempts to reduce the number of drugs prescribed to individuals experiencing polypharmacy, especially the elderly [ 1 ]. Furthermore, it should also be noted that previous studies of polypharmacy have primarily been conducted on samples of elderly individuals admitted to hospitals or nursing homes [ 5 , 8 ]. Only a few studies have been based on population-based information [ 5 , 9 , 10 ], and some of these have also been limited to elderly individuals [ 11 - 14 ]. A recent register study showed that 2/3 of all individuals in a national population who were prescribed 5 or more drugs were < 70 years of age [ 15 ], indicating that multiple medication use is relevant not only to elderly individuals.
Clearly, for policymakers as well as for clinicians, it is important to follow developing trends in drug use and polypharmacy over time, not only in the elderly age groups but also among the large number of middle-aged individuals subject to polypharmacy. In this context, the establishment of the Swedish Prescribed Drug Register in 2005 made it possible to apply individual data in exploring and analyzing the utilization of polypharmacy in an entire national population. Such individual-based data may also be applied in longitudinal studies of the development of drug use. Aim of the study We wanted to study whether the prevalence of polypharmacy in an entire national population changed during a 4-year period.
Methods Using individual-based data on dispensed drugs, we studied all dispensed prescription drugs for the entire Swedish population during four 3-month periods (July, August and September) in the years 2005-2008. These data were extracted from the Swedish Prescribed Drug Register [ 10 ]. In this study, the prevalence of polypharmacy was defined as the proportion of individuals receiving five or more dispensed prescription drugs (DP≥5) during a 3-month period. As a definition of excessive polypharmacy, we applied ten or more dispensed drugs (DP≥10) for an individual during the study period [ 12 ]. Consequently, the prevalence of excessive polypharmacy was defined as the proportion of individuals receiving ten or more dispensed drugs during a 3-month period. As five or more dispensed drugs is the most commonly applied definition of polypharmacy [ 8 , 12 , 16 ] and ten or more dispensed drugs is the most widely used definition of "excessive" polypharmacy [ 9 , 12 , 16 , 17 ], our definitions are intended to enable comparisons with other studies. For purposes of comparison, we also illustrate the development of the prevalence of drug use, defined as the proportion of individuals with one or more dispensed drugs (DP≥1) during a 3-month period. The Swedish Prescribed Drug Register The Swedish Prescribed Drug Register covers the entire Swedish population and includes approximately 82% of all Defined Daily Doses (DDD) dispensed in Sweden. The register does not include data on OTC medications (13%), in-hospital medications (4%), or non-institutional care medications (1% of all DDD distributed in Sweden). The register is also incomplete as regards vaccines and non-dose-dispensed drugs in nursing homes. The Swedish Prescribed Drug Register is individual-based and contains data from dispensed out-patient prescriptions at all Swedish pharmacies from July 1, 2005.
The registration of dispensed drugs is mandatory, and the following data from the register were used in our study: dispensed drug (substance), date of dispensing, age, gender, and a unique identifier (personal identification number) of the patient. All processing of the individual data on dispensed drugs in our study was undertaken anonymously, without the original personal identification number. Instead, a unique temporary individual identifier, specifying gender and year of birth, was applied, and the study population was stratified by gender and age (10-year classes). The results of our study were presented with respect to the number of individuals per gender and age group in the Swedish population during the corresponding periods. The values applied were the number of individuals and the number of dispensed prescription drugs per individual, and a drug was defined as the chemical entity or substance comprising the fifth level in the Anatomical Therapeutic Chemical (ATC) classification system. Sums and frequencies were calculated using Microsoft Excel (version 5.1.26). The study was approved by the Regional Ethical Review Board in Linköping, Sweden.
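The prevalence measures defined above amount to counting, for each individual, the number of distinct substances (ATC 5th level) dispensed within a 3-month period and comparing that count with a threshold. A minimal sketch of this calculation, assuming hypothetical record layout and field names rather than the register's actual schema:

```python
from collections import defaultdict

def polypharmacy_prevalence(dispensings, population, threshold=5):
    """Proportion of `population` dispensed >= `threshold` distinct
    substances (ATC 5th level) during one 3-month period.

    dispensings -- iterable of (person_id, atc_code) pairs for the period
    population  -- total number of individuals in the population
    """
    substances = defaultdict(set)
    for person_id, atc_code in dispensings:
        # A set per person: repeat dispensings of the same substance
        # (e.g. after generic substitution) count only once, as in the study.
        substances[person_id].add(atc_code)
    n_poly = sum(1 for codes in substances.values() if len(codes) >= threshold)
    return n_poly / population

# Toy data: person "a" receives five distinct substances, "b" and "c" one each
records = ([("a", f"X00AA{i:02d}") for i in range(5)]
           + [("b", "N02BE01"), ("c", "C07AB02")])
print(polypharmacy_prevalence(records, population=3))                # 1 of 3
print(polypharmacy_prevalence(records, population=3, threshold=10))  # none
```

With `threshold=10`, the same function gives the excessive-polypharmacy prevalence (DP≥10).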
Results The development of polypharmacy The prevalence of polypharmacy (DP≥5) in the entire population increased by 8.2% (from 0.102 to 0.111) between 2005 and 2008 (Table 1 ). The number of individuals with DP≥5 increased by 10.4% (from 922,949 to 1,019,324) (Table 2 ). The prevalence increased in all age groups except for the age group 0-9 years, and the largest increase in prevalence was in the age group 10-19, with an increase of 9.1%. In the age groups from 60-69 up to 90 years and above, the increase was between 7.2% and 8.6% (Figure 1 ). For men, the prevalence of polypharmacy (DP≥5) increased in all age groups (11.9% overall) except for the age group 0-9 years. The largest increase in the prevalence of polypharmacy was in the age group 60-69, with an increase of 12.3%, whilst in the age groups from 70-79 up to 90 years and above, the increase was between 8.4% and 10.1% (Figure 2 ). For women, the prevalence of polypharmacy (DP≥5) increased in all age groups (5.9% overall) except for the age group 0-9 years. The largest increase in prevalence was in the age group 10-19, with an increase of 13.3%, whilst in the age groups from 60-69 up to 90 years and above, the increase was between 5.8% and 7.6% (Figure 2 ). The development of excessive polypharmacy The prevalence of excessive polypharmacy (DP≥10) in the entire population increased by 15.7% (from 0.021 to 0.024) between 2005 and 2008 (Table 1 ). The number of individuals with DP≥10 increased by 18.1% (from 185,618 to 219,244) (Table 2 ). The prevalence increased in all age groups except for the age group 0-9 years, and the largest increase was in the age group 90 years and above, with an increase of 28.5%. In the age groups 60-69 to 80-89 years, the increase was between 10.6% and 21.6% (Figure 1 ). For men, the prevalence of excessive polypharmacy (DP≥10) increased in all age groups (20.2% overall), except for the age group 0-9 years. The largest increase in prevalence was in the age group 90 years and above, with an increase of 36.4%.
In the age groups 60-69 to 80-89 years, the increase was between 15.7% and 23.0% (Figure 3 ). For women, the prevalence of excessive polypharmacy (DP≥10) increased in all age groups (13.5% overall), except for the age group 0-9 years. The largest increase in prevalence was in the age group 90 years and above, with an increase of 26.8%, and in the age groups from 60-69 up to 90 years and above, the increase was between 7.3% and 21.2% (Figure 3 ). The development of the mean number of dispensed drugs per individual The mean number of dispensed drugs per individual during a 3-month period, for all individuals in Sweden receiving dispensed drugs, increased by 3.6% (from 3.3 to 3.4 drugs per individual) during the study period 2005-2008. For elderly individuals, 70 years and above, the mean number of dispensed drugs per individual increased by 3.9% (from 4.8 to 5.0 drugs), by 6.1% (from 5.7 to 6.1 drugs), and by 7.6% (from 6.1 to 6.6 drugs) in the respective age groups. The increase (%) in the mean number of dispensed drugs was similar for men and for women.
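The percentage changes quoted in the Results are plain relative increases, (new − old)/old × 100. As a quick check against two of the reported counts (note that the prevalence figures quoted in the text are rounded, so recomputing from them can differ slightly from the reported percentages):

```python
def relative_increase(old, new):
    """Percent change from `old` to `new`."""
    return (new - old) / old * 100

# Individuals with DP>=5: 922,949 (2005) -> 1,019,324 (2008)
print(round(relative_increase(922_949, 1_019_324), 1))  # 10.4
# Individuals with DP>=10: 185,618 (2005) -> 219,244 (2008)
print(round(relative_increase(185_618, 219_244), 1))    # 18.1
```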
Discussion Principal findings and possible explanations The prevalence of polypharmacy and excessive polypharmacy increased year by year in the entire Swedish population 2005-2008. With the exception of the age group 0-9 years, the prevalence of polypharmacy and excessive polypharmacy increased in all age groups. The prevalence of excessive polypharmacy displayed a clear age trend, with the largest increases in the age groups 70 years and above. Generally, the increase in the prevalence of polypharmacy was approximately twice as high for men as for women, and the increase in the prevalence of excessive polypharmacy was about 1.5 times as high for men as for women. The rate of increase for both polypharmacy and excessive polypharmacy levelled out during the study period, but we noted variation in the rate of increase between individual years; this variation applied to the different age groups and to both genders. The increase in the prevalence of polypharmacy may have several different causes: changes in prescribing recommendations for various drug treatments, as well as the introduction of specific drugs for the treatment of conditions/diseases for which they have previously not been applied. Furthermore, middle-aged individuals are increasingly well informed and, consequently, more prone to request prescription drugs. Finally, more drugs are being prescribed for preventive use. Altogether, these factors may have resulted in a change in physicians' prescription patterns. The decrease in the prevalence of polypharmacy in the age group 0-9 years can be explained by national interventions to reduce the prescribing of antibiotics to children in order to prevent antimicrobial resistance. Nearly 80% of the children 0-9 years with polypharmacy received antibiotics in 2006, clearly indicating that antibiotics have the largest impact on the prevalence of polypharmacy in this particular age group [ 15 ].
Both the overall increase and the differences in the rate of increase between the years are puzzling. These increases suggest relatively rapid changes in prescription patterns among prescribers; changes that may have a variety of causes, e.g. the introduction of new clinical guidelines. Prior to 2005, national clinical guidelines were available for only three different areas in Sweden. During the study period, 2005-2008, the National Board of Health and Welfare in Sweden introduced four new national clinical guidelines (Stroke; Breast, colorectal and prostate cancer; Heart disease; and Addiction), and in 2009-2011 seven further new clinical guidelines are planned to be introduced (e.g. Depression, Dementia, Diabetes, Lung cancer). Prior to being officially introduced, new clinical guidelines exist in preliminary versions. Consequently, these guidelines might influence prescription habits and the development of polypharmacy a number of years before they are officially introduced. The introduction of national clinical guidelines for heart diseases and prostate cancer might explain both the unequal increase between genders and the variation in the rate of increase between the different years. In a study from Sweden concerning general practitioners' (GPs') perceptions of multiple-medicine use [ 18 ], clinical guidelines were viewed as "medicine generators". GPs expressed frustration concerning guideline recommendations for certain diagnoses, e.g. cardiovascular diagnoses that "immediately result in five medicines". Regardless of the patients' other diseases, many guidelines were perceived as too rigid, leading to a standard "kit" of medicines per indication, with the result that individuals with multiple diseases received an increasing number of different drugs.
The introduction of new national guidelines might therefore also contribute to explaining the age trend in the development of excessive polypharmacy, as older patients more often have several diseases. As a result of the guidelines, the elderly may, more often than others, have a number of different "kits" of drugs added [ 18 ]. Strengths and weaknesses of the study Our study presents an overview of the development of polypharmacy in an entire national population. The applied 3-month period prevalence of dispensed drugs includes all drugs that are prescribed on a regular basis (e.g. drugs used in diabetes), when needed (e.g. analgesics), and temporarily (e.g. antibiotics). Periodically used drugs have been shown to have a different impact on the prevalence of polypharmacy in different age groups [ 15 ]. As the study included all individuals in the population, we avoided certain known problems concerning sampling, recall and interview bias, as well as statistical confidence. On the other hand, when register data on dispensed drugs are used as an estimator of drug use and polypharmacy, over- as well as underestimations of actual drug use arise. The extracted data included dispensed prescription drugs only, corresponding to approximately 82% of all Defined Daily Doses (DDD) distributed in Sweden. Additional sources of drugs, such as OTC medications, in-hospital medications and non-institutional care medications, herbal and alternative remedies, together with previously filled prescriptions (before the study period), gifts and illicit Internet sales, were not included in the study, resulting in an underestimation of the total consumption of drugs. In addition, generic duplication (intended and unintended duplication of dispensed drugs with the same substance) might also have caused an underestimation of polypharmacy in our data, as we counted only the number of dispensed drugs comprising different substances.
In sample studies of drug use among individuals with polypharmacy, patients often have two or more drugs with the same substance [ 4 , 19 , 20 ]. In register studies, it is difficult to distinguish between generic duplication and generic substitution (an intended switch between two drugs with the same substance). If generic duplicates had been taken into account, the prevalence of polypharmacy would have been even larger. Whether generic duplicates could have any impact on the development of the prevalence of polypharmacy over the study period has not been addressed. Conversely, using dispensed drugs as an indicator of drug use might result in an overestimation, as it is well known that a certain proportion of all dispensed drugs will never be used [ 21 ]. Strengths and weaknesses in relation to other studies The displayed increase of polypharmacy in the entire population of Sweden since 2005 is in line with studies focusing only on elderly individuals during the 1980s and 1990s [ 2 , 22 - 25 ]. However, there are certain difficulties in comparing our results concerning the elderly population with some of the previous studies. Firstly, some studies have addressed the level of drug use for the same individuals over time, concluding that drug use and polypharmacy increase with increasing age, but without an increased prevalence over time [ 26 - 30 ]. Secondly, some studies have applied varying time periods, different definitions of drug use and polypharmacy, or different samplings of individuals [ 3 ]. Finally, certain studies are based on interviews, and their results might be influenced by sampling, recall or interview bias, which impedes comparison with results from register-based studies [ 15 , 31 ]. The year-by-year increase in drug use, polypharmacy and the mean number of dispensed drugs in the present study is generally minor compared to the increases shown in previous studies of the development of drug use in the 1980s and 1990s, e.g.
a threefold increase in the prevalence of polypharmacy and in the mean number of drugs per person over a ten-year period [ 2 ]. This difference might be explained by the fact that our data included all individuals in the national population, whereas previous studies have often used samples of only the elderly admitted to hospitals or living in nursing homes; relatively healthy individuals might therefore not have been included in these earlier studies. Another possible explanation is that recent efforts to curb the increases in drug use and polypharmacy have actually had an effect. Implications for clinicians and policymakers The substantial increase in the prevalence of polypharmacy and excessive polypharmacy occurs simultaneously with the introduction of new clinical guidelines aimed at increasing the benefits of medical treatment. The increase also occurs at a time when the potential risks of polypharmacy have been highlighted, and various efforts have been made to reduce the number of drugs prescribed to individuals with an excessive number of drugs, especially the elderly. In Sweden, efforts to reduce the prevalence of polypharmacy have focused primarily on the reduction of unintended generic duplication. The increasing prevalence of polypharmacy is not interpreted unanimously. Certain clinicians and policymakers may read the results of the present study as showing a regrettable further development of polypharmacy, with excessive polypharmacy in particular continuing in an undesirable direction. However, the results of our study may also be interpreted to imply that a larger proportion of patients are receiving recommended drug treatment in line with new clinical guidelines. The prevalence of polypharmacy may hide the fact that the benefits and/or risks of polypharmacy can be evaluated at the individual level only.
For clinicians, recommendations are required on how to combine and balance different clinical guidelines to achieve appropriate drug therapy for patients with multiple diseases. Unanswered questions and future research Over the 4-year study period, the increase in the prevalence of polypharmacy and excessive polypharmacy was particularly notable for men, 12% and 20%, respectively, and was even more notable for elderly men. This increase in drug use remains to be analyzed, and may be associated with the introduction during the period of new national clinical guidelines of special relevance for men (e.g. the guidelines for Heart disease and Prostate cancer).
Conclusions The prevalence of polypharmacy and excessive polypharmacy, as well as the mean number of dispensed drugs per individual, increased year-by-year in Sweden 2005-2008.
Background An increase in drug use and polypharmacy over time has been demonstrated, despite the fact that polypharmacy is a well-known risk factor for patients' health due to the adverse drug reactions, drug-drug interactions, and low adherence to drug therapy that arise from it. For policymakers, as well as for clinicians, it is important to follow trends in drug use and polypharmacy over time. We wanted to study whether the prevalence of polypharmacy in an entire national population changed during a 4-year period. Methods Applying individual-based data on dispensed drugs, we studied all dispensed prescription drugs for the entire Swedish population during four 3-month periods in 2005-2008. Five or more (DP ≥5) and ten or more (DP ≥10) dispensed drugs during the 3-month period were applied as cut-offs indicating polypharmacy and excessive polypharmacy, respectively. Results During the period 2005-2008, the prevalence of polypharmacy (DP ≥5) increased by 8.2% (from 0.102 to 0.111), and the prevalence of excessive polypharmacy (DP ≥10) increased by 15.7% (from 0.021 to 0.024). The prevalence of polypharmacy and excessive polypharmacy increased in all age groups with the exception of the 0-9 years group. Moreover, the prevalence of excessive polypharmacy displayed a clear age trend, with the largest increase in the groups 70 years and above. Furthermore, the increase in the prevalence of polypharmacy was, in general, approximately twice as high for men as for women. Finally, the mean number of dispensed drugs per individual increased by 3.6% (from 3.3 to 3.4) during the study period. Conclusions The prevalence of polypharmacy and excessive polypharmacy, as well as the mean number of dispensed drugs per individual, increased year-by-year in Sweden 2005-2008.
Competing interests The authors declare that they have no competing interests. Authors' contributions All authors participated in the design of the study and the discussion of findings. BH and KH executed the data management and BH drafted the manuscript. KH, BÅ and GP revised the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6904/10/16/prepub
Acknowledgements We wish to thank Andrejs Leimanis and Helena Schiöler, at The Swedish Board for Health and Welfare, for their assistance with data material and statistical procedures from the Swedish Prescribed Drug Register. The study was financed through grants from the National Corporation of Pharmacies (Apoteket AB), and was designed and conducted independent of Apoteket AB.
CC BY
no
2022-01-12 15:21:36
BMC Clin Pharmacol. 2010 Dec 2; 10:16
oa_package/d5/b5/PMC3014875.tar.gz
PMC3014876
21134255
Introduction Declining ischemic heart disease (IHD) mortality in the world's developed regions can be partly explained by decreasing incidence of the disease, suggesting effective primary prevention measures, and partly by reduced case fatality rates, reflecting improved primary and secondary care [ 1 ]. Downward trends in IHD mortality have been seen in Scotland [ 2 ], but to a lesser extent than in other Western European countries, resulting in Scotland having one of the worst IHD mortality rates in the region [ 3 ]. Incidence [ 2 ] and case fatality [ 4 - 7 ] from the disease have also been declining rapidly in Scotland, but its fatal and nonfatal event rates remain high when compared internationally [ 8 ]. The overall picture for survival is not so bleak, with 28-day case fatality following an acute myocardial infarction (AMI) in Scotland shown to be the same as or lower than the average across all populations monitored by the World Health Organization MONICA Project [ 8 ]. However, for Scotland to achieve its health potential, it is important that the downward trends in AMI case fatality are experienced by all sectors of society. Despite large reductions in rates, there remain strong regional differences [ 9 , 10 ] and socioeconomic inequalities [ 10 , 11 ] in IHD mortality in Scotland. This partly reflects increasing socioeconomic and geographic variations in AMI incidence [ 2 ] as well as similar patterning of AMI case fatality. The majority of Scottish studies exploring the effect of deprivation on AMI case fatality have shown that socioeconomic inequalities exist [ 5 - 7 , 12 - 14 ]. Such inequalities also contribute to the gradient in IHD mortality. Studies outside Scotland have provided conflicting evidence, with some showing evidence of a socioeconomic gradient in AMI case fatality whose strength is dependent on whether the focus is on in-hospital [ 15 - 17 ] or out-of-hospital [ 15 , 18 ] AMI events. 
Some work, however, has suggested that the associations between short-term mortality and area deprivation or education are weak and inconsistent [ 19 ]. Further, in Scotland, there is conflicting evidence of sex differences in the socioeconomic inequalities in short-term AMI case fatality. For example, in terms of hospital admissions, inequalities in one-month case fatality have been shown to exist in men but not women [ 7 ], but also to be stronger in women than in men [ 13 ]. The most up-to-date study comparing socioeconomic inequalities between males and females examined AMI events up to 1995. The extent of socioeconomic differences and the Scottish government's commitment to tackling inequalities in health emphasize the importance of exploring whether similar declines in case fatality rates over recent years have been experienced by all population groups. This study was designed to explore socioeconomic inequalities in short-term AMI case fatality and, in particular, to examine any temporal changes and associations with age, sex, and geography. Specifically, we will address three hypotheses. We hypothesize that a socioeconomic inequality pattern will exist for one-day case fatality, and that, similar to AMI incidence inequalities [ 2 ], this pattern will persist over time and will be steeper at younger ages and steeper for women than for men. For days 1-27, we hypothesize that no socioeconomic inequalities will exist. Finally, we hypothesize that any geographical variations in case fatality will be explained by the patterns of socioeconomic deprivation in Scotland. Population-based studies in AMI case fatality are fairly uncommon, but Scotland has the advantage of having the only morbidity record database in the United Kingdom that routinely links information on all hospital admissions with all mortality data for a geographically defined population, the 5.1 million people in Scotland [ 20 ]. 
The aim of this work is to examine the trends and inequalities in short-term case fatality after a first AMI event in Scotland between 1988 and 2004. Our data contain accurate information from 1981 to 2004 on 1,035,692 individual IHD events, of which 457,363 resulted in deaths.
Methods Data source The data were obtained from the Scottish system of hospital discharge records. The Information and Services Division of the National Health Service in Scotland routinely links these records to mortality data provided by the General Register Office for Scotland [ 21 ]. Incidence was defined as a first-time attack within a seven-year period, with AMI (ICD9: 410; ICD10: I21-I22) as primary or secondary diagnosis at discharge or underlying or contributory cause of death, or other IHD (ICD9: 410-414; ICD10: I20-I25) as underlying cause of death [ 22 ]. Linkage enabled us to identify 375,848 individuals aged 30+ years who had an incident event between 1988 and 2004. Short-term case fatality was defined as the proportion of AMI incident events in which the patient died from any cause within a 28-day period. More specifically, Day0 means death on the day of the event, with a denominator of all incident events, and Day1-27 means death within 28 days, with a denominator of all who survived the day of the event. Each patient record provides information on age, sex, postcode, and Health Board (HB) of residence, and date of admission, discharge, and death, if it occurred. Postcode sectors (mean population 5,402; range 53-20,512) were used to allocate patients into seven deprivation categories (DEPCAT) using Carstairs socioeconomic deprivation scores [ 23 ]. Throughout the period, 3,499 patient records (1%) had missing postcode information and were excluded from our analysis. Statistical analysis Case fatality is defined as the proportion of events ending fatally within a defined period from the onset of an attack [ 24 ] and is therefore modeled using logistic regression. We analyzed the data using multilevel modeling [ 25 ] in MLwiN [ 26 ] to take account of the hierarchical data structure.
Individuals are nested within postcode sectors within HBs; the rates for postcode sectors within the same HB are likely to be correlated, as is case fatality for individuals within a given postcode sector. Adjustment was made for age, sex, year of first AMI event, DEPCAT (1-least; 7-most), and significant interactions between these. Odds ratios were used to assess the effect of these factors on AMI case fatality rates. Directly standardized rates are also presented within these strata. Geographic inequalities were assessed using the intraclass correlation coefficient [ 27 ], which partitions the total variation in case fatality to that attributable to each of postcode sector (n = 1,010) and HB (n = 15) levels. A larger variance at a given level indicates greater geographical inequality. Larsen and Merlo [ 28 ] discuss the disadvantages of using the intraclass correlation when examining binary responses and propose the use of the median odds ratio (MOR) as an alternative. These are also presented. The MOR quantifies the variation between areas by comparing two persons with identical characteristics from two different, randomly chosen areas. It is the median of the odds ratios obtained between the person from the area with higher propensity of case fatality and the person from the area with lower propensity.
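The two area-level measures described above can be sketched numerically: a minimal illustration of the Larsen-Merlo median odds ratio formula and the latent-variable intraclass correlation for a binary outcome. The variance values used in the check are hypothetical, not the study's estimates:

```python
import math
from statistics import NormalDist

Z75 = NormalDist().inv_cdf(0.75)  # 75th percentile of the standard normal, ~0.6745

def median_odds_ratio(sigma2):
    """Larsen & Merlo's MOR for a random-effect variance sigma2 on the
    log-odds scale: exp(sqrt(2 * sigma2) * z_0.75)."""
    return math.exp(math.sqrt(2.0 * sigma2) * Z75)

def latent_icc(sigma2_area):
    """Intraclass correlation for a binary outcome under the latent-variable
    formulation: area-level variance divided by the area-level variance plus
    the logistic residual variance pi^2 / 3."""
    return sigma2_area / (sigma2_area + math.pi ** 2 / 3)
```

A random-effect variance of zero at a level gives an MOR of exactly 1, i.e. no geographic inequality at that level; larger variances give MORs further above 1.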
Results Baseline characteristics of incident cases and case fatalities of AMI The number of people who experienced their first AMI (372,349) between 1988 and 2004 is shown in Table 1, broken down by sex, age, deprivation, and year. Alongside these figures are the numbers and percentages of those who died on the day of their AMI and those who died within 28 days. Between 1988 and 2004, 178,781 patients with a first AMI died on the day of their event (crude Day0 case fatality 48%), and of the 192,568 patients who survived the day of their first AMI, 34,198 died within 28 days (crude Day1-27 case fatality 18%). Age-standardized Day0 case fatality decreased from 51% in 1988-90 to 41% in 2003-04, and Day1-27 decreased from 29% to 18% over the same time period. For each case fatality definition, standardized rates are shown, and univariable models were fitted to explore the relationships with sex, age, DEPCAT, and year. Sex significantly affects each definition. Women have significantly higher short-term case fatality rates than men, and this sex difference is strongest in Day1-27 case fatality. As expected, age is strongly related to case fatality. The chances of short-term survival are reduced significantly as age increases. The effect of deprivation was unclear from the unadjusted results. There was a clear downward time trend, with the odds of case fatality decreasing significantly as year increases. Geographic variation in case fatality Table 2 shows the random part estimates from fitting models for each case fatality outcome before and after the inclusion of deprivation. Examination of the intraclass correlation coefficients from each model indicates that the majority (98-99%) of the variation in short-term case fatality is due to differences between individuals, suggesting little geographic variation in short-term survival. The MOR at Level 3 quantifies the differences between HBs, while at Level 2 it quantifies the difference between postcode sectors within the same HB.
Focusing on Day0 case fatality, the MOR at each level is 1.16, and the overall MOR, associated with differences between randomly chosen postcode sectors from different HBs, is therefore 1.35 (= 1.16 × 1.16). These geographic inequalities are on a similar scale to, for example, the socioeconomic inequalities experienced by men aged 60+ years (Table 3 ). When DEPCAT is included in the model, the unexplained heterogeneity (comparing persons from postcode sectors of the same kind; e.g., both areas of low deprivation) does not change much for either of the case fatality outcomes. Therefore, the geographic variation between areas remains even after accounting for small area deprivation. Interaction between age, sex, deprivation, and year on Day0 case fatality There was evidence of a significant four-way interaction ( p -value < 0.05) between age, sex, year, and DEPCAT for Day0 case fatality. The standardized rates and odds ratios in Table 3 explore the nature of this interaction. To simplify the results, age has been recategorized into two groups (30-59 and 60+ years) and year into three groups (1988-1993, 1994-1999, and 2000-2004). Odds ratios comparing each DEPCAT to DEPCAT1 (most affluent areas) are presented by sex, age, and year group. Significant socioeconomic inequalities existed for men of all ages; these persisted over time and appeared slightly stronger in 30- to 59-year-olds. For example, the odds of case fatality for men aged 30-59 living in the most deprived areas in 2000-2004 were 1.67 (1.28-2.17) times as high as in the least deprived areas. Similar inequalities were apparent in women; however, the odds of case fatality were only significantly greater when comparing the most deprived areas (DEPCAT 7) to the least for women aged 30-59. Again, these inequalities persisted over time. In the younger age group, the odds of case fatality in the most deprived areas were 1.86 (1.08-3.21) times as high as in the least deprived areas. 
Interaction between age, sex, deprivation, and year on Day1-27 case fatality For Day1-27 case fatality, there was a significant interaction between sex, age, and DEPCAT ( p -value < 0.05). Table 4 explores the nature of this interaction by presenting odds ratios comparing each DEPCAT with DEPCAT1 for men and women in each age group. There was little evidence of socioeconomic inequality, with only slightly elevated odds ratios for men aged 60+ living in DEPCATs 3-6; e.g., the odds of case fatality for those men living in DEPCAT 6 were 1.11 (1.00, 1.22) times as high as in DEPCAT 1 (least deprived).
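As a quick arithmetic check, the crude case fatality proportions reported in the Results follow directly from the counts given there (denominators as stated in the text):

```python
# Crude case fatality proportions from the counts reported in the Results
# (denominators as stated in the text).
first_ami = 372_349          # first AMI events, 1988-2004
day0_deaths = 178_781        # died on the day of the event
day0_survivors = 192_568     # survived the day of the event (as reported)
day1_27_deaths = 34_198      # died within 28 days among Day0 survivors

day0_cf = day0_deaths / first_ami              # ~0.48: "crude Day0 case fatality 48%"
day1_27_cf = day1_27_deaths / day0_survivors   # ~0.18: "crude Day1-27 case fatality 18%"
```

Note that these are crude proportions; the 51%→41% and 29%→18% trends quoted in the text are age-standardized rates, which cannot be reproduced from the marginal counts alone.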
Discussion Unlike other studies of this type, ours examines trends in socioeconomic inequalities in short-term case fatality following a first AMI and the associations with geography, age, and sex. It is the largest population-based study to explore short-term survival from the disease. Inequalities in "immediate" case fatality following a first AMI There has been a steep downward trend in short-term case fatality rates after a first AMI over recent years in Scotland. However, close to one-half of AMI incident events still result in death on the day of the event, implying that a high proportion of these first AMIs are sudden cardiac deaths. There is evidence of socioeconomic inequalities in immediate case fatality [ 15 , 29 , 30 ]; however, it is unclear whether these gradients differ by age and sex and how they have changed over time. We found that socioeconomic inequalities existed in immediate case fatality in Scotland but varied by age, sex, and time. As hypothesized, the inequalities for men persisted over time and appeared slightly stronger in the younger (30-59 years) age group. Similar inequalities were also evident for women of this age, and these persisted over time when comparing those in the most deprived areas to the least. Although the inequalities for this AMI outcome are not as large as those generally observed for other outcomes, such as coronary heart disease (CHD) mortality [ 10 ] or incidence [ 2 ], they still make an important contribution to the overall picture of CHD inequalities. We hypothesized that geographical variations in case fatality would be explained by the patterning of socioeconomic deprivation in Scotland. However, after adjusting for area-level deprivation, we found that small but significant geographical variations in case fatality remained. It is logical to think that Day0 case fatalities are mainly sudden cardiac deaths, with limited potential for treatment to be effective.
A possible exception is the effect of delays in receiving treatment due to people living in remote areas. Further analysis, not shown here, showed that the significant geographical variation was mainly due to rates being higher in more rural Health Boards. It has been shown elsewhere that short-term case fatality after an AMI is greater in rural areas of Scotland after taking into account deprivation [ 5 ], suggesting that a delay in the provision of service is likely to be influential here. Reducing immediate deaths after AMI incident events requires a reduction in the number of severe first AMIs, which principally requires a focus on primary prevention. Although there is a lack of literature exploring the association between inequalities in AMI risk factor exposure and AMI case fatality, we will discuss which risk factor inequalities have been shown to exist in Scotland and hypothesize any associations with our outcomes. Changes in smoking patterns are likely to be related to changes in immediate case fatality from AMI. Smoking declines over the last 30 years in Scotland have been steeper in men than in women [ 31 ], and cigarette smoking prevalence is highest in younger age groups [ 32 ]. There are clear socioeconomic inequalities in cigarette smoking, and evidence suggests that gradients are now greater in women than in men in Scotland [ 32 ]. The ratio of mean number of cigarettes smoked in the lowest household income quintile compared to the highest was 1.09 in men and 1.51 in women in 2003. A further major risk factor for AMI is obesity, and differences in prevalence are likely to be affecting the severity of first AMI events and hence inequalities in Day0 case fatality. There have been substantial increases in obesity in Scotland over recent years, with higher prevalence in women and the more deprived [ 32 ]. 
Diabetes rates, which are associated with obesity levels, have also been increasing in Scotland [ 32 ] and are associated with deprivation, with a suggestion of a stronger gradient in women [ 32 ]. There is also evidence that the effect of diabetes on AMI risk is stronger in women than men. Huxley et al [ 33 ] found that the association between CHD mortality and diabetes was stronger for women; the pooled relative risk of death was 3.50 for women and 2.06 for men. There is also evidence of a widening of socioeconomic inequalities in high blood pressure in women in the UK [ 34 ]. The relationship between cardiovascular risk factors and deprivation is complex and varies across sex and age groups. However, inequalities in risk factors such as those mentioned must be addressed to have a positive effect on Day0 case fatality. Improved primary prevention strategies are vital to reduce the rates overall as these early deaths account for a large proportion (84% in 2003-04) of 28-day case fatality and contribute strongly to Scotland's poor IHD mortality record. Along with reducing incidence rates [ 2 ], efforts must be made to reduce the severity of first-time AMIs in Scotland's population. Inequalities in 28-day case fatality following a first AMI There has been a significant drop in 28-day case fatality among those who survive the day of their first AMI event. Such patients will have reached a hospital and received treatment, so reductions in case fatality largely reflect improvements in such treatments. The National Health Service in Scotland provides free health care to all permanent residents, hence we hypothesized that there would be no socioeconomic variations in 28-day case fatality. There was little evidence from our data of such inequalities. As previously mentioned, risk factor exposure is likely to have a stronger effect on immediate case fatality than on 28-day case fatality. 
A study examining the effect of a range of risk factors on hospitalized case fatality showed a lack of association with some of the common risk factors, such as smoking, high cholesterol and high blood pressure [ 35 ]. However, high levels of physical activity and moderate drinking were associated with lower case fatality, so variations in these in Scotland may explain the small socioeconomic inequalities in case fatality among those who survived the day of their first AMI. Study limitations One limitation of our study is that we only had an area-based measure of deprivation available and not individual socioeconomic status. The Carstairs deprivation index is a commonly used measure and has been validated against individual socioeconomic status [ 36 ]. However, the existing literature lacks information on whether the effect of area-level socioeconomic status on case fatality in Scotland is important over and above the individual's socioeconomic position. Previous work has shown that the population size of the geographic area for which a deprivation index is derived may influence estimation of socioeconomic gradients, whereby estimates of inequalities have been shown to be diluted when the geographical units are large [ 37 ]. Our deprivation index is based on areas with a mean population of 5,402, and therefore our estimates of socioeconomic inequalities are likely to be underestimates. It is unfortunate that the routine data used in this study do not permit adjustment for both individual and contextual measures. Further work is needed in this area. A further limitation is that we do not have individual data on IHD risk factors or comorbidity and can therefore only hypothesize as to why rates are decreasing and inequalities persisting or changing over time. We can also only hypothesize about the contribution that reductions and inequalities in AMI case fatality are making to the trends and patterns of AMI mortality in Scotland.
It should be noted that our study examines a slightly different hospitalized case fatality outcome than most of the other studies described in this paper. We include patients who have been hospitalized and who have survived the day of their event. We are, therefore, making inferences about a slightly different population group: one that contains fewer of the more severe AMI cases. It should also be noted that we have focused on relative inequalities in case fatality; inequalities in the absolute numbers of incident events resulting in death differ but also persist over time.
Conclusions There have been progressive improvements in short-term case fatality from AMI in Scotland. This may reflect improved treatments and a reduction in the incidence of sudden deaths. A high proportion of AMI incident events result in death on the day of the event, mainly sudden cardiac deaths. This highlights the need for primary prevention strategies to reduce risk factor exposure. Socioeconomic inequalities in immediate case fatality were persisting over time in younger men and women, suggesting socioeconomic gradients in risk factor exposure for this age group. In contrast, of those who survive the day of their first AMI, there was little evidence of socioeconomic inequality in 28-day case fatality. This reflects, to some extent, socioeconomic equality in the provision of health care and access to treatment across Scotland. Inequalities in immediate AMI case fatality suggest that this type of mortality may be highly preventable in Scotland, emphasizing the need for population-wide primary prevention. Reducing case fatality rates in the most disadvantaged populations is key to reducing total AMI mortality in Scotland and would help bring rates to a level comparable with the rest of Western Europe.
Background There have been substantial declines in ischemic heart disease (IHD) mortality in Scotland, partly due to decreases in acute myocardial infarction (AMI) incidence and case fatality (CF). Despite this, Scotland's IHD mortality rates are among the worst in Europe. We examine trends in socioeconomic inequalities in short-term CF after a first AMI event and their associations with age, sex, and geography. Methods We used linked hospital discharge and death records covering the Scottish population (5.1 million). Between 1988 and 2004, 178,781 of 372,349 patients with a first AMI died on the day of the event (Day0 CF) and 34,198 died within 28 days after surviving the day of their AMI (Day1-27 CF). Results Age-standardized Day0 CF at 30+ years decreased from 51% in 1988-90 to 41% in 2003-04. Day1-27 CF decreased from 29% to 18% over that period. Socioeconomic inequalities in Day0 CF existed for both sexes and persisted over time. The odds of case fatality for men aged 30-59 living in the most deprived areas in 2000-04 were 1.7 (95%CI: 1.3-2.2) times as high as in the least deprived areas and 1.9 (1.1-3.2) times as high for women. There was little evidence of socioeconomic inequality in Day1-27 CF in men or women. After adjustment for socioeconomic deprivation, significant geographic variation still remained for both CF definitions. Conclusions A high proportion of AMI incidents in Scotland result in death on the day of the first event; many of these are sudden cardiac deaths. Short-term CF has improved, perhaps reflecting treatment advances and reductions in first AMI severity. However, persistent socioeconomic and geographic inequalities suggest these improvements are not uniform across all population groups, emphasizing the need for population-wide primary prevention.
Competing interests The authors declare that they have no competing interests. Authors' contributions CD is the corresponding author and guarantor of this paper. CD formulated the research question, analysed and interpreted the data and wrote the paper. AL initiated the study and commented on the paper. Both authors read and approved the final manuscript.
Acknowledgements We thank the Information Services Division of the NHS in Scotland for providing the data. The Social and Public Health Sciences Unit is jointly funded by the Medical Research Council and the Chief Scientist Office of the Scottish Government Health Directorate. This research was funded by the Chief Scientist Office as part of the Measuring health, variations in health and determinants of health programme, wbs U.1300.00.001.
CC BY
no
2022-01-12 15:21:36
Popul Health Metr. 2010 Dec 6; 8:33
oa_package/8c/72/PMC3014876.tar.gz
PMC3014877
21129173
Background Centronuclear myopathies (CNM) are a group of congenital disorders characterized by hypotonia and skeletal muscle biopsies typically showing small rounded fibers with central nuclei [ 1 - 4 ]. Abnormal nuclear positioning is seen in several myopathies, but clinical, genetic and pathological factors clearly distinguish these myopathies from CNM. Three CNM classes have been described: the severe neonatal X-linked form, also called myotubular myopathy (XLCNM, OMIM 310400), the autosomal recessive form with childhood onset (ARCNM, OMIM 255200), and the autosomal dominant form with adult onset (ADCNM, OMIM 160150). Myotubularin ( MTM1 ) is mutated in XLCNM [ 5 ] and belongs to a large family of ubiquitously expressed phosphoinositide phosphatases implicated in intracellular vesicle trafficking [ 6 - 8 ]. The large GTPase dynamin 2 ( DNM2 ), mutated in ADCNM, is a mechanochemical enzyme and a key factor in membrane trafficking and endocytosis [ 9 - 11 ]. Amphiphysin 2 ( BIN1 ) is mutated in ARCNM and possesses an N-terminal BAR domain able to sense and bend membranes and an SH3 domain mediating protein-protein interactions [ 12 , 13 ]. A muscle-specific isoform is implicated in T-tubule biogenesis and contains a polybasic residue sequence binding to phosphoinositides [ 14 ]. Only 4 unrelated individuals with BIN1 mutations have been molecularly and clinically characterized to date [ 12 , 15 ], and this report is the first description of intrafamilial variability, in two patients from a consanguineous family. Clinical analysis of the respiratory and cardiac involvement diagnosed in the more severely affected male patient expands the phenotypic spectrum of autosomal recessive centronuclear myopathy. Furthermore, this is the first time that patients with a BIN1 mutation have been analyzed by whole-body MRI, and the results contrast with previous findings in DNM2-related CNM.
Clinical report and results Patient 1 is a 13-year-old girl belonging to a consanguineous family from Turkey without ancestral history of neuromuscular disorders (Figure 1A-B ). There were no complications during pregnancy, and antenatal signs of muscle disorders such as polyhydramnios and reduced fetal movements were not noted. Hypotonia was diagnosed at birth and motor development was delayed: head control was achieved at 6 months, walking at 18 months and running at 36 months. Muscle weakness was predominantly proximal, accompanied by mild facial weakness, ptosis and ophthalmoplegia/paresis. Tendinous reflexes were absent and she has no contractures. Although she has mild mental retardation (IQ 60), speech development was normal and she was integrated into the regular educational system. Echocardiography, electrocardiography and electroneuromyography were normal, and there were no indications of myotonia or neuromuscular junction abnormalities. Serum creatine kinase was mildly elevated [380 IU/L (70-150); normal range 60-320 IU/L]. She is currently walking independently but she has difficulty climbing stairs and running. Pulmonary function tests are normal. Patient 1 has one non-affected sister and none of the parents displays clinical features of a muscle disorder. Patient 2, a 14-year-old boy, is the first-degree cousin of patient 1 and belongs to a second consanguineous family loop (Figure 1A-B ). The course of the disease was rather similar to that of patient 1, with normal pregnancy, hypotonia at birth, delayed motor milestones and normal speech development despite a mild mental retardation (IQ 60). Head control was achieved at 6 months, walking at 18 months and running at 36 months. Likewise, patient 2 presents a predominantly proximal muscle weakness, absent tendinous reflexes, facial weakness, ptosis and ophthalmoplegia/paresis. However, his phenotype is more severe, as he has not been able to walk independently since the age of 10 years and is wheelchair-bound.
Furthermore, the degree of ophthalmoplegia/paresis and ptosis is more prominent than in patient 1. In addition, electrocardiography and Holter examination revealed premature ventricular complexes while echocardiography was normal. Serum creatine kinase was 450 IU/L (70-150) and electromyography revealed myopathic changes in all muscle groups. He needs non-invasive respiratory support for four hours per day. Patient 2 has healthy parents and 3 non-affected siblings. Whole body MRI of both patients revealed similar results, with increased signals on T2- and T1-weighted images in thigh muscles and upper and lower extremities, consistent with fatty infiltrations. Detailed axial imaging of the femoral and crural regions of patient 1 revealed prominent fatty involvement of soleus, tibialis anterior, peroneal and extensor muscles, but sparing of the gastrocnemius (Figure 1C ). All thigh muscle groups were affected without a selective pattern. Imaging of the upper limb demonstrated relative sparing of triceps, subscapularis and flexor muscle groups (Figure 1C ). No abnormalities of brain, heart or other organs were noted. Diagnosis of CNM for both patients was suggested by muscle biopsies showing numerous centrally located and partially clustered nuclei, variable fiber size, type 1 fiber predominance, extensive myofibrillar disorganization (Figure 1F ) and fibrosis (Figure 1D ). Dystrophin expression was normal (Figure 1E ). BIN1 sequencing revealed a homozygous nonsense mutation in exon 20 in both patients (c.1717C > T; p.Gln573stop). Both patients have healthy parents heterozygous for this mutation.
Discussion By direct sequencing of the 20 BIN1 exons and the adjacent splice-relevant regions we identified the novel homozygous nonsense BIN1 mutation p.Gln573stop in two first-degree cousins from a consanguineous family. Both patients present predominantly proximal muscle weakness and classical features of CNM with a general progressive hypotonia involving facial weakness and ptosis. Ophthalmoplegia/paresis, as seen in both patients, is not a common sign of ARCNM (Table 1 ), while it is consistently reported for the X-linked form. However, ophthalmoplegia/paresis often evolves over time and might not have been diagnosed in all BIN1 patients due to their young age. Whole body MRI showed fatty infiltrations of different muscle groups with selective muscle involvement in the lower leg, and a general muscle involvement in the thigh. This contrasts with MRI findings in DNM2 -related centronuclear myopathies, where prominent fatty atrophy was predominantly documented in the lower leg muscles, but only in specific thigh muscles: increased signals were reported for adductor longus, semimembranosus, rectus femoris, biceps femoris, and vastus intermedius muscles, while the adductor magnus, gracilis, sartorius, semitendinosus, vastus lateralis, and vastus medialis muscles were only minimally affected [ 16 ]. This is consistent with the observation that ARCNM patients with BIN1 mutations predominantly display a proximal muscle weakness (Table 1 ), whereas ADCNM patients with DNM2 mutations rather present involvement of the distal muscles [ 17 ]. Characterization of additional ARCNM and ADCNM patients is required to confirm that MRI could be used as a differential marker to direct genetic diagnosis. To our knowledge, this is the second documented ARCNM family with more than one molecularly characterized member. Nicot et al. 
reported a family with three affected members, two of whom died within the first year of life, precluding a long-term comparison of clinical signs [[ 12 ], Table 1 ]. In the present study we observed a clear intra-familial variability, which might be linked to gender or modifier genes differing between individuals. Patient 2 does not walk independently, has a more pronounced ophthalmoplegia/paresis and ptosis, and electroneuromyography revealed myopathic changes not detected in patient 1. Patient 2 has also been diagnosed with additional respiratory and cardiac involvement. Abnormal ventilation has been documented for two other autosomal recessive cases [[ 15 , 18 ], Table 1 ] and a further patient died at birth due to respiratory failure [[ 12 ], Table 1 ]. Cardiac arrhythmia was noted for patient 2, while ECG examinations did not reveal abnormalities for patient 1. As cardiac abnormalities have been reported for another ARCNM patient who died from myocarditis shortly after birth [[ 12 ], Table 1 ], we suggest careful cardiac function examinations and long-term follow-up of patients with BIN1 mutations. A mild mental retardation, as seen in both patients, has recently been described in another ARCNM patient [[ 15 ], Table 1 ]. Mental impairment was not noted in the other BIN1 patients and is rarely present in other CNM forms. However, decreased synaptic vesicle recycling in the murine brain was described in amphiphysin 1 knockout mice, suggesting a possible pathological mechanism affecting cognitive abilities [ 19 ]. We cannot exclude, however, that the mental retardation is unrelated to the BIN1 mutation, especially in a consanguineous family. BIN1 was initially identified as a c-Myc interacting pro-apoptotic tumor suppressor [ 20 ]; BIN1 expression is reduced in several cancers and mice deficient for BIN1 develop more aggressive tumors [ 21 , 22 ]. However, no tumors were reported in the small set of ARCNM patients with BIN1 mutations. 
The novel p.Gln573stop mutation described in this study is in direct spatial proximity to the previously identified p.Lys575stop mutation, which results in the expression of a truncated protein with decreased dynamin 2 binding [ 12 ]. At the time of publication, the p.Lys575stop patient was 17 years old, able to walk short distances, had normal cognitive development and no cardiac involvement [[ 12 , 18 ], Table 1 ], in contrast to the present study. Table 1 gives an overview of the clinical manifestations of all currently published BIN1 patients. Disease onset at birth was reported for all patients except for ADR71 and AEY47 (p.Asp151Asn and p.Arg154Gln, respectively), harboring adjacent missense mutations in the BAR domain and presenting a generally milder etiopathology. The identification of more BIN1 mutations and respective detailed clinical descriptions might help to establish an unambiguous genotype/phenotype correlation and to clarify if the most 3' BIN1 exon represents a hot spot prone to mutations. In conclusion, this study expands the phenotypic spectrum of BIN1 -related centronuclear myopathy and is the first clinical description of intrafamilial variability in a consanguineous CNM family.
Centronuclear myopathies (CNM) describe a group of rare muscle diseases typically presenting an abnormal positioning of nuclei in muscle fibers. To date, three genes are known to be associated with a classical CNM phenotype. The X-linked neonatal form (XLCNM) is due to mutations in MTM1 and involves a severe and generalized muscle weakness at birth. The autosomal dominant form results from DNM2 mutations and has been described with early childhood and adult onset (ADCNM). Autosomal recessive centronuclear myopathy (ARCNM) is less characterized and has recently been associated with mutations in BIN1 , encoding amphiphysin 2. Here we present the first clinical description of intrafamilial variability in two first-degree cousins with a novel BIN1 stop mutation. In addition to skeletal muscle defects, both patients have mild mental retardation and the more severely affected male also displays abnormal ventilation and cardiac arrhythmia, thus expanding the phenotypic spectrum of BIN1 -related CNM to non-skeletal muscle defects. We provide an up-to-date review of all previous cases with ARCNM and BIN1 mutations.
Competing interests The authors declare that they have no competing interests. Authors' contributions JB carried out the molecular genetics studies. UY, SHK and ED carried out the clinical investigation. RO carried out the histologic studies. HC carried out MRI. JB and JL wrote the manuscript. JL conceived and coordinated the study. All authors have read and approved the final manuscript. Consent Written informed consent was obtained from the patients' parents for publication of these case reports and accompanying images.
Acknowledgements The authors thank the patients and families for their participation. This study was supported by the Institut National de la Santé et de la Recherche Médicale (INSERM), Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, Collège de France, the Association Française contre les Myopathies (AFM), Fondation Recherche Médicale, Agence Nationale de la Recherche and E-rare program. Johann Böhm was supported by the Deutsche Forschungsgemeinschaft (DFG).
CC BY
no
2022-01-12 15:21:36
Orphanet J Rare Dis. 2010 Dec 3; 5:35
oa_package/34/88/PMC3014877.tar.gz
PMC3014878
21126355
Background Preeclampsia, characterized by hypertension and proteinuria developing after midgestation, is a severe complication of human pregnancy with a worldwide incidence of 2-10%. It is one of the leading causes of maternal, as well as perinatal morbidity and mortality, even in developed countries. Despite intensive research efforts, the etiology and pathogenesis of preeclampsia are not completely understood. Increasing evidence suggests that an excessive maternal systemic inflammatory response to pregnancy with activation of both the innate and adaptive arms of the immune system is involved in the pathogenesis of the disease [ 1 , 2 ]. The development of preeclampsia is influenced by both genetic and environmental risk factors, suggesting its multifactorial inheritance [ 3 - 8 ]. An important feature of systemic inflammation in preeclampsia is the absence of the Th2 skewness characteristic of healthy pregnancy, and thus the predominance of Th1-type immunity. Saito et al. were the first to report that the percentage of Th1 cells and the Th1/Th2 ratios were significantly higher, while the percentage of Th2 cells was significantly lower, in the peripheral blood in preeclampsia than in the third trimester of normal pregnancy [ 9 ]. In another study, this group observed increased production of interleukin (IL)-2, interferon (IFN)-γ and tumor necrosis factor (TNF)-α by peripheral blood mononuclear cells (PBMCs) in preeclampsia and, interestingly, positive correlations between mean blood pressure and concentrations of Th1 cytokines [ 10 ]. The shift to a predominant Th1-type immunity in preeclampsia was reinforced by other experiments on intracellular cytokine measurements in peripheral blood T (both helper and cytotoxic) cells and NK cells, as well as by assessment of cytokine secretion levels of PBMCs isolated from preeclamptic patients [ 11 - 14 ]. 
However, the studies on circulating levels of cytokines in normal pregnancy and preeclampsia yielded conflicting results [ 15 , 16 ]. The discrepancies may be due to different techniques used for cytokine detection, differences in the ethnicity of the study populations, disease severity or sample sizes. The aim of this study was to determine circulating levels of cytokines, chemokines and adhesion molecules in a comprehensive manner involving a large number of healthy non-pregnant and pregnant women and preeclamptic patients. We also measured several markers of processes involved in the pathogenesis of preeclampsia, and investigated whether serum cytokine, chemokine and adhesion molecule levels were related to the clinical characteristics and laboratory parameters of the study participants, including markers of overall inflammation (C-reactive protein), endothelial activation (von Willebrand factor antigen) and endothelial injury (fibronectin), oxidative stress (malondialdehyde) and trophoblast debris (cell-free fetal DNA).
Methods Study patients Our study was designed using a case-control approach. Sixty preeclamptic patients, 60 healthy pregnant women with uncomplicated pregnancies and 59 healthy non-pregnant women were involved in the study. The study participants were enrolled in the First Department of Obstetrics and Gynecology and in the Department of Obstetrics and Gynecology of Kútvölgyi Clinical Center, at the Semmelweis University, Budapest, Hungary. All women were Caucasian and resided in the same geographic area in Hungary. Exclusion criteria were multifetal gestation, chronic hypertension, diabetes mellitus, autoimmune disease, angiopathy, renal disorder, maternal or fetal infection and fetal congenital anomaly. The women were fasting, none of the pregnant women were in active labor, and none had rupture of membranes. The healthy non-pregnant women were in the early follicular phase of the menstrual cycle (between cycle days 3 and 5), and none of them received hormonal contraception. Preeclampsia was defined by increased blood pressure (≥140 mmHg systolic or ≥90 mmHg diastolic on ≥2 occasions at least 6 hours apart) that occurred after 20 weeks of gestation in a woman with previously normal blood pressure, accompanied by proteinuria (≥0.3 g/24 h or ≥1 + on dipstick in the absence of urinary tract infection). Blood pressure returned to normal by 12 weeks postpartum in each preeclamptic study patient. Preeclampsia was regarded as severe if any of the following criteria were present: blood pressure ≥160 mmHg systolic or ≥110 mmHg diastolic, or proteinuria ≥5 g/24 h (or ≥3 + on dipstick). Pregnant women with eclampsia or HELLP syndrome (hemolysis, elevated liver enzymes, and low platelet count) were not enrolled in this study. Early onset of preeclampsia was defined as onset of the disease before 34 weeks of gestation (between 20 and 33 completed gestational weeks). 
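The diagnostic thresholds above can be summarized as a small decision function. This is purely an illustrative sketch: the function, its name and its arguments are assumptions introduced here, not part of the study protocol, which applied these criteria clinically.

```python
def classify_preeclampsia(sys_bp, dia_bp, proteinuria_g_24h, gest_week):
    """Return None, 'mild', or 'severe' per the stated study criteria.

    sys_bp / dia_bp: blood pressure in mmHg (>=2 occasions assumed);
    proteinuria_g_24h: 24-hour urinary protein in grams;
    gest_week: completed gestational week at onset.
    """
    if gest_week <= 20:
        return None  # diagnosis requires onset after 20 weeks of gestation
    hypertensive = sys_bp >= 140 or dia_bp >= 90
    proteinuric = proteinuria_g_24h >= 0.3
    if not (hypertensive and proteinuric):
        return None  # both hypertension and proteinuria are required
    severe = sys_bp >= 160 or dia_bp >= 110 or proteinuria_g_24h >= 5.0
    return "severe" if severe else "mild"
```

For example, a reading of 150/95 mmHg with 0.5 g/24 h proteinuria at week 34 meets the mild definition, while 165/95 mmHg with the same proteinuria is classified as severe.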
Fetal growth restriction was diagnosed if the fetal birth weight was below the 10th percentile for gestational age and gender, based on Hungarian birth weight percentiles [ 17 ]. The study protocol was approved by the Regional and Institutional Committee of Science and Research Ethics of the Semmelweis University, and written informed consent was obtained from each patient. The study was conducted in accordance with the Declaration of Helsinki. Biological samples Blood samples were taken from an antecubital vein into plain, as well as EDTA- or sodium citrate anticoagulated tubes, and then centrifuged at room temperature with a relative centrifugal force of 3000 g for 10 minutes. The aliquots of serum and plasma were stored at -80°C until the analyses. Laboratory methods Serum levels of IL-1β, IL-1 receptor antagonist (IL-1ra), IL-2, IL-4, IL-6, IL-8, IL-10, IL-12p40, IL-12p70, IL-18, IFN-γ, TNF-α, interferon-γ-inducible protein (IP)-10, monocyte chemotactic protein (MCP)-1, intercellular adhesion molecule (ICAM)-1 and vascular cell adhesion molecule (VCAM)-1 were measured by multiplex suspension array (Bio-Plex, Cat. No. X500317TGY and XF0000ZGAI) on a Bio-Plex 200 analyzer (Bio-Rad Laboratories, Hercules, California, USA). Levels of transforming growth factor (TGF)-β1 in maternal sera were assessed by ELISA (DRG International, Mountainside, New Jersey, USA, Cat. No. EIA-1864). Standard laboratory parameters (clinical chemistry) and C-reactive protein (CRP) levels were determined by an autoanalyzer (Cobas Integra 800, Roche, Mannheim, Germany) using the manufacturer's kits. Plasma von Willebrand factor antigen (VWF:Ag) levels were quantified by ELISA (Dakopatts, Glostrup, Denmark), while plasma fibronectin concentration was measured by nephelometry (Dade Behring, Marburg, Germany), according to the manufacturer's instructions. 
After extracting DNA with the silica adsorption method, the amount of cell-free fetal DNA in maternal plasma was determined in patients with male newborns by quantitative real-time PCR analysis of the sex-determining region Y (SRY) gene, as we described previously [ 18 ]. Plasma malondialdehyde levels were measured by the thiobarbituric acid-based colorimetric assay [ 19 ]. Statistical analysis The normality of continuous variables was assessed using the Shapiro-Wilk's W -test. As the continuous variables were not normally distributed, nonparametric statistical methods were used. To compare continuous variables between two groups, the Mann-Whitney U -test was applied, whereas to compare them among multiple groups, the Kruskal-Wallis analysis of variance by ranks test was performed. Multiple comparisons of mean ranks for all groups were carried out as post-hoc tests. The Fisher exact and Pearson χ 2 tests were used to compare categorical variables between groups. The Spearman rank order correlation was applied to calculate correlation coefficients. Statistical analyses were performed using the following software: STATISTICA (version 8.0; StatSoft, Inc., Tulsa, Oklahoma, USA) and Statistical Package for the Social Sciences (version 15.0 for Windows; SPSS, Inc., Chicago, Illinois, USA). For all statistical analyses, p < 0.05 was considered statistically significant. In the article, data are reported as median (25-75 percentile) for continuous variables and as number (percentage) for categorical variables.
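The nonparametric workflow described above (normality screening with Shapiro-Wilk, Mann-Whitney U for two groups, Kruskal-Wallis for three groups, Spearman rank correlation for associations) can be sketched in Python with `scipy.stats`. The data below are synthetic stand-ins generated for illustration; the actual patient measurements are not public, and the group names are assumptions mirroring the study design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Skewed (log-normal) stand-ins for a cytokine level in the three study groups
preeclamptic = rng.lognormal(mean=1.2, sigma=0.5, size=60)
healthy_pregnant = rng.lognormal(mean=0.8, sigma=0.5, size=60)
non_pregnant = rng.lognormal(mean=0.8, sigma=0.5, size=59)

# Normality check motivating the choice of nonparametric tests
_, p_normal = stats.shapiro(preeclamptic)

# Two-group comparison: Mann-Whitney U test
_, p_mwu = stats.mannwhitneyu(preeclamptic, healthy_pregnant,
                              alternative="two-sided")

# Multi-group comparison: Kruskal-Wallis ANOVA by ranks
_, p_kw = stats.kruskal(preeclamptic, healthy_pregnant, non_pregnant)

# Association between two continuous markers: Spearman rank correlation
# (a second synthetic marker constructed to correlate with the first)
crp_like = preeclamptic * 0.5 + rng.normal(scale=0.2, size=60)
rho, p_rho = stats.spearmanr(preeclamptic, crp_like)

print(f"Shapiro p={p_normal:.3f}, MWU p={p_mwu:.4f}, "
      f"KW p={p_kw:.4f}, Spearman rho={rho:.2f}")
```

Results would then be summarized, as in the article, as median (25-75 percentile) with p < 0.05 taken as significant.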
Results Patient characteristics The clinical characteristics of the study participants are described in Table 1 . There was no statistically significant difference in terms of age among the study groups. Furthermore, no significant differences were observed in gestational age at blood collection and the percentage of primiparas between preeclamptic patients and healthy pregnant women. However, all of the other clinical features presented in Table 1 differed significantly among our study groups. Fetal growth restriction was absent in healthy pregnant women, whereas the frequency of this condition was 18.3% in the preeclamptic group. Twenty-one women had severe preeclampsia and 5 patients experienced early onset of the disease. Laboratory parameters The laboratory parameters of the study subjects are displayed in Table 2 . As can be seen in the table, there were significant differences in most of the measured laboratory parameters among the three study groups except for serum aspartate aminotransferase (AST) activity. Circulating levels of cytokines, chemokines and adhesion molecules are shown in Table 3 . Apart from serum IL-1β and TGF-β1 levels, all of the measured inflammatory variables differed significantly among our study groups. There were no significant differences in the ratios of IL-2 to IL-4 and IFN-γ to IL-4 between healthy non-pregnant and pregnant women, whereas these ratios were significantly increased in preeclamptic patients as compared to healthy pregnant women (Figure 1 , 2 ). In contrast, IL-18/IL-12p70 ratios were significantly higher, while IL-12p70/IL-12p40 ratios were significantly lower, in healthy pregnant than in non-pregnant women, but they did not differ between preeclamptic patients and healthy pregnant women (Figure 3 , 4 ). 
In the group of preeclamptic patients, no statistically significant differences were found in serum levels of the measured cytokines, chemokines and adhesion molecules between patients with mild and severe preeclampsia, between patients with late and early onset of the disease, or between preeclamptic patients with and without fetal growth restriction (data not shown). Relationship of serum cytokine, chemokine and adhesion molecule levels of the study subjects with their clinical characteristics and laboratory parameters We also investigated whether serum cytokine, chemokine and adhesion molecule levels of the study participants were related to their clinical features and laboratory parameters by calculating the Spearman rank order correlation coefficients (continuous variables) or by the Mann-Whitney U -test (categorical variables). In healthy non-pregnant women, serum IL-6 and TNF-α concentrations correlated significantly with CRP levels (Spearman R = 0.28 and 0.29, respectively, p < 0.05). In the group of healthy pregnant women, we found statistically significant negative correlations between serum IL-2 and IFN-γ concentrations and gestational age at delivery (R = -0.27 and -0.29, respectively, p < 0.05). A significant positive correlation was observed between IL-6 and CRP levels of healthy pregnant women (R = 0.45, p < 0.05), while their TGF-β1 and malondialdehyde concentrations correlated inversely with each other (R = -0.38, p < 0.05). Serum IP-10 levels of healthy pregnant women showed significant positive correlations with serum creatinine levels (R = 0.53, p < 0.05), as well as with plasma levels of VWF:Ag (R = 0.54, p < 0.001) and fibronectin (R = 0.42, p < 0.05), and a significant inverse correlation with fetal birth weight (R = -0.38, p < 0.05). Furthermore, there were significant positive correlations between their serum MCP-1 concentrations and serum creatinine (R = 0.39, p < 0.05), as well as plasma fibronectin levels (R = 0.48, p < 0.001). 
Significant correlations between inflammatory variables of preeclamptic patients and their clinical characteristics and laboratory parameters are presented in Table 4 . There was no other relationship between serum cytokine, chemokine and adhesion molecule levels of the study subjects and their clinical features and measured laboratory parameters in either study group.
Discussion In this study, we determined circulating levels of several cytokines, chemokines and adhesion molecules in healthy non-pregnant and pregnant women and preeclamptic patients by high-throughput multiplex suspension array technology. Except for serum IL-1β and TGF-β1 levels, all of the measured inflammatory variables differed significantly among the three study groups. Simultaneous measurement of several markers of disease processes enabled us to explore their role in the pathogenesis of preeclampsia. Normal pregnancy is characterized by a shift towards Th2-type immunity and the inhibition of cytotoxic Th1 immune responses, which could be harmful to the fetus (reflected by the inverse correlation of serum IL-2 and IFN-γ levels with gestational age at delivery in our healthy pregnant women) [ 20 ]. IL-18 and IL-12 are the key cytokines regulating Th1/Th2 balance. IL-18 alone can induce Th2-type immunity, but in the presence of IL-12, IL-18 stimulates Th1-mediated immune responses [ 21 ]. Indeed, the ratios of IL-18 to IL-12 secreted by PBMCs have been reported to be significantly increased in normal pregnancy [ 22 ]. In healthy pregnant women, the relative abundance of circulating IL-18 over IL-12 expressed by the increased serum IL-18/IL-12p70 ratios observed in our study, as well as the relative deficiency of the bioactive IL-12p70 in relation to IL-12p40 (its competitive inhibitor) reflected by the decreased serum IL-12p70/IL-12p40 ratios, might favour Th2-type immunity. In our preeclamptic patients, serum IL-12p70 levels were significantly higher as compared to healthy pregnant women. Although circulating IL-18 and IL-12p40 levels were also elevated yielding similar IL-18/IL-12p70 and IL-12p70/IL-12p40 ratios as in normal pregnancy, the relative abundance of circulating IL-2 and IFN-γ over IL-4 - as shown by the increased serum IL-2/IL-4 and IFN-γ/IL-4 ratios - might provide a Th1-biased systemic environment in preeclampsia. 
In addition to changes in Th1/Th2 balance, several other soluble inflammatory variables were also altered in normal pregnancy and preeclampsia. Circulating levels of the pro-inflammatory cytokines IL-6 and TNF-α, the chemokines IL-8, IP-10 and MCP-1, as well as the adhesion molecules ICAM-1 and VCAM-1, were raised in preeclampsia compared with healthy pregnancy, resulting in an overall pro-inflammatory systemic environment. Elevated circulating IL-1 receptor antagonist concentrations in preeclampsia reflect increased activity of the pro-inflammatory cytokines IL-1α and β, which have a very short half-life in the circulation, and therefore it is difficult to detect a difference in their serum levels [ 23 ]. The increase in levels of the immunoregulatory cytokine IL-10 in our preeclamptic patients is in line with previous findings and might be a compensatory phenomenon [ 24 ]. On the other hand, the changes in circulating cytokine profile in our healthy pregnant group were - at least in part - anti-inflammatory as shown by the decreased IL-1ra, TNF-α and MCP-1 concentrations relative to non-pregnant women. However, decreased serum IL-10 and increased IP-10 levels found in our healthy pregnant women might drive pro-inflammatory responses. Indeed, the third trimester of normal pregnancy seems to be a controlled state of systemic inflammation, as expressed also by the elevated serum CRP levels in our study [ 25 ]. Interestingly, a state of controlled inflammation at the feto-maternal interface in early pregnancy with production of pro-inflammatory cytokines and chemokines is thought to be beneficial for trophoblast invasion [ 26 , 27 ]. Although serum concentrations of TGF-β1 did not differ among our study groups, elevated levels of its soluble co-receptor, endoglin, have been observed in preeclampsia previously [ 28 ]. Soluble endoglin impairs binding of TGF-β1 to its receptors and downstream signalling, leading to dysregulated TGF-β signalling in the vasculature. 
The maternal systemic inflammatory response characteristic of both the third trimester of normal pregnancy and - in an excessive form - preeclampsia involves an acute-phase reaction as well as systemic oxidative stress, and circulating cytokines are central to these processes [ 29 ]. Pro-inflammatory cytokines, primarily IL-6, can induce an acute-phase response [ 30 ]. Furthermore, cytokines can cause the release of oxygen free radicals, whereas reactive oxygen metabolites can up-regulate the genes that code for pro-inflammatory cytokines and adhesion molecules [ 31 ]. Indeed, serum IL-6 (and TNF-α) concentrations correlated with CRP levels in our healthy non-pregnant and pregnant groups. The inverse correlation between TGF-β1 and malondialdehyde levels of our healthy pregnant women indicates that TGF-β1 could inhibit lipid peroxide production in normal pregnancy. Interestingly, serum MCP-1 and ICAM-1 concentrations showed significant positive correlations with CRP and malondialdehyde levels in the group of preeclamptic patients, which implies that recruitment and adhesion of leukocytes to endothelial cells are central features of the generalized intravascular inflammatory reaction and oxidative stress observed in preeclampsia. The correlation of MCP-1 and ICAM-1 concentrations with blood pressure values and liver function parameters, respectively, suggests that these cytokines and the inflammatory processes they mediate might contribute to the development of hypertension and hepatocellular injury in this pregnancy-specific disorder. Cytokines, chemokines and adhesion molecules could be potential mediators of endothelial dysfunction, which is a hallmark of the maternal syndrome of preeclampsia. Therefore, we examined whether these inflammatory variables were related to the markers of endothelial activation (von Willebrand factor antigen) and injury (fibronectin). 
In this study, significant correlations were found between IP-10, MCP-1 and VCAM-1 levels and endothelial markers in normal pregnancy and preeclampsia. Certain organs with fenestrated (discontinuous) endothelium, such as the kidney (glomeruli), liver (sinusoids) and brain (choroid plexus) are disproportionally affected in preeclampsia. Interestingly, serum IP-10, MCP-1 and VCAM-1 concentrations were also related to renal and liver function parameters in our study. These findings denote the central role of these inflammatory molecules in mediating endothelial damage. IP-10 showed the strongest association with endothelial dysfunction in our healthy pregnant women and preeclamptic patients. Indeed, IP-10 (CXCL10) has pro-inflammatory and anti-angiogenic properties, and this chemokine has been proposed to be a potential link between inflammation and anti-angiogenesis in preeclampsia [ 32 ]. The inverse correlation of IP-10 levels with fetal birth weight of healthy pregnant women suggests its inhibitory role in placental angiogenesis. Although TNF-α can also elicit endothelial cell dysfunction and injury, no significant relationship was observed between its serum concentration and endothelial markers in this study [ 33 ]. Nevertheless, we did not measure levels of soluble TNF receptors, which have a longer half-life than TNF-α and, thus, are thought to be more reliable markers of TNF-α activity. The placenta is supposed to be a potential source of circulating inflammatory cytokines in preeclampsia [ 34 ]. Interestingly, in preeclampsia the syncytiotrophoblast sheds elevated amounts of placental debris into the maternal circulation. The mass of this trophoblast debris can be assessed by the measurement of copies of cell-free fetal DNA in the maternal plasma [ 35 , 36 ]. 
However, circulating levels of cytokines and other inflammatory molecules did not show a significant correlation with those of cell-free fetal DNA in our study, indicating that the trophoblast deportation process may not substantially contribute to the elevated circulating concentrations of inflammatory molecules. Others have also questioned whether the placenta is the major source of pro-inflammatory cytokines in the circulation of preeclamptic women [ 37 ]. Indeed, dysfunctional maternal endothelial cells and activated circulating leukocytes could also release inflammatory molecules into the blood in this disorder. Additionally, there is a strong genetic influence on cytokine production. Therefore, genetic factors might also account - at least partly - for the abnormal cytokine profile observed in preeclampsia [ 4 , 38 - 41 ]. In this study, the similar cytokine profile of preeclamptic patients regardless of the severity, the time of onset of the disease or the presence of fetal growth restriction might be explained by the multifactorial etiology of preeclampsia. Several genetic, behavioural and environmental factors need to interact to produce the complete picture of this pregnancy-specific disorder. Our research group reported various genetic and soluble factors that were associated with the severity or complications of preeclampsia, including HELLP syndrome and fetal growth restriction [ 42 - 45 ]. Nevertheless, it is also possible that the relatively small sample size of this study prevented us from detecting an effect in the subgroup analyses.
Conclusions According to our findings, preeclampsia was associated with an overall pro-inflammatory systemic environment. Elevated amounts of pro-inflammatory cytokines, chemokines and adhesion molecules in the maternal circulation might play a central role in the excessive systemic inflammatory response, as well as in the generalized endothelial dysfunction characteristic of the maternal syndrome of preeclampsia.
Background Preeclampsia is a severe complication of pregnancy characterized by an excessive maternal systemic inflammatory response with activation of both the innate and adaptive arms of the immune system. Cytokines, chemokines and adhesion molecules are central to innate and adaptive immune processes. The purpose of this study was to determine circulating levels of cytokines, chemokines and adhesion molecules in normal pregnancy and preeclampsia in a comprehensive manner, and to investigate their relationship to the clinical features and laboratory parameters of the study participants, including markers of overall inflammation (C-reactive protein), endothelial activation (von Willebrand factor antigen), endothelial injury (fibronectin), oxidative stress (malondialdehyde) and trophoblast debris (cell-free fetal DNA). Results Serum levels of interleukin (IL)-1beta, IL-1 receptor antagonist (IL-1ra), IL-2, IL-4, IL-6, IL-8, IL-10, IL-12p40, IL-12p70, IL-18, interferon (IFN)-gamma, tumor necrosis factor (TNF)-alpha, transforming growth factor (TGF)-beta1, interferon-gamma-inducible protein (IP)-10, monocyte chemotactic protein (MCP)-1, intercellular adhesion molecule (ICAM)-1 and vascular cell adhesion molecule (VCAM)-1 were measured in 60 preeclamptic patients, 60 healthy pregnant women and 59 healthy non-pregnant women by multiplex suspension array and ELISA. In normal pregnancy, the relative abundance of circulating IL-18 over IL-12p70 and the relative deficiency of the bioactive IL-12p70 in relation to IL-12p40 might favour Th2-type immunity. Although decreased IL-1ra, TNF-alpha and MCP-1 concentrations in healthy pregnant women relative to non-pregnant women reflect anti-inflammatory changes in the circulating cytokine profile, their decreased serum IL-10 and increased IP-10 levels might drive pro-inflammatory responses. 
In addition to a shift towards Th1-type immunity (expressed by the increased IL-2/IL-4 and IFN-gamma/IL-4 ratios), circulating levels of the pro-inflammatory cytokines IL-6 and TNF-alpha, the chemokines IL-8, IP-10 and MCP-1, as well as the adhesion molecules ICAM-1 and VCAM-1, were raised in preeclampsia compared with healthy pregnancy, resulting in an overall pro-inflammatory systemic environment. Increased IP-10, MCP-1, ICAM-1 and VCAM-1 concentrations of preeclamptic patients showed significant correlations with blood pressure values, renal and liver function parameters, as well as with CRP, malondialdehyde, von Willebrand factor antigen and fibronectin levels. Conclusions According to our findings, preeclampsia was associated with an overall pro-inflammatory systemic environment. Elevated amounts of pro-inflammatory cytokines, chemokines and adhesion molecules in the maternal circulation might play a central role in the excessive systemic inflammatory response, as well as in the generalized endothelial dysfunction characteristic of the maternal syndrome of preeclampsia.
Authors' contributions ASZ collected data and drafted the manuscript. JR participated in the design of the study. LL determined cell-free fetal DNA. GB carried out multiplex suspension array measurements. AM conceived of the study, participated in its design and coordination, performed statistical analyses and helped to draft the manuscript. All authors read and approved the final manuscript.
Acknowledgements We thank Veronika Makó, László Cervenak, Krisztián Balogh and Miklós Mézes for measuring plasma von Willebrand factor antigen and malondialdehyde concentrations. This work was supported by a research grant from the Faculty of Medicine of the Semmelweis University, as well as by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
CC BY
no
2022-01-12 15:21:36
BMC Immunol. 2010 Dec 2; 11:59
oa_package/25/c9/PMC3014878.tar.gz
PMC3014880
21126363
Background Kaposi sarcoma-associated herpesvirus (KSHV, also known as human herpesvirus 8) is considered a necessary but insufficient cause of Kaposi sarcoma (KS) [ 1 ]. Without overt immunosuppression such as AIDS or allogeneic transplantation, the annual incidence rate of classic KS (cKS) after age 50 is only about 6.2/100,000 and 2.5/100,000 for KSHV-seropositive men and women, respectively [ 2 ]. Non-smoking, diabetes, and use of corticosteroid medications have 2- to 4-fold effects on the risk of cKS [ 3 , 4 ], but additional cofactors remain to be identified. Because KS has unusual clinical and geographic features, at least four categories of environmental cofactors have been proposed. Noting similarities to podoconiosis, Ziegler postulated that KS may result from volcanic soil chronically embedded in the skin [ 5 ]. Mbulaiteye suggested that KS may result from enhancement of T-helper type 2 immunity due to chronic schistosome or other parasite infections [ 6 ]. Coluzzi thought that KS may result from alterations of cellular immunity induced by biting flies [ 7 ]. Lastly, Whitby postulated that KS may result from increased KSHV lytic replication induced by contact with phorbol esters or other constituents of plants [ 8 ]. We conducted a population-based study of cKS in Sicily, where KSHV seroprevalence is approximately 10% [ 4 ]. In addition to non-smoking, diabetes, and use of corticosteroid medications, cKS risk was independently increased 2.7-fold with residential exposure to chromic luvisol [ 9 ]. Soils are only one component of a complex ecology that includes insects, microbial organisms, and plants. Herein, we began to dissect these issues by investigating whether cKS, or KSHV serostatus among controls, was related to residential exposure to various soils or to direct contact with plants that have postulated biologic effects.
Materials and methods Population, Specimen and Data Collection Detailed methods for the case-control study of cKS in Sicily during 2002-2006 have been published [ 4 ]. Briefly, incident cases were ascertained from all histopathology laboratories on the island. Population-based controls, aged 30-99 years, were selected using stratified two-stage cluster sampling. As all residents of Italy are assigned to a primary care physician, 450 physicians were randomly selected with probability proportional to the number of patients on the roster. Up to 12 controls, frequency matched to cKS cases by sex and age in 5-year strata, were selected from each roster. Institutional review board approval was obtained from the U.S. National Cancer Institute, local institutions in Sicily (Ragusa and Palermo), and the coordinating center (RTI International). Following signed informed consent, recruited participants provided a blood sample and responses to a standardized questionnaire that included demographic, clinical and exposure variables. Serologic Classification KSHV serostatus was defined using immunofluorescence assays (IFA) for antibodies to KSHV lytic and latent nuclear antigens, as well as enzyme-linked immunosorbent assays (ELISA) for antibodies to the KSHV K8.1 and ORF73 gene products [ 4 ]. Subjects were considered KSHV seropositive if the latent IFA was positive or the K8.1 optical density (OD) was >1.2. KSHV seronegative was defined as latent IFA negative, K8.1 OD ≤ 0.8, and ORF73 OD ≤ 0.8 [ 4 ]. Other controls (n = 59) were seroindeterminate and excluded from the current analysis. Classification of Exposure to Plants Participants were shown color photographs of 20 plants, labeled with common Italian names, and were asked "Have you ever used or had direct contact with this plant?" Participants who answered "yes" were classified as exposed to that plant. If a participant was uncertain about a particular plant, prompts included common uses of the plant. 
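The serologic classification described above is a simple decision rule with an explicit seroindeterminate middle ground; a minimal sketch in Python (the function name and the boolean encoding of the latent IFA result are our own illustrative choices):

```python
def classify_kshv(latent_ifa_positive: bool, k8_1_od: float, orf73_od: float) -> str:
    """Apply the study's stated cut-offs: seropositive if the latent IFA is
    positive or the K8.1 OD exceeds 1.2; seronegative if the latent IFA is
    negative and both ELISA ODs are at or below 0.8; otherwise indeterminate."""
    if latent_ifa_positive or k8_1_od > 1.2:
        return "seropositive"
    if not latent_ifa_positive and k8_1_od <= 0.8 and orf73_od <= 0.8:
        return "seronegative"
    return "seroindeterminate"
```

Subjects falling between the cut-offs (e.g. a negative latent IFA with K8.1 OD between 0.8 and 1.2) land in the seroindeterminate group that was excluded from the analysis.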
The 20 specific plant species (listed in the footnote of Table 2 ) were selected on the advice of local botanists based on the prevalence and likelihood of contact in Sicily, known medicinal or cosmetic uses, toxicities, or genetic relatedness to plants reported to induce KSHV lytic replication [ 8 , 16 ]. The questionnaire quantified cumulative exposure, during adulthood, to each plant in categories (zero, <10, 10-100, 100-1000, >1000 contacts). Classification of Exposure to Soils As described previously, exposure to soil was ecologic [ 9 ]. Briefly, the questionnaire ascertained each participant's community of residence at birth, during childhood (up to age 12), during adulthood (for 10 years prior to study enrollment), and at enrollment. Exact address was not collected. A map with the boundaries of all 390 communities in Sicily was projected onto the soil map of Sicily [ 26 ]. The proportion of each soil was then calculated as the area (the number of pixels) of each soil type in each community. For the current analysis, the previous methods were modified to reduce misclassification of exposure by weighting for population density in each soil area. Population density was estimated by projecting the map of nocturnal illumination of Sicily ( http://ngdc.noaa.gov/dmsp/downloadV4composites.html ) onto the soil and community boundary maps (Figure 1A ). The type of soil in each pixel (approximately 250 m²) was multiplied by that pixel's luminescence (range 0-63, http://www.ngdc.noaa.gov/dmsp/gcv2_readme.txt ) (Figure 1C-E ), generating luminescence-weighted soil values that were summed for each community (Figure 1F ). Statistical Analysis Strategy and Methods The primary objective was to identify cofactors for cKS among people with KSHV infection. The secondary objective was to identify variables that distinguished KSHV-seropositive from KSHV-seronegative people without cKS. 
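The luminescence-weighting step described above amounts to a luminescence-weighted histogram of soil types within each community. A sketch, assuming the maps have been flattened into per-pixel arrays (the function and variable names are ours, not from the original analysis):

```python
import numpy as np

def luminescence_weighted_soil(soil_type, luminescence, communities, n_soil_types):
    """For each community, sum pixel luminescence (0-63) by soil type,
    yielding luminescence-weighted soil values as described in the text.

    soil_type, luminescence, communities: 1-D arrays, one entry per map pixel.
    Returns a dict mapping community id -> array of length n_soil_types.
    """
    out = {}
    for c in np.unique(communities):
        mask = communities == c
        # weight each pixel's soil type by that pixel's night-time luminescence
        out[c] = np.bincount(soil_type[mask],
                             weights=luminescence[mask],
                             minlength=n_soil_types)
    return out
```

A pixel with zero luminescence (uninhabited) contributes nothing, which is exactly how weighting by population density reduces exposure misclassification here.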
To address these objectives, KSHV seropositive controls were used as the referent group, and multinomial logistic regression was used to calculate the odds ratio (OR) and 95% confidence interval (CI) for each variable's association with cKS and, among the controls, with KSHV seronegativity. As described [ 4 ], weights were included in each regression model to adjust for the multi-stage sampling of the controls. Base weights were calculated as the product of the reciprocals of the selection probabilities at each stage of sampling. Non-response-adjusted weights were then calculated by adjusting these base weights within cross-classified categories of age, gender, and (for controls) region (eastern/western Sicily). These non-response-adjusted weights were further adjusted by post-stratification to constrain the weights to reflect the population totals by age, gender and six zones (three community sizes × 2 regions). These non-response/post-stratification-adjusted weights, rescaled to sum to the sample sizes of the cases and controls, are the final sample weights for each participant's data. PROC MULTILOG in SUDAAN statistical software (SAS-Callable SUDAAN Release 10.0.1, Research Triangle Institute) was used to conduct weighted multinomial logistic regression analyses that incorporated the sample weights and accounted for the stratified cluster sampling of the controls. Prior to considering plant and soil exposures, a core model was developed with 5 variables: sex and age category (<68, 68-74, 75-80, ≥81 years) to account for the matching variables, plus diabetes, use of oral or topical corticosteroid medication in the past 10 years, and cigarette smoking (current, former, never). Cumulative time working with plants or soils, previously noted to be associated with elevated KSHV seroprevalence among women [ 9 , 10 ], was considered but not retained in the core model. 
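The weighting chain described above can be summarized in a few lines. This is a hedged sketch only: the actual non-response and post-stratification adjustment factors were derived from the sampling design by the coordinating center, so here they are simply passed in as precomputed arrays.

```python
import numpy as np

def final_sample_weights(selection_probs, response_adjust, post_strat_adjust):
    """Sketch of the weighting chain: base weight = product of reciprocal
    selection probabilities per sampling stage, multiplied by non-response
    and post-stratification adjustment factors, then rescaled so the
    weights sum to the sample size.

    selection_probs: (n_subjects, n_stages) selection probability per stage.
    response_adjust, post_strat_adjust: (n_subjects,) adjustment factors.
    """
    base = 1.0 / np.prod(selection_probs, axis=1)
    w = base * response_adjust * post_strat_adjust
    return w * (len(w) / w.sum())  # rescale to sum to the sample size
```

The rescaling step leaves relative weights unchanged while making the weighted sample size match the actual number of cases or controls, as stated in the text.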
All plant and soil analyses were built on this core model, and all models included the identical participants. To assess confounding, plant and soil models were repeated with exclusion of the one core variable (diabetes) found to be associated with KSHV seronegativity. History of asthma [ 3 ], level of attained education [ 4 ], and both of these were added to the final model to further assess possible confounding or effect modification. To evaluate how exposures to multiple plants might relate to cKS risk, three dimension-reducing methods were employed. Total contacts with all 20 plants, assuming values of 0, 2, 20, 200, and 2000 for each plant for the exposure categories (zero, <10, 10-100, 100-1000, >1000), were summed (range of values, 0 - 23,224) then divided into quartiles for regression analysis. Factor analysis uses covariance relationships among multiple observed variables to generate a few underlying, but unobservable, quantities called factors. Four factors were generated with an orthogonal rotation method (VARIMAX and PROC FACTOR, SAS Institute, Cary, NC) based on the proportion of variance explained in the exposures to the 20 plants. These factors were labeled descriptively (Asteraceae, Euphorbia/Datura/Agave, Hypericum, and food/beverage/gladiolus) based on the interpretation of the factors from their factor loadings. The score for each factor was dichotomized at its median value for inclusion as an independent variable in the multinomial regression analysis. PROC FASTCLUS in SAS was used to partition participants into clusters based on the Euclidean distances computed from the levels of contact with the 20 plants. The uncommon clusters, labeled C (high exposures including Hypericum and Euphorbia) and B (high exposures to plants other than Hypericum and Euphorbia), were compared to the more common cluster (relatively few plant exposures). 
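The first dimension-reducing method above, the cumulative plant score and its quartiles, can be sketched directly (the category labels and function names are our own illustrative choices):

```python
import numpy as np

# values assigned to the exposure categories, as stated in the text
CATEGORY_VALUES = {"zero": 0, "<10": 2, "10-100": 20, "100-1000": 200, ">1000": 2000}

def cumulative_plant_score(responses):
    """Sum the assigned values over one participant's 20 plant-exposure
    category responses (maximum possible: 20 * 2000 = 40,000; the observed
    range in the study was 0 - 23,224)."""
    return sum(CATEGORY_VALUES[r] for r in responses)

def quartile_groups(scores):
    """Assign each participant's total score to a quartile (0-3) for the
    per-quartile regression described in the text."""
    cuts = np.quantile(scores, [0.25, 0.5, 0.75])
    return np.searchsorted(cuts, scores, side="right")
```

The factor-analysis and clustering steps would follow the same pattern but depend on the full covariance structure of the 20 exposures, so they are not reproduced here.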
For 14 typical soils, the likelihood of each participant's exposure was categorized as none (childhood community with zero for soil or luminescence), low (<median of non-zero luminescence-weighted soil value) or high (≥ non-zero median). For two widely distributed soils (lithosol and eutric regosol) that were present in nearly all communities (<200 controls with zero exposure), tertiles of luminescence-weighted values were used. One uncommon soil (gleyic arenosol) was dichotomized as any versus no exposure. Lastly, all 20 plants in levels (zero, <100, ≥100 contacts; except any/none for Datura stramonium , Euphorbia characias euphorbiaceae , Hypericum perforatum guttiferae , and Hypericum hiricinum to which fewer than 20 participants reported ≥100 contacts) and all 17 soils (classified as in the preceding paragraph) were included in a backward-elimination stepwise regression model. In addition to 5 variables in the core model, individual plants and soil with P trend ≤ 0.15 were retained. As a sensitivity analysis, childhood residential soil exposures were substituted with adulthood soil exposures. Overlaps of the soils that were strongly associated with cKS risk were illustrated (Figure 1E and 1F ). In all models, P ≤ 0.05 was considered statistically significant.
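For a typical soil, the none/low/high categorization described above splits exposed participants at the median of the non-zero luminescence-weighted values; a minimal sketch (function name ours):

```python
import numpy as np

def categorize_soil_exposure(values):
    """Categorize luminescence-weighted soil values for one typical soil:
    'none' for zero exposure, otherwise 'low' (< median of non-zero values)
    or 'high' (>= that median), following the rule stated in the text."""
    values = np.asarray(values, dtype=float)
    med = np.median(values[values > 0])  # median over exposed participants only
    return ["none" if v == 0 else ("low" if v < med else "high") for v in values]
```

The two widely distributed soils (tertiles) and the one uncommon soil (any/none) would use analogous but different cut-offs, per the text.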
Results The analysis was restricted to 962 subjects: 122 cases, 752 KSHV seronegative controls, and 88 KSHV seropositive controls with childhood residence in a Sicilian community and with complete data on contact with all 20 plants. From the parent study of 1374 subjects, the 412 excluded subjects included 48 with childhood residence outside Sicily, 299 with incomplete plant data, 3 with incomplete cortisone data, 59 controls with indeterminate KSHV serostatus, and 3 with residence in a community that lacked soil data. Table 1 presents the core model with the distributions for sex and age group (the matching variables) and three cofactors for the 962 included subjects. The associations of cKS with non-smoking ( P trend = 0.05), cortisone use and diabetes were similar to those reported previously [ 4 ]. Cumulative work with plants or soils (none, ≤900 weeks, >900 weeks) was not associated with cKS ( P trend = 0.81) and thus not retained in the core model. Plant and soil associations with cKS Adjusted for the "core model" variables, Table 2 presents the risk estimates for cKS in three models that differ in plant categorization and quantification. In the first model, cKS risk was unrelated to cumulative exposure to all 20 plants [per quartile adjusted odds ratio (OR adj ) 0.96, P trend = 0.87]. In the second model, cKS risk also was unrelated to uncommon types of plant exposures, as represented in cluster B (OR adj 2.10, 95% CI 0.83-5.29) and cluster C (OR adj 0.72, 95% CI 0.19-2.80), compared to the common cluster A. Likewise, in the third model, cKS risk was unrelated to four factors of the plant exposure data, descriptively labeled Asteraceae factor (OR adj 1.12), Euphorbia/Datura/Agave factor (OR adj 0.77), Hypericum factor (OR adj 0.92), and food/beverage/gladiolus factor (OR adj 1.23, range of P = 0.44-0.81). Table 3 presents the five individual plants and six soils that were associated with cKS risk in the elimination regression model. 
No other plants or soils met the criterion of P trend ≤ 0.15, adjusted for the core-model and other variables. Results (not presented) differed negligibly when the model was modified by deleting diabetes or by adding asthma history or attained education level. Three plants were associated with elevated risk. One of these, Taraxacum officinale (dandelion), had a higher odds ratio (OR adj 3.59) with <100 contacts than with ≥100 contacts (OR adj 1.50). The second, Datura stramonium (jimson weed), had a high odds ratio (OR adj 4.26, 95% CI 1.09-16.70) based on only 11 exposed cases. The third, Lupinus albus (white lupine), had a high odds ratio with ≥100 contacts (OR adj 3.58) but marginal significance ( P trend = 0.07). Risk of cKS was significantly lower with Matricaria chamomilla compositae (chamomile, P trend = 0.02), and it tended to be lower with Acanthus mollis (bear's breech, P trend = 0.10). Childhood residence in a community with eutric regosol and/or lithosol was associated with an approximately 8-fold higher risk of cKS ( P trend = 0.01, Table 3 ). Risk also was increased with exposure to chromic and/or pellic vertisol ( P trend = 0.04). Risk of cKS was significantly lower with childhood residential exposure to rendzina ( P trend = 0.01) and orthic luvisol ( P trend = 0.01), and non-significantly lower with vertic cambisol ( P trend = 0.10) and eutric cambisol ( P trend = 0.13). When adulthood, rather than childhood, residential soils were used, the elimination model retained the identical variables shown in Table 3 , except eutric cambisol, which did not meet the P trend criterion. Figure 1 illustrates the geography of one high-risk soil (eutric regosol and/or lithosol), one low-risk soil (orthic luvisol), and the overlap of these. Associations with KSHV serostatus among controls As shown in Table 1 , history of diabetes was much more common in KSHV seronegative compared to seropositive controls (OR adj 4.69, 95% CI 1.97 - 11.17). 
Adjusted for diabetes and the other core model variables, aggregate plant exposure was significantly associated with KSHV seronegativity (Table 2 ). KSHV seronegatives tended to have more cumulative exposure to the 20 plants ( P trend = 0.04), and they were 3-fold more likely (95% CI 1.31-6.92) to be in cluster B (high exposures to plants other than Hypericum/Euphorbia) than in cluster A (relatively few plant exposures). KSHV seroprevalence was not related to cluster C (high exposures including Hypericum/Euphorbia), nor was it associated with any of the four plant factors (Table 2 ). When diabetes was eliminated from the model to test for confounding, the associations of KSHV seronegativity with higher cumulative plant exposure and with plant cluster B were essentially unaltered (results not presented). Except for seronegativity with Taraxacum officinale ( P trend = 0.06) and seropositivity with rendzina ( P trend = 0.02), none of the individual plants or soils associated with cKS was also associated with KSHV serostatus (Table 3 ).
Discussion Our primary objective was to determine whether exposures to plants or soils were associated with cKS. Neither cumulative nor categorical contacts with plants were related to cKS. Cases with cKS did report more contacts with three individual plants, and they were more likely to have residential exposure to eutric regosol and chromic/pellic vertisol. For our secondary objective, we found that KSHV seroprevalence among controls was modestly lower with overall exposure to plants (Table 2 ). This contrasts with our previous observation of higher KSHV seroprevalence with occupational or recreational exposure to plants or soil [ 10 ]. Because the earlier seroprevalence analysis was adjusted only for sex and age group, we examined whether the discrepancy might relate to adjusting for diabetes, which was strongly associated with KSHV seronegativity (Table 1 ). No confounding by diabetes was found. Specifically, exclusion of diabetes from the regression models yielded associations with seronegativity that were almost identical to those presented in Tables 2 and 3 . One soil (rendzina) and one plant ( Taraxacum officinale ) had mirror-image associations with KSHV seroprevalence and cKS risk, which probably appeared by chance. No plant or soil was associated with high seroprevalence and high cKS risk, or with low seroprevalence and low cKS risk. Soils and cKS risk KS risk has repeatedly been associated with soils [ 5 , 9 , 11 , 12 ], and an effect of iron has been proposed [ 13 ]. Table 4 summarizes the characteristics of the six soils associated with cKS risk in our study [ 14 ]. Risk of cKS was elevated in communities with high levels of eutric regosol or chromic/pellic vertisol, all of which are used for cultivation of durum wheat, thereby supporting the higher risk of cKS observed for cereal farmers in Sardinia [ 15 ]. Chromic luvisol was associated with cKS in our previous study [ 9 ] but not in the current one. 
Unlike the previous study, the current one simultaneously considered many plants and soils. Of these, orthic luvisol was strongly associated with decreased cKS risk. Areas with luvisols are widely used for vineyards, orchards and citrus groves. Despite this commonality, chromic luvisol generally has a higher content of iron and kaolinite than orthic luvisol [ 14 ]. Although not offering a simple ecologic pattern, these soil associations can serve to focus future studies. For example, does cKS risk differ by direct contact with eutric regosol versus orthic luvisol? To address this, much better exposure assessment would be needed. We had only residential data and no occupational or other types of soil exposures. In addition, we collected merely the community of residence, not the exact location. We used objective data on population density (Figure 1A ) to improve the assessment of exposure to soils, but lifetime residential history with exact addresses would be highly desirable. Plants and cKS risk Contact with jimson weed ( Datura stramonium ) was associated with a 4-fold higher risk of cKS; it was not significantly associated with KSHV seroprevalence, but the data were sparse. Higher cKS risk with dandelion ( Taraxacum officinale ) contact was nominally significant but inconsistent with respect to dose-response, and dandelion had a marginal association with KSHV seroprevalence that suggests confounding. The 3-fold lower risk with chamomile ( Matricaria chamomilla compositae , P trend = 0.02), which was unrelated to seroprevalence, is noteworthy. Bear's breech ( Acanthus mollis ) had only a marginal association with cKS ( P trend = 0.10). The likelihood of residual confounding by an unmeasured variable, as well as the small numbers of exposed cases (for jimson weed and bear's breech) and the lack of any difference in cKS risk with cumulative or grouped exposures to plants (Table 2 ), implies that the 20 plants that we evaluated are irrelevant to the risk of cKS. 
Notably, we found no associations with any of the four Euphorbia species that we queried, despite the ability of some of their phorbol esters (notably 12- O -tetradecanoylphorbol-13-acetate, TPA) to promote tumor growth and induce replication of herpesviruses in vitro [reviewed in refs. [ 8 ] and [ 16 ]]. We did not ask about contact with durum wheat. Strengths and limitations The strengths of this study include sampling the entire population of the island of Sicily, as well as state-of-the-art KSHV serology and statistical methods. The limitations of our study are several. First, we did not narrowly define an exposure hypothesis. For this reason, exposure was not restricted to dermal contact. Foods and beverages from plants, such as chamomile tea, may have true biologic effects on cKS risk, but they also may be surrogates for socioeconomic or other unmeasured confounding variables. Our latent factor analysis, with one factor heavily weighted to foods and beverages, should have mitigated this problem. Moreover, inclusion or exclusion of education level, as a surrogate for socioeconomic status, did not substantially alter the associations. Second, dermal contact with plants could not be distinguished from dermal contact with soil. Agricultural and gardening work was not related to cKS risk ( P trend = 0.81) [ 9 ], but we did not collect data on cereal farming per se [ 15 ]. Third, the critical exposure time for a true cKS cofactor is unknown. By including plant exposures occurring over several decades, rather than within a few years of cKS onset, we may have missed a true association. Fourth, although this is the largest cKS case-control study thus far, we had sparse data for some comparisons due to relatively small numbers of cKS cases and KSHV seropositive controls. Finally, some of the associations that we found may have arisen merely by chance from the multiple comparisons that we performed.
Conclusions The risk of cKS, compared to KSHV seropositive controls, differed with reported contacts with a few plants and with residential exposure to certain soils. These associations could have arisen by chance due to the multiple comparisons that we performed. Reassuringly, most of these plants and soils were not associated with KSHV serostatus. Future studies might focus on how contacts with farm animals, pesticides and parasites, as well as soils and plants such as durum wheat, affect KSHV viremia, which is strongly associated with risk for KS incidence and progression [ 17 - 21 ]. Associations of cKS and KSHV viremia with human genetic polymorphisms, most of which have not been consistently replicated,[ 22 - 25 ] should also be considered. Understanding these environmental and host interactions will lead to novel insights and means to prevent KS and other herpesvirus-associated malignancies.
Background Ecologic and in vitro studies suggest that exposures to plants or soil may influence risk of Kaposi sarcoma (KS). Methods In a population-based study of Sicily, we analyzed data on contact with 20 plants and residential exposure to 17 soils reported by 122 classic KS cases and 840 sex- and age-matched controls. With 88 KS-associated herpesvirus (KSHV) seropositive controls as the referent group, novel correlates of KS risk were sought, along with factors distinguishing seronegatives, in multinomial logistic regression models that included matching variables and known KS cofactors - smoking, cortisone use, and diabetes history. All plants were summed for cumulative exposure. Factor and cluster analyses were used to obtain scores and groups, respectively. Individual plants and soils in three levels of exposure with P trend ≤ 0.15 were retained in a backward elimination regression model. Results Adjusted for known cofactors, KS was not related to cumulative exposures to 20 plants [per quartile adjusted odds ratio (OR adj ) 0.96, 95% confidence interval (CI) 0.73 - 1.25, P trend = 0.87], nor was it related to any factor scores or cluster of plants ( P = 0.11 to 0.81). In the elimination regression model, KS risk was associated with five plants ( P trend = 0.02 to 0.10) and with residential exposure to six soils ( P trend = 0.01 to 0.13), including three soils (eutric regosol, chromic/pellic vertisol) used to cultivate durum wheat. None of the KS-associated plants and only one soil was also associated with KSHV serostatus. Diabetes was associated with KSHV seronegativity (OR adj 4.69, 95% CI 1.97 - 11.17), but the plant and soil associations had little effect on previous findings that KS risk was elevated for diabetics (OR adj 7.47, 95% CI 3.04 - 18.35) and lower for current and former smokers (OR adj 0.26 and 0.47, respectively, P trend = 0.05). Conclusions KS risk was associated with exposure to a few plants and soils, but these may merely be due to chance. 
Study of the effects of durum wheat, which was previously associated with cKS, may be warranted.
Abbreviations AIDS: (Acquired Immunodeficiency Syndrome); CI: (confidence interval); cKS: (classical Kaposi sarcoma); ELISA: (enzyme-linked immunosorbent assays); IFA: (immunofluorescence assays); KS: (Kaposi sarcoma); KSHV: (KS-associated herpesvirus). Competing interests The authors declare that they have no competing interests. Authors' contributions JJG designed the study, obtained funding, supervised the overall project, and drafted the manuscript. GC managed recruitment, coordinated shipments, and collected the questionnaire data and blood specimens. CD provided the soil map data and the utilization of the soils (Table 4 ). AP processed the blood specimens and performed the KSHV immunofluorescence assays. CP, LAA and CM performed statistical analyses. LRP managed the data and performed the final statistical analyses. MA obtained the luminescence data and constructed the maps. BIG proposed the clustering, factoring, and multinomial logistic regression approaches and supervised the final statistical analyses. AM supervised the processing of specimens and the laboratory activities in eastern Sicily. CL helped to select the plants and supervised field activities in eastern Sicily. NR supervised the laboratory and field activities in western Sicily. All authors contributed to and approved the final manuscript.
Acknowledgements We thank Prof. Francesco Vitale for his steadfast leadership on this project; Prof.ssa M.R. Melati (Dipartimento Scienze Botaniche - Universitá di Palermo) and Dr. Gaudioso, MD, for help in classification of the plants; Dr. Denise Whitby for KSHV serology; Dr. Sam Mbulaiteye for helpful discussions; Dr. Charles Rabkin for reviewing the manuscript; Filippa Bonura, Anna Maria Perna, Fabio Tramuto, Anna Fidilio, Michele Massimino, Stefania Stella, and Georgina Mbisa for specimen processing and KSHV antibody testing; Enza Viviano, Rosalia Valenti, Elisa Martorana, Prof. Lorenzo Gafà, Giuseppe Arena, Gianclaudio Antonelli, Laura Leggio, Giuliana Buscema, Veronica Paparazzo, MariaChiara DiPasquale, Anna Tortorici, Irene Bocchieri, and Laboratorio Brinch-Battaglia for recruitment and collection of data and specimens; the staff at RTI International, including Mary-Anne Ardini, Dr. Barbara Kroner, and Dr. Mansour Fahimi, for coordination, computation of weights, and analyses; and especially Dr. Santo LoGalbo of the Assessorato Regionale Della Salute for providing the population roster for Sicily. This study was supported by the Intramural Research Program of the National Cancer Institute, in part under a contract with RTI International (N02-CP-91027).
CC BY
no
2022-01-12 15:21:36
Infect Agent Cancer. 2010 Dec 2; 5:23
oa_package/ff/35/PMC3014880.tar.gz
PMC3014881
21129222
Background Human papillomaviruses (HPVs) are a family of small (55 nm), icosahedral, non-enveloped viruses with a circular double-stranded DNA genome of 7-8 kbp and a special affinity for epithelial cells [ 1 , 2 ]. Over 200 genotypes of papillomaviruses infect the skin and mucosal surfaces [ 2 ]. The most common oncogenic HPVs are associated with leukoplakia and squamous carcinoma, while the majority of HPV types have an affinity for the skin, oral cavity, genitals, anus and larynx [ 1 ]. Some HPV types are considered high risk, most notably 16, 18, 31, 33, 35, 39, 45, 52, and 58, and have been shown to be a necessary cause of cervical cancer development [ 3 ]. Cervical cancer is a major public health problem around the world; in some developing countries it is the most frequent female cancer, as well as the main cause of cancer-related death among women [ 4 , 5 ]. The potentially oncogenic HPVs have been associated with oral squamous cell carcinoma [ 6 , 7 ]. Some evidence has linked orogenital contact with the transmission of papillomavirus from the genital zone to the oral cavity [ 8 ]. It has been suggested that the frequency of oral HPV infection differs from that of cervical infection and is associated with age [ 9 ]. Other studies have detected the presence of HPV in the epithelium of the oropharynx in women with genital HPV by cytological examination using the Papanicolaou technique [ 10 ]. These studies point to the possibility of a natural reservoir of HPV at a locus outside the genital region, which could serve as a focus of reinfection [ 10 ]. Additionally, HPV is rarely present in the vagina of virgin women, even with the use of tampons or digital penetration [ 11 ]. Despite the recognition of HPV-associated oral malignancy, it is unclear to what extent cervical HPV infection is translocated to the oral cavity. 
Previous studies have suggested that oral HPV infection, analogously to cervical infection, is associated with sexual behavior and immunosuppression [ 12 , 13 ]. The majority of past studies of buccal HPVs explored the relationship between HPV and the development of oral cancer; these studies detected HPV in DNA extracted from the oral cavity of patients with oral lesions and/or abnormalities [ 14 - 18 ] and showed that HPVs that infect the genital area can also infect the oral cavity [ 19 ]. In healthy Japanese subjects, HPV was present in the oral cavity of 0.6% of the population [ 20 ]. Other studies have proposed that mothers may serve as the source of infant HPV infection, which suggests the possibility of non-sexual transmission of the virus [ 21 ]. In the few studies in which oral and anogenital HPV infections were analyzed, the frequency of oral HPV infection appeared to be lower than that of anogenital infection [ 16 , 17 , 22 , 23 ]. However, oral and cervical HPV prevalence were similar in a small group with a high prevalence of oral or anogenital condylomata [ 16 ]. These studies were performed in a high-risk population; therefore, the relationship between HPV infection in the cervix and the oral cavity in relation to oral sex practices remains unexplored. The aim of this study was to determine the frequency of HPV in the oral cavity of a population of Mexican women with a histopathological diagnosis of cervical lesions and to describe the viral infection in relation to oral sex practices and the habit of sharing personal objects such as toothbrushes.
Methods Patients and sampling This study was performed in the summer of 2008. All women had received a CIN diagnosis within the six months prior to the study. The inclusion criteria were: any woman attending the "Clínica de Displasias" for any cervical problem, regardless of origin or place of living (Mexico or USA), who signed an informed consent and completed a questionnaire about her sexual habits. The exclusion criteria were smoking and alcoholism. All women studied were over eighteen years old. The full protocol was approved by the Ethics Committee of the Universidad Autonoma de Ciudad Juarez and the Clinica de Displasias. Women were asked to self-sample both cheeks with one cotton swab and the palate/gum (P/G) with another. In both cases the tissues were rubbed for a minute. Cotton swabs were immersed in a 15 mL tube containing 1 mL of transport medium (10 mM Trizma, pH 8.8; 1 mM EDTA; 0.01% sodium azide; 50 μg/mL ampicillin; 1 μg/mL proteinase K) and stored at -20°C within 24 h of collection. DNA extraction and PCR for HPV Samples were thawed and centrifuged at 3,500 rpm in a clinical centrifuge; the cotton swab was removed and the transport medium transferred to a 2 mL microtube, which was centrifuged for 5 min at 3,500 rpm at 4°C. 300 μL of clear supernatant were taken into a new tube; 25 μL of 5 M sodium acetate and 1 mL of isopropanol were added consecutively, and the tube was centrifuged at 14,000 rpm at 4°C for 5 min. The pellet was washed with 1 mL of 70% ethanol and dried overnight at room temperature, then dissolved in 100 μL of rehydration solution (Promega A7963) and incubated at 65°C for 20 min. PCRs for generic HPVs were assembled with 5 μL DNA, 12.5 μL 2X GoTaq Green Master Mix (Promega), 5 μL MY11/MY09 primer mix at 2.5 μM each, and 2.5 μL water. PCR conditions were 40 cycles at 94°C for 60 s, 55°C for 60 s, and 72°C for 60 s, with an initial denaturation at 94°C for 5 min and a final extension at 72°C for 7 min [ 33 ]. 
PCR products were examined in 2% agarose gels using a base-pair size standard (Promega G7521). PC04 and GH20 primers for the human beta-globin gene were used as internal controls. For HPV16 and HPV18, specific primers were used as described elsewhere [ 34 , 35 ]. Statistical analysis Questionnaire answers, previous histopathological results, and buccal HPV results were analyzed with SPSS statistical software version 11 and the Mann-Whitney U test [ 42 ].
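The Mann-Whitney U statistic named above can be sketched in pure Python; this is a minimal illustration with hypothetical ordinal scores, not the authors' SPSS analysis, whose exact tie and continuity handling may differ.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Pools the observations, assigns 1-based ranks (ties receive the
    average rank), and derives U from the rank sum of the first sample.
    """
    pooled = sorted(a + b)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        for k in range(i, j):
            rank[pooled[k]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    r_a = sum(rank[v] for v in a)               # rank sum of sample a
    u_a = r_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)                        # conventional test statistic

# Hypothetical ordinal lesion scores for two patient groups:
print(mann_whitney_u([1, 3, 5], [2, 4, 6]))  # → 3.0
```

The statistic alone does not give a p-value; in practice it is compared against the Mann-Whitney null distribution (or its normal approximation), which is what a package such as SPSS reports.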
Results This study included 46 voluntary non-smoking, non-alcoholic women attending the "Clínica de Displasias del Sector Salud" in Ciudad Juarez, México. The female subjects attended this clinic because of previous CIN alterations. After they had signed the informed consent and completed the questionnaire, the patients were instructed by demonstration to self-sample the oral cavity. Once in the laboratory, and after DNA extraction, the human β-globin gene was amplified as an internal control for the presence of human DNA of sufficient quality for PCR. Cases that failed β-globin amplification were excluded, whereas samples that tested positive for β-globin were further assayed with generic HPV primers that amplify a wide range of HPV types. After generic HPV positivity, specific PCRs for HPV16 and HPV18 were performed. Two regions of the oral cavity were sampled: the buccal mucosa, and the palate and gum together (P/G). All women studied carried generic HPV either in the mucosa or the P/G, giving a frequency of 100% for the buccal cavity. However, if the mucosa and P/G are considered separately (Table 1 ), the percentages of HPV infection were as follows: generic HPV, 86% in the buccal mucosa and 88% in the P/G; HPV16, 23% in the mucosa and 16% in the P/G. The higher frequency of HPV16 in women having oral sex would suggest a higher risk; however, the numbers are too small to establish any significant risk. Sixty-three percent (63%) of the studied women were married, 19% were in common-law marriage, and 7% were single. The mean age was 35 years, ranging from 19 to 63 years (not shown in tables). Table 2 shows the results of the applied questionnaire. Generic HPV was detected in all cases in the palate-gum, the buccal mucosa, or both. However, HPV16 was detected in 35% of all patients, and 73% of the patients stated that they practiced oral sex frequently. 
No association was observed between the presence of HPV16 and the frequent practice of oral sex. Of all the women, 53% practiced mutual oral sex (fellatio and cunnilingus, Table 2 ). This group had the highest frequency of HPV16 (53%) among the patients who stated that they practiced oral sex. Interestingly, the only three women who practiced fellatio but did not receive cunnilingus (7%) were generic HPV positive but HPV16 negative. All women had been diagnosed with CIN during the previous six months (according to the Bethesda classification); an association between oral HPV16 positivity and CIN progression was evident in 51% of cases (Mann-Whitney U test, p = 0.023), followed by 28% with inflammatory alterations, while 21% did not present cervical alterations (Table 2 ). Two women who had responded positively to treatment by the time of the gynecological visit were observed; they had no cervical alterations but were HPV16 positive. The majority of the patients stated that they did not use condoms while practicing oral sex, and 60% of them were buccal HPV16 positive. Apparently, the use of condoms during oral sex would prevent infection of the oral mucosa (Table 2 ). The largest group of patients (47%) declared having only one sex partner, 23% had two partners, and 2% had more than two partners (Table 2 ). The majority (53%) of the women stated that they did not share spoons, toothbrushes, or candy (Table 2 ), but an important fraction (40%) admitted to sharing those objects occasionally (Table 2 ).
Discussion The reported prevalence of HPV in normal oral mucosa varies greatly because of differences in sample types, collection, detection methods, level of sensitivity, PCR primers used, and PCR inhibitors [ 10 , 15 , 22 - 32 ]. A previous study showed a high prevalence of oral HPV infection (81%) among healthy people in Japan [ 20 ]. In this study, we performed HPV detection by PCR using the MY09/MY11 primer pair, which is widely used in epidemiological studies and has been shown to be as sensitive as confirmatory nested PCR with GP5+/GP6+ primers, with a correlation of 94% [ 33 - 35 ]. Generic HPV in our sampled population had a frequency of 100%. The women attending the "Clinica de Displasias de Ciudad Juárez" can be considered a high-risk population, since they came to this clinic because of some cervical abnormality. It is notable that all of these women were positive; taking into account that they had some cervical problem, this number is comparable to the 81% found in the oral cavity of healthy Japanese subjects and the 90% buccal HPV among genital-HPV-positive Brazilian women [ 7 , 20 ]. Buccal presence of HPV16 in our study was 35% (Table 2 ). However, we observed differences by the anatomic region of the oral cavity sampled: 23% in the buccal mucosa versus 16% in the P/G. In sum, this HPV point prevalence was higher than those of other studies of the normal oral cavity, including high-risk populations [ 20 , 26 , 30 ]. The reasons for such a high HPV prevalence in the normal oral cavity of women in Ciudad Juarez have not yet been explored. Further studies are needed to clarify this, as well as the differences in HPV infection at each normal oral site and the use of condoms, along with preventive vaccination, as strategies to prevent infection of the oral mucosa. We evaluated HPVs only in buccal samples. 
In cancers of the upper aerodigestive tract, however, HPV has been detected predominantly in the oropharynx and tonsils [ 29 , 35 ]. A risk factor for oral HPV infection and for the presence of HPV in the normal cervix is an increased number of sexual partners [ 10 , 20 , 28 ], but in our study the number of sex partners did not seem to increase the risk of oral HPV16 infection. Oral sex practices may affect oral and cervical sites similarly. Local factors that have been found to influence the persistence of HPV in the cervix may affect the natural history of oral HPV infections. These factors include coinfection by Chlamydia trachomatis [ 30 , 31 ] or herpes simplex virus [ 32 ], smoking [ 36 ], age [ 37 ], HPV type [ 38 ], and the use of hormones and contraceptives [ 23 , 36 , 39 ]. However, none of these reports implemented a questionnaire to identify the factors behind the transmission of HPVs. We collected information about the sexual behavior of the patients to examine the relationship between cunnilingus and the incidence of oral HPV infection. In our study, HPV16 in the mouth was significantly associated with genital CIN progression (Mann-Whitney U test, p = 0.023), suggesting that women with persistent HPV16 infections and progression to advanced genital lesions have a higher risk of HPV16 detection in the oral mucosa. We found HPV16 infection at a mean age of 37 years. As this type of HPV is related to cancer, it is considered a high-risk mucosal virus type [ 40 ], and this phenomenon should be known to dentists and pathologists. The role of men as possible vectors of HPV has been discussed previously [ 41 ]. Our results suggest that transmission of HPV occurs not only via sexual contact but also through oral contact. 
The fact that the women who stated that they did not receive cunnilingus were HPV16-free probably reflects that their partners are not translocating the virus from the female genitals to the mouth, and that autoinoculation may be more frequent than we think. In a cross-sectional study by Fahkyr [ 23 ], oral HPV infection was found to be less prevalent than cervical HPV infection in both HIV-positive and HIV-negative women: oral HPV infections were detected in approximately 25% of HIV-positive women and 9% of HIV-negative women. Our results showed a much higher prevalence in the oral cavity of women with or without cervical lesions than the study by Fahkyr [ 23 ]. These data suggest that the oral cavity may be a reservoir of HPV infection with a sufficiently high prevalence to affect the dynamics of HPV transmission between populations. Several aspects of the relationship between cervical and oral HPV remain poorly described. A prospective study is warranted to clarify the interrelationship between HPV infections at both sites and to understand possible differences in incidence and in the factors affecting clearance or persistence in the oral cavity and the cervix. These include differences by anatomic site in HPV prevalence and HPV type distribution. Also unknown are the prevalence of single and multiple oral or cervical infections and whether multiple infections are type concordant. Our data hint that the oral cavity should be considered a potential reservoir for HPV and may not be entirely independent of the cervical reservoir or of the partner's genitalia. In this study we did not sample the partners' genitalia, but we assume that both genitalia are infected, since these are considered sexually transmitted infections. Thus, the presence of the virus in the oral cavity should be explained not only by oral contact with genitalia but also by autoinoculation. 
We are aware that prevalence data for oral HPV DNA detection differ owing to differences in populations and methodologies. Consequently, comparative studies will be required, in parallel with studies of the relationship between incident squamous epithelial lesions and persistent oral HPV. This will help to clarify the involvement of HPV in the development of oral cancer as well as the role of the oral cavity in cervical infection.
Conclusions In conclusion, this study suggests that women with genital HPV infection also have some type of HPV infecting their oral cavity, and that HPV16 detection in the mouth is associated with HPV16 persistence in the genital tract and CIN progression.
Background Previous studies have either investigated the relationship of HPV with oral cancer or the prevalence of HPV in the oral cavity. The purpose of this investigation was to study the prevalence of HPV in the oral cavity of women with oral sex practices and cervical lesions. Methods Forty-six (46) non-smoking, non-alcoholic patients who attended the "Clínica de Displasias" of Ciudad Juarez were sampled. This population had received a CIN diagnosis within the previous six months. After giving consent, they filled out a questionnaire about their oral sex practices. One swab was then taken from the cheeks and another from the palate/gum; PCR was used to detect generic HPV, HPV16, and HPV18. Results Seventy-two percent (72%) of the patients stated that they practiced oral sex regularly, and all of them were positive for HPV in the oral mucosa, the palate/gum, or both. Overall, 35% had HPV16, distributed as 26% with regular oral sex practices and 9% who stated that they had never practiced oral sex. An association was found between oral HPV16 positivity and progression to advanced CIN lesions. HPV18, on the other hand, was not detected. The frequency of HPV16 was higher in the buccal mucosa (23%) than in the palate/gum (16%). Conclusions This study suggests that buccal HPV16 infection is associated with CIN progression.
Competing interests None of the authors have any conflict of interest related to this publication or during the previous 2 years. Authors' contributions LS carried out the molecular genetic studies, participated in the sampling and in questionnaire design and analysis, and drafted the manuscript. CD carried out the selection of patients, sampling, questionnaire application, and analysis of data. AM conceived the study and participated in its design, statistical analysis, writing, and coordination. All authors read and approved the final manuscript.
Acknowledgements This study was supported with funds from UACJ, the "Clínica de Displasias", and the United States-Mexico Minority Health Interdisciplinary Research Training Program, http://www.nimhd.nih.gov/our_programs/mhirt.asp . We also thank Vanessa Mendez Galindo and Ana Gabriela Padilla Galindo for proofreading. Part of this work was made possible by student internship stipends from the same program. All authors thank UACJ and the Clínica de Displasias, Ciudad Juarez, Mexico.
CC BY
no
2022-01-12 15:21:36
Infect Agent Cancer. 2010 Dec 4; 5:25
oa_package/e8/42/PMC3014881.tar.gz
PMC3014882
21126337
Background Two surgical options have been described for the treatment of colon injuries, each with advantages and disadvantages: (a) those that include some type of fecal diversion, known as two-stage management, and (b) primary repair. Based on surgical experience in the Second World War, the two-stage procedure remained the standard treatment for the next 35 years [ 1 ] in spite of insufficient scientific evidence. In the late 1970s, Stone and Fabian [ 2 ] performed the first prospective randomized controlled trial using primary repair for colonic injuries in selected cases. They defined the so-called "Stone and Fabian" exclusion criteria for primary repair of colonic injuries. These criteria were questioned and modified by Flint and Vitale [ 3 ] in 1991, when a more liberal attitude toward primary repair emerged, based on substantial improvements in intensive care and data from non-selected, randomized controlled trials. In 1999, Curran and Borzotta [ 1 ] reviewed 5400 cases of civilian colon injuries in which more than half of the patients received primary repair. The exclusion criteria were re-evaluated again, leading to the conclusion that most previous reports were based on highly subjective surgical estimation of risk factors, so that primary repair could be performed in a consecutive series of patients without any exclusion criteria [ 4 , 5 ]. Prospective randomized trials performed in the period 1995-96 compared the results of primary repair with the two-stage procedure without using exclusion criteria [ 6 , 7 ]. They found that mortality and morbidity from abdominal sepsis were either similar or slightly lower in the primary repair group, leading to the conclusion that only a Penetrating Abdominal Trauma Index (PATI) > 25 is associated with a slightly higher complication rate. In studies of a nonselective randomized approach, Gonzales [ 7 , 8 ] concluded that all civilian injuries should be treated by primary repair. 
Numerous observational (Class 2) and retrospective (Class 3) studies [ 9 - 11 ] found better results for primary repair compared to the two-stage procedure, but there is a lack of randomized, Class 1 studies. The problem of extensive colon injuries and the criteria for the method of repair remain controversial [ 12 ]. The aim of this study was to investigate the possibility of expanding the indications for primary repair of colon injuries using a nonselective approach.
Methods This study was designed as a retrospective and prospective evaluation of the two-stage procedure and primary repair in colon trauma management. Two groups of patients, one treated with a selective approach and the other with primary repair, were analyzed in order to compare morbidity and mortality. The study was approved by the ethics committee of the Clinical Centre of Montenegro. Due to the severity of the injuries and the need for urgent surgical treatment, it was not possible to seek informed consent from the patients; as a result, written informed consent for participation in the trial was sought from the next of kin. The RS group included 30 patients (25 males and 5 females) with colon injury, treated in the Clinical Centre of Montenegro (CCM), Podgorica, in the period 1995-2000. All patients in this group had war injuries, and in all cases a selective approach was used to decide on the method of repair. The PR group included 33 patients (29 males and 4 females) managed by the same surgical team in the period 2000-2005. In this group, exclusion criteria were not used, with the intention of primary repair in every case. The mean age in the RS group was 36.8 years (SD 14.61, SE 2.66) and in the PR group 41.3 years (SD 12.18; SE 2.17), with no statistical difference (T = 1.39; p > 0.05). The etiology of colon injuries varied between the two groups (Figure 1 ). Iatrogenic injuries were more frequent in the RS group (χ2 = 3.997), while stab wounds were more frequent in the PR group (χ2 = 3.967), but the overall distribution remained balanced (p > 0.05). Isolated abdominal wounds were the most common in both groups, 20 (66.8%) in the RS group and 19 (57.9%) in the PR group, followed by combined abdominal and chest injuries, 6 (20%) in the RS group and 7 (21.2%) in the PR group. The overall distribution of concomitant injuries was similar in both groups (χ 2 = 1.047; p > 0.05). In the RS group, the main selection criteria were: a. 
Trauma severity scores and indexes: the Trauma Score (TS) [ 13 ], the Injury Severity Score (ISS) [ 14 ], and the Penetrating Abdominal Trauma Severity Index (PATI) [ 15 ]; b. Evaluation of general condition and abdominal findings at laparotomy: the Stone and Fabian [ 2 ] criteria for primary repair of colon injury (SF) and the Flint grading [ 3 ] of contraindications for primary repair of colon injury (Fl). The selection criteria were used in each case to decide between primary repair and a diversion procedure. In the period 2000-2005, based on the encouraging experience from the RS group, all patients with colon injury were treated with primary repair, without any selection criteria except advanced peritonitis and multisegmental injuries of the colon with impaired blood supply, which are generally accepted as contraindications for primary repair. The procedure of trauma management was as follows: after initial diagnostic and resuscitation procedures, patients were operated on without delay. In cases with associated multiple injuries, treatment was conducted according to priority. The policy of primary repair included direct suture or resection with primary anastomosis. Antibiotic prophylaxis with third-generation cephalosporins and metronidazole was a standard part of the procedure. Patients were discharged from hospital after restoration of digestive function and abdominal wound healing, usually on the 12th to 14th postoperative day. In all lethal cases an autopsy was performed. Trauma severity scores and indexes were calculated according to the methods described in the literature. Standardized statistical tests were used to determine group variables. For comparisons between groups, the T-frequency comparison test, Pearson's χ 2 test, Fisher's exact test, and the test of variances were applied.
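The Pearson's χ² comparisons used throughout this study (e.g. for the frequency of iatrogenic injuries between groups) can be illustrated with a minimal pure-Python sketch for a 2×2 contingency table; the counts below are hypothetical, and no Yates continuity correction is applied, so a statistics package may report a slightly different value.

```python
def chi_square_2x2(table):
    """Pearson's chi-square statistic for a 2x2 contingency table
    (no continuity correction)."""
    rows = [sum(r) for r in table]            # row marginal totals
    cols = [sum(c) for c in zip(*table)]      # column marginal totals
    n = sum(rows)                             # grand total
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: complications vs. no complications in two groups.
print(round(chi_square_2x2([[20, 10], [10, 20]]), 3))  # → 6.667
```

The resulting statistic is compared against the χ² distribution with 1 degree of freedom to obtain a p-value; for small expected counts, Fisher's exact test, also used in this study, is the usual alternative.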
Results The mean time between injury and admission to surgery (latent time) was 3.1 hours in the RS group (SD 3.41; SE 0.6) and 1.38 hours in the PR group (SD 1.18; SE 0.24), revealing a significant difference (T = 8.31; p < 0.01) in favor of the PR group. Trauma severity indexes showed no statistical difference between groups (p > 0.05), as shown in Table 1 . There was a statistical difference in PATI score between the groups in the category of three and four injured abdominal organs (Table 2 ), in favor of the PR group (T = 3.983 and 3.645). There was no statistical difference between the RS and PR groups in Stone/Fabian criteria. Flint grading was higher in the PR group in the category of four organs injured (T = 3.124; p < 0.05). However, the overall balance in the indexes of local trauma remained similar in both groups (χ2 = 1.378, p > 0.05). The distribution and severity of colon injuries were balanced between the groups (RS vs. PR): ascending colon (8:6), transverse colon (5:6), and sigmoid colon (5:5). The most frequent associated injuries (RS vs. PR) were: small intestine (13:9, χ2 = 1.83, p > 0.05); spleen (2:9, χ2 = 4.62, p < 0.05); kidney (5:7); liver and diaphragm (5:5); retroperitoneal hematoma (4:3); and stomach (4:2). The incidence of injuries of the duodenum, pancreas, urinary bladder, ureter, and caval vein ranged from 1 to 2, and the overall distribution in both groups remained balanced (T = 0.53, p > 0.05). There was no difference between the RS and PR groups in the number of contraindications for the primary repair procedure (F = 1.924, p > 0.05). Primary repair was more frequent in the PR group (F = 6.115, p < 0.05). In two cases, complications of primary repair in the RS group required conversion to the two-stage procedure, resulting in two deaths. In the PR group there were no anastomotic complications necessitating relaparotomy, but the number of deaths in the subgroup of primary repair was higher (χ2 = 1.145, p > 0.05), as shown in Table 3 . The outcome of primary repair showed no statistical difference between the two groups (χ2 = 1.034, p > 0.05). 
The same was found for the two-stage procedures (χ2 = 1.287, p > 0.05), as well as for the overall success rate of both procedures (χ2 = 0.22, p > 0.05). The significantly more common use of primary repair in the PR group (n = 33; χ2 = 8.27; p < 0.05) resulted in a higher success rate (χ2 = 4.487, p < 0.05). There were two anastomotic leakages in the RS group necessitating relaparotomy: a bipolar colostomy (exteriorization) was performed in the first case and a Hartmann procedure in the second. In the PR group, two patients had to be reoperated due to complications of associated abdominal injuries (a local abscess after pancreatic resection, and a pararenal abscess), both without signs of anastomotic leakage and with a favorable outcome. Surgical procedures and results are shown in Table 4 . Postoperative mortality was higher in the PR group (Fisher's test = 0.045, p < 0.05). In this group, all deaths were caused by complications of associated injuries, without signs of anastomotic leakage. One death in the RS group was caused by anastomotic leakage after right hemicolectomy.
Discussion Concerning civilian colon injuries, in 1993 Keighley [ 16 ] stated "... in experienced hands, using a very selective policy in low risk patients, repair of single laceration in two layers, after excising any irregular edges, appears to be optimal surgical approach", thus supporting the policy of primary repair for right colon injuries and a diversion procedure for left colon injuries. Nowadays, there is a definite trend toward increased use of primary repair in the management of all penetrating colon injuries, independently of their localisation [ 17 ]. Numerous prospective randomized trials have compared primary repair to the diversion procedure and demonstrated no significant difference in complication rates between groups [ 9 , 18 ]. Several recent reviews [ 19 - 21 ] analyzed the role of primary repair in the treatment of colon injuries and pointed out that, under conditions of similar intensity of general and local trauma and similar intraoperative findings, primary repair had better results regarding complications, deaths, and final outcome. Controversy remains only in cases of destructive colon injuries requiring resection, namely whether they should be treated with or without a diversion procedure. According to the AAST results of a prospective multicenter trial [ 10 , 19 ], three risk factors for intraabdominal septic complications, independent of the method of repair, were identified: severe fecal contamination, transfusion of more than 4 blood units, and single-agent antibiotic prophylaxis. However, the concept of "severe fecal contamination" has not yet been clearly defined. The same author [ 10 ], comparing data from other reports, could not strongly support even these 3 criteria and stressed that there are only two main indications for performing the two-stage procedure: severe colon edema (whatever the cause) and questionable colon blood supply [ 19 , 20 ]. In this study, a nonselective approach in favor of primary repair was used, with very limited contraindications for primary repair. 
The mean latent time was shorter in the PR group, which could account for its more favorable results. However, the short latent time could also have contributed to the two early deaths in the PR group, because with a longer delay these patients would not have reached the surgical service at all, due to the severity of their associated injuries. The etiology of colon injuries was quite similar in both groups, with differences in the categories of iatrogenic injuries and stab wounds (Figure 1 ). In most cases these injuries were similar in terms of severity of local and general trauma, so the overall data were balanced. The intensity of general trauma and its distribution over other body regions and organs were similar in both groups. Abdominal trauma severity indexes (Table 2 ) were essentially similar in both groups, as was the number of injured organs. The PATI score was slightly higher in the PR group in the category of three and four organs injured (in both groups PATI > 25). Flint grading was higher in the category of four organs injured. The segmental distribution of colon injuries, as well as wound severity (Tables 2 and 3 ), were equal. According to the Stone/Fabian criteria, both groups were equal (Table 3 ). Primary repair was performed in 60% of cases in the RS group and in 90.9% in the PR group. The higher success rate of primary repair in the PR group (F = 6.034, p < 0.05) was obtained mainly where the S/F criteria were ≥3. This was probably the result of more liberal use of primary repair in the higher categories of SF criteria, which is also supported by recent literature [ 21 ]. There was no significant difference between the groups in the percentage of attempted and successful primary repairs in the lower categories of S/F criteria. The incidence of primary suture was the same in both groups (χ 2 = 2.56), but there were more resections with primary repair in the PR group, thus achieving overall success in 25 of 30 attempted cases (F = 7.124, p < 0.05). 
Two severe complications were registered in each group, but in the RS group they required conversion to the two-stage procedure. In the RS group there was one more conversion procedure, with a lethal outcome. The complications in the PR group were caused by associated injuries, did not require a conversion procedure, and ended favorably. Mortality was higher in the PR group (p = 0.045). There were 3 early postoperative deaths (two in the one-stage category and one in the two-stage category) caused by severe injuries of other organs. There were also 3 late postoperative deaths, but none of them was caused by the colon injury. Analyzing the unsuccessful cases together (complications and deaths), there was no statistical difference between the two groups (χ 2 = 0.859, p > 0.05).
Conclusions Based on our experience, we believe that the policy of primary repair of colon injuries can be applied more liberally, in the majority of patients, with a high success rate.
Background This study was designed to determine the role of primary repair and to investigate the possibility of expanding the indications for primary repair of colon injuries using a nonselective approach. Methods Two groups of patients were analyzed. The retrospective (RS) group included 30 patients managed by primary repair or a two-stage surgical procedure according to the criteria published by Stone (S/F) and Flint (Fl). In this group, 18 patients were managed by primary repair. The prospective (PR) group included 33 patients with primary repair as the first-choice procedure. In this group, primary repair was performed in 30 cases. Results The groups were comparable regarding age, sex, and indexes of trauma severity. The time between injury and surgery was shorter in the PR group (1.3 vs. 3.1 hours). Stab wounds were more frequent in the PR group (9:2), and iatrogenic lesions in the RS group (6:2). Associated injuries were similar, as was the segmental distribution of colon injuries. S/F criteria and Flint grading were similar. In the RS group, 15 primary repairs were successful, while in two cases relaparotomy and colostomy were performed due to anastomotic leakage. One patient died. In the PR group, 25 primary repairs were successful, with 2 immediate and 3 postoperative (7-10 days) deaths, with no evidence of anastomotic leakage. Conclusions The results of this study justify more liberal use of primary repair in the early management of colon injuries. Trial registration Current Controlled Trials ISRCTN94682396
Competing interests The authors declare that they have no competing interests. Authors' contributions RL consultant surgeon, conceived of the study, and participated in its design and coordination and helped to draft the manuscript GB conceived of the study, and participated in its design and coordination and drafted the manuscript ZK participated in the study design and helped to draft the manuscript All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-230X/10/141/prepub
BMC Gastroenterol. 2010 Dec 2; 10:141
Background In early 2005 we conducted a study in the two most populous regions of Québec province (Montréal and Montérégie) which examined the association between prevailing models of primary healthcare (PHC) and population-level experience of care [ 1 ]. This study followed the launching of two reform policy initiatives by Québec's Ministry of Health and Social Services: the creation of Family Medicine Groups (FMG) and the establishment of Local Services Networks (Local Networks) under the governance of Health and Social Services Centres [ 2 ]. FMGs were established to increase accessibility and continuity of care, while Health and Social Services Centres (Local Centres) aimed at better coordinating and integrating services by creating territorially-defined Local Networks. Although these policies were proposed in 2002 and 2004 respectively, implementation only began, for the most part, in 2005, coinciding with the conduct of the aforementioned study. Four years later, both reforms are well established, and the question arises of how PHC models have evolved, what factors have promoted the evolution of PHC organizations, and how this evolution has translated into measurable effects at the population level. The decision-makers of the two regions have approached our research team to explore these questions. The study we conducted in the early phase of implementation of these reforms will provide us with a reference point for assessing the evolution of PHC organizations over a five year period. The study's goal is to assess the evolution of PHC organizations through the reform, identify factors associated with this evolution, and evaluate its association with the performance of PHC organizations and Local Networks. 
The knowledge generated by this study will help to further PHC reorganization efforts in various jurisdictions by improving our understanding of the factors that can promote organizational change and of the impact of this change on population-level experience of care. Our project team includes researchers and decision-makers engaged in the co-production of relevant information in order to guide PHC reforms and optimize PHC service provision. By providing sound evidence for decision-makers and clinicians regarding factors related to the transformation of PHC organizations, we aim to support the implementation of PHC reform efforts and thus improve the performance of the healthcare system in addressing the healthcare needs of Canadians. The current reform of PHC organization in Québec Health and Social Services Centres (Local Centres) have been created by law [ 3 ], merging acute care hospitals, long-term care hospitals and Local Community Services Centres (CLSC) on a geographical basis. Their main objective is to lead the implementation of Local Networks and to increase collaboration among PHC organizations through the creation of these networks [ 4 ]. The Local Networks are composed not only of the facilities merged under Local Centres but also of all other health and social services providers, including privately owned medical clinics. There are 95 Local Centres and Networks in Québec, 12 in Montréal and 11 in Montérégie. Local Centres and Networks vary in composition, since some have acute care hospitals while others do not. In addition, Local Centres enjoy considerable autonomy in the planning and organization of services and activities. The FMG policy consists mostly of a contractual agreement between PHC clinics and the provincial government. PHC organizations receive complementary funding in exchange for complying with certain organizational requirements identified in the FMG policy (e.g. extended opening hours). 
In addition, each FMG has a contractual agreement with Local Centres that enables it to benefit from the presence of a nurse. An FMG consists of 6 to 10 physicians who work together with nurses to provide services for registered members of the group, on a non-geographical basis (usually around 10,000 to 20,000 people per FMG). An FMG provides services both by appointment and on a walk-in basis. It aims to be accessible 24 hours a day, 7 days a week, through opening hours that extend into the evening (until 9:00 p.m.) and weekends (at least 4 hours), and through a regional on-call system (Info Health line) for vulnerable patients when the clinic is closed. The target established at the start of the reform was to implement 300 FMGs in the province. As of March 2009, there were 181 accredited FMGs in Québec, 42 in Montréal and 55 in Montérégie. A complementary model of organization currently being implemented in the regions under study is the Network Clinic. These clinical settings are targeted more specifically at ongoing and integrated management of clients, particularly those considered "vulnerable", and at providing access to basic technical support, such as radiology, blood tests, and specialists [ 5 ]. Their creation was initiated by the Montréal Regional Health Agency as a complement to FMGs, in response to requests by the regional medical association. A clinic can concurrently have the status of FMG and Network Clinic, thus benefiting from two sources of funding. As of March 2009, there were 36 Network Clinics in Montréal, among which twelve had both FMG and Network Clinic status. A recently completed research project We recently completed the research project Accessibility and Continuity of Care: A Study of PHC in Québec, which was conducted in two regions in the province--Montréal and Montérégie [ 1 , 6 ]. 
It looked at organizational models of primary healthcare and their influence on accessibility and use of health services by the population, as well as the experience of users of these services. The main objective of the study was to identify organizational models of PHC that are best adapted and most likely to meet the population's needs and expectations. The research included three components: 1) a survey of the population designed to measure utilisation of health services as well as users' perception of the accessibility, continuity, comprehensiveness, responsiveness and perceived results of services received [ 7 ]; 2) a study of PHC clinics that aimed to describe the PHC organization models in the regions studied [ 8 ]; 3) a contextual analysis that sought to describe Local Networks [ 9 ]. We identified five models of PHC organizations. Four were professional models: a single-provider model, a contact model (walk-in clinics), and two coordination models, one integrated and the other not integrated into the overall healthcare system; the fifth was a community-oriented model. Overall, the integrated coordination and single provider models were associated with better patient experience of care, followed by the community-oriented model. The contact professional model was associated with the worst experience of care across all measures [ 1 ]. What does the literature tell us about PHC organizations? Recent studies have focused on models of care, or ways to organize clinical services, that promote more accessible, coordinated, patient-centered care with emphasis on health promotion and disease prevention [ 10 , 11 ]. Models of care such as the medical home and the chronic care models, among the most often cited, have shown great potential for achieving such results [ 11 - 15 ]. 
However, researchers have paid much less attention to the structure and processes developed at the organizational level, in which these models of care can be implemented and which require certain organizational conditions for their successful implementation [ 16 ]. Several organizational attributes have been associated with better performance of PHC organizations [ 17 ]. For example, physician payment modalities have a determining effect on their practice. Fee-for-service is associated with greater productivity but less continuity of care when contrasted with per capita prepayment, which encourages more continuity and prevention [ 18 , 19 ]. Although it is possible to identify the effect of individual attributes of organizations on various process or outcome indicators, it remains more difficult to understand how these attributes relate to each other in actual organizations and systems. However, studies that focused on comparisons between different types of PHC organizations or systems (e.g. Kaiser or Veterans Administration) have provided enlightening results [ 20 , 21 ]. Although differences between types of organizations could be due to specific organizational attributes, understanding the effect of various organizational characteristics in a systemic perspective remains a challenge [ 22 ]. Hence, there is a need for a more holistic view in the study of healthcare organizations and systems. The configurational approach, which views an organization as a whole rather than a set of independent attributes, is instructive in this regard [ 23 , 24 ]. This view seems to best match the representation held by decision-makers of what an organization really is [ 25 ]. "In essence, a configurational approach suggests that organizations are best understood as clusters of interconnected structures and practices, rather than as modular or loosely coupled entities whose components can be understood in isolation" [ 26 ]. 
Configurations are "represented in typologies developed conceptually or captured in taxonomies derived empirically" [ 23 ]. Taxonomies are generally derived from cluster-analytic methods, thus forcing similar organizations to form homogeneous groups [ 26 - 29 ]. A complementary measure is a deviation score [ 30 ]. In this case, the researcher defines an ideal-type of attributes based on theoretical considerations and then calculates a score of conformity to this ideal-type, based on empirical observations [ 26 ]. One way to conceptualize the various organizational models derived from the configurational approach is to consider them as systems of organized action defined by four sets of attributes: vision, resources, structure and practices [ 31 ]. As it applies to PHC organizations, vision corresponds to the values and representations shared by the actors [ 1 , 16 ]. Structure refers to the interaction and regulation among actors, such as interprofessional collaboration and governance. Resources are defined by the type and level of various resources (human and material) and their arrangement. Finally, practices comprise mechanisms for offering services, developing multidisciplinarity and ensuring follow-up of patients. This approach has been used in our previous work. In a recent policy synthesis, we derived a taxonomy of four models: two professional and two community models [ 16 ]. Following the same methodological approach, but using data on PHC organizations in two regions, we derived another taxonomy that is very consistent with the policy synthesis. We found only one community model, but four professional ones: the single provider, the contact, the coordination, and the integrated coordination models [ 1 ]. In order to contrast models from a normative standpoint, we also constructed an index of conformity to an ideal-type, based on the literature on group practice and on the various policy documents on new emerging forms of PHC organizations (such as the FMG). 
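The conformity index just described is a deviation score: an ideal profile is defined over a set of organizational attributes, and each organization is scored by its distance from that profile. A minimal sketch in Python, in which the attribute names and the normalized Euclidean scoring are assumptions for illustration (the study's actual index draws on the group-practice literature and policy documents):

```python
import numpy as np

# Hypothetical organizational attributes, for illustration only; the real
# instrument covers vision, resources, structure and practices.
ATTRIBUTES = ["extended_hours", "nurse_on_staff", "patient_registration",
              "shared_records", "walk_in_access"]

# Normatively defined ideal-type: every attribute fully present (scored 1).
IDEAL_TYPE = np.ones(len(ATTRIBUTES))

def conformity_score(org_profile):
    """Conformity as 1 minus the normalized Euclidean deviation from the
    ideal-type: 1.0 means identical to the ideal, 0.0 maximally distant."""
    org = np.asarray(org_profile, dtype=float)
    max_dist = np.sqrt(len(IDEAL_TYPE))  # distance of the all-zero profile
    return 1.0 - np.linalg.norm(org - IDEAL_TYPE) / max_dist

clinic_a = [1, 1, 1, 0, 1]  # close to the ideal-type
clinic_b = [0, 0, 1, 0, 0]  # far from it
print(round(conformity_score(clinic_a), 2))
print(round(conformity_score(clinic_b), 2))
```

Scored this way, conformity is a continuous variable, which is what allows the change in conformity between 2005 and 2010 to be modelled as a continuous outcome later in the protocol.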
Not only do these models or archetypes provide a holistic view of an organization, compared to other forms of organizations derived from the same taxonomy, but they also permit the assessment of change over time, when an organization passes from one archetype to another [ 23 , 25 , 30 ]. Comparing the archetypes or models that specific organizations belong to at different points in time is thus a sensitive measure of organizational change. What does the literature tell us about factors associated with PHC organizational change? Institutional theory of organization has become widely used to explain organizational change [ 32 - 34 ]. According to this theory, the environment exerts a determining influence on organizations, which tend to take a similar form within an organizational field (the sharing of common norms and values), leading to a certain degree of homogeneity called isomorphism [ 32 , 35 , 36 ]. In the public sector, geographically defined territories such as Local Networks can exert such an influence [ 37 , 38 ]. Environmental pressures exerted on organizations are of three types: coercive, normative and mimetic [ 36 ]. Coercive pressures refer to laws, regulations and state policies. As Scott [ 38 ] points out, the state has the definitive ability to apply these kinds of pressures either by law or by introducing strong incentives in financing publicly-supported organizations. The two measures introduced by the Québec Government to create FMGs and Local Centres are essentially of this kind. Normative pressures are very prevalent in an environment of professional organizations such as the healthcare system. They refer to values and norms held by professional associations that tend to permeate organizational boundaries [ 33 , 39 , 40 ]. Hence, local professional associations and leaders have normative influences on PHC organizations through their links with professionals in these organizations [ 38 , 39 ]. 
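Comparing the archetypes organizations belong to at two time points reduces to a transition matrix between the two taxonomies. A hypothetical sketch (the clinic identifiers and model assignments are invented for illustration; the model labels follow the taxonomy described above, and only organizations observed at both times contribute):

```python
from collections import Counter

# Hypothetical model assignments at the two time points.
models_2005 = {"clinic1": "contact", "clinic2": "single provider",
               "clinic3": "contact", "clinic4": "coordination"}
models_2010 = {"clinic1": "coordination", "clinic2": "single provider",
               "clinic3": "coordination", "clinic4": "integrated coordination"}

def transition_matrix(t1, t2):
    """Count organizations moving from each model at time 1 to each model
    at time 2, restricted to organizations observed at both times."""
    return Counter((t1[org], t2[org]) for org in t1.keys() & t2.keys())

moves = transition_matrix(models_2005, models_2010)
for (src, dst), n in sorted(moves.items()):
    print(f"{src} -> {dst}: {n}")
```

Off-diagonal cells of this matrix are the migrations of interest; diagonal cells (such as the single-provider clinic that stays put) indicate organizational stability.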
Finally, mimetic pressures stem from organizations that others consider exemplary and tend to imitate. FMGs and Network Clinics can be seen by other clinics as model PHC organizations, thus generating mimetic pressures on these clinics. Although organizations within an organizational field tend to converge to some form of isomorphism in response to these pressures, they do not react in exactly the same manner [ 38 ]. There are intrinsic characteristics of organizations, mainly related to dominant values held by their professionals and the role played by influential actors, that make them more or less sensitive and receptive to these pressures [ 38 ]. For instance, clinics that already collaborate with other clinics may have a higher propensity to respond to mimetic or normative pressures [ 38 ]. These three types of pressure do not necessarily act in the same direction and can even neutralize each other's influence. This was the case in the implementation of CLSCs (Local Community Services Centres) in Québec. The Government policy aiming to establish a public health and social services organization (coercive pressure) was opposed by professional medical associations, which encouraged their members not to practice in CLSCs (normative pressure) and reactively developed a network of privately owned group-practice clinics (mimetic pressure) [ 4 , 41 ]. The opposition and reaction of organized medicine to the CLSC project was a major obstacle to making the CLSC the point of entry into the system. This illustrates the point that, in order to yield maximum organizational change, these pressures need to align in the same direction. What does the literature tell us about the effects of PHC organizations in the context of reforms? The contribution of PHC in achieving health objectives has been largely documented [ 42 , 43 ]. 
Systems based upon well-organized PHC perform better in many respects, notably experience of care (continuity, accessibility, comprehensiveness, responsiveness) [ 42 , 44 ]. They also report a more appropriate use of services, as reflected by a lower use of hospital and emergency care [ 45 ]. Reforms of PHC organizations and local organization of healthcare services have been the subject of various evaluative studies in Canada [ 46 ]. Studies in Québec, Ontario, Manitoba and British Columbia have highlighted the positive impact of new forms of PHC organizations integrating desirable attributes of experience of care [ 7 , 41 , 47 - 52 ]. Studies have focused on understanding the process of organizational changes using a case study approach [ 16 , 53 ], on linking experience of care and use of services provided by a limited number of organizations [ 48 , 54 ], or on using administrative data files or population surveys. None of these studies have nominally linked service users with their regular source of care [ 55 - 57 ]. Overall, these studies have highlighted some benefits of emerging models of PHC in various provinces, with community-oriented models and those promoting coordination of care showing the best results regarding the experience of care of patients and regarding professional collaboration and satisfaction. The gap in knowledge and need for evaluating PHC reforms Ongoing or recently completed studies in Québec focus on various aspects of organizational performance [ 1 , 48 , 53 ]. One study explored factors associated with the implementation of FMGs [ 53 ]. A multiple case study approach found a positive association between nurse-physician collaboration and experience of care [ 58 ]. An ongoing study using a cross-sectional design is looking at the relationship between types of PHC organizations and experience and quality of care [ 59 ]. 
A study currently underway adopts a longitudinal perspective to look at the implementation of Local Centres and the impact on utilization and experience of care [ 60 ]. To our knowledge, no studies have assessed the evolution of PHC organizational models, identified the factors that can explain changes, and evaluated their impact on population-level indicators. In addition, we did not find studies that have assessed the impact of PHC reforms on the level of inter-organizational collaboration. Our study includes all PHC organizations in two large regions, a sample of the population with representativeness at the Local Network level, and nominal linkage with the regular source of care. This evaluation of the evolution of models of PHC and of its population-level impact is required to guide the continuation and completion of the PHC reform and assess the improvement in capacity to respond to the needs and expectations of populations. Such knowledge is crucial given the difficulties of reforming PHC in pluralistic contexts, such as Canada, and the relatively high costs that such reform demands. Decision-makers need to understand what promotes organizational change and how change and its benefits may be sustained. Conceptual framework Our conceptual framework is presented in figure 1 . According to this framework, organizational models (OM) of PHC and the inter-organizational collaboration (OC) between PHC organizations influence the organizational performance (OP) of PHC systems. In addition, certain factors have an impact on the evolution of PHC organizational models and on inter-organizational collaboration through a period of transformation (Time 1 and 2). These factors relate both to the policies established by governments and to more implicit organizational environments. The implementation of Local Centres and Networks is seen as exerting a coercive influence on the evolution of PHC organizations. 
We expect the integrating influence of Local Centres will increase networking as expressed by inter-organizational collaboration among all organizations within the territory. Specific interventions or regulations can in fact influence the ways PHC settings organize various aspects of care. Examples of such interventions can include the funding of specific initiatives by local health authorities, development of specific organizational projects under the impetus of coordinating bodies or modification of relationships between organizations because of restructuring services at various levels of Local Centres and Networks. The introduction of a new organization policy has a direct effect on the implementation of emerging forms of PHC such as FMGs through explicit policies aimed at promoting change in the way care is organized. The implementation of new forms of organizations can also have a mimetic influence on the other forms of PHC organizations and the inter-organizational collaborations in place. In addition to these contextual influences, some characteristics and attributes of PHC organizations make them proactive or more receptive towards change. These attributes can be related to the presence of a designated team leader, or their organizational culture (e.g. concordance between dominant organizational values and current proposals of reform). Professional influence relates to the presence of leaders and professional organizations that apply pressure on PHC organizations towards accepting or opposing changes. These influences include elements such as the official position of medical representatives regarding specific policies or the presence of a local champion promoting a specific model of PHC organization. These changes are expected to translate into an increased organizational performance at two levels: first, at the level of the clientele of these organizations and second at the level of the populations of each Local Network. 
We use performance here in a very broad sense to include various indicators of the effects of PHC organizations [ 61 ]. We expect that change towards new forms of organizations at the level of Local Networks will be associated with improved population coverage (e.g. affiliation with regular sources of care and unmet needs for care), process of care (utilisation of services and patients' experience of care, such as accessibility, continuity, comprehensiveness, responsiveness) and outcomes of care (e.g. perceived results of care, receipt of preventive services, preventable hospitalizations and emergency room consultations) (see Additional file 1 for details of measures). Study objectives The goal of this research project is to understand the evolution of PHC organizational models and their relative performance through the process of PHC reform, and to assess factors, at the organizational and contextual levels, associated with the transformation of PHC organizations and their performance. More specifically, the objectives are: 1. to assess the magnitude and direction of organizational change and migration among models of PHC, between 2005 and 2010, at the PHC organization and Local Network levels, as expressed by: 1) the prevalence and local configuration of PHC organizational models; 2) conformity of PHC organizations to a normatively defined ideal-type of organizational characteristics; and 3) the degree of collaboration between PHC organizations within and outside the Local Network; 2. to determine the association of these organizational changes of PHC with factors related to the implementation of Local Networks and policies aiming at promoting new forms of PHC organization, as well as factors related to the receptivity of PHC organizations and the influence of professional associations; 3. 
to examine the association between these organizational changes and various indicators of PHC performance (coverage, process and outcomes of care), both at the organizations' clientele and the Local Networks' population levels.
Methods/Design Overall study design This study employs a mix of cross-sectional and retrospective longitudinal design methods. It is also hierarchical in nature, with nested levels of observation: individuals affiliated to PHC organizations, which are located within specific Local Networks. This study will draw from four different sources of data to address the identified research questions. These four sources of data consist of: 1) individual-level data from a population survey of people's utilisation and experience of PHC; 2) individual-level data from administrative databases; 3) organizational-level data from a survey of PHC clinics; 4) contextual-level information from a survey of Local Centres (cf. figure 2 ). Data collected during the period of PHC reform from 2005 to 2010 will be used. The organizational and population-level data from 2005 will come from our previously conducted study of the impact of PHC organization models on experience of care of populations [ 1 , 6 ]. New organizational and population-level surveys will be conducted in 2010 as part of this research project to reassess organizational models and configurations as well as population-level coverage, processes and outcomes five years into the reform. Retrospective administrative data covering the reform period and a survey of Local Networks will complement these data sources. Additional file 2 summarizes the research themes, data sources, measurement tools and methods. Sources of data An organization survey questionnaire will be mailed to all 665 PHC organizations in the selected regions in 2010. We will use a previously developed survey of organizations (Additional file 3 ) focusing on their vision, material, financial and human resources, current organizational structures, and organizational practices supporting service delivery as well as inter-organizational collaboration [ 8 ]. 
Strong input from the research and decision-maker team members will help promote a high response rate from PHC organizations. A total of 473 organizations participated in the study conducted in 2005, for a response rate of 71% (66% in Montréal and 81% in Montérégie) [ 1 ]. The various types of private and public PHC organizations were well represented (solo, group, CLSC, family medicine units, and FMG) in that survey. In 2010, we will conduct a contextual appraisal of Local Networks (n = 23) using a survey tool developed in collaboration with another currently funded research team [ 62 ]. This tool will assess the Local Network's characteristics with regard to interventions aiming at promoting organizational change and inter-organizational collaboration at the PHC level. Key informants selected on a purposeful basis in each Local Centre will include a management-level decision-maker and a local representative of medical associations. This survey will be complemented with information from the organization survey pertaining to the clinics' perceptions about various aspects of their organizational context and the roles played by Local Centres in the reconfiguration of PHC (questions in Other Application Materials section) (see Additional file 4 ). Concurrently with these two surveys, we will conduct a telephone population survey of randomly-selected community-dwelling individuals aged 18 and over in the 23 Local Networks of the Montréal and Montérégie regions (400 respondents in each Local Network; total sample of 9200 respondents) using the random-digit dialling method. This survey of a representative sample of the population will enable us to measure people's affiliations with PHC organizations, utilization of healthcare services and unmet needs for care, selected attributes of people's experiences of care (accessibility, continuity, responsiveness, comprehensiveness), as well as perceived outcomes of care. 
We will use a previously developed questionnaire (Additional file 5 ) including validated indices of experience of care [ 7 ]. Based on our previous work, we can expect good rates of participation in the survey, with response rates of 63% in Montréal and 66% in Montérégie (Pineault et al., 2004; Pineault et al., 2009). In order to link persons with their associated organizational model of care, we will ask participants in the population survey to identify their usual source of care using a previously developed algorithm based on validated lists of PHC organizations in the two surveyed regions (this methodology has been validated in our previous survey). To complement the information available through population surveys, we will use administrative databases comprising information regarding medical services (RAMQ), hospital-based services (Med-Echo), pharmaceutical prescriptions (Pharmacare), admissions to long-term care facilities and the death registry. The information gathered will cover the full population of the two regions and the complete span of time from 2005 to 2010. The list of indicators is provided in Additional file 1 . Analytic theme 1: Assessing the magnitude of organizational change and collaboration (Objective 1) The definition of "organization" used in this study refers to organizational entities that include one or several general practitioners offering general medical services. Therefore, private single-doctor offices are regarded as "organizations". Offices and clinics with more than one physician are also considered "organizations" whether or not physicians share a minimum number of resources (rooms, secretarial services or archives), and regardless of their degree of integration. To assess the magnitude and direction of organizational change between 2005 and 2010 at the PHC organization and Local Network levels, we will use the organizational measurement tool developed as part of a previously funded project [ 8 ]. 
Using a hierarchical classification program applied in the previous project, we will construct an organizational taxonomy based on 2010 data, and we will allocate all the organizations to models of this taxonomy through the classification component of this program [ 27 , 28 ]. This will provide us with a sensitive measure of organizational change. We will then assess, in 2005 and 2010: 1) the prevalence and local configuration of PHC organizational models; 2) conformity of PHC organizations to a normatively defined ideal-type of organizational characteristics; and 3) the degree of collaboration between PHC organizations within and outside the Local Network. The distribution of organizations on all variables of change will be compared in 2005 and 2010, globally for the two regions and for each Local Network territory. To assess the migration of organizations from one organizational model to another between 2005 and 2010, two-level regression models with organizations nested within territories will be constructed, adjusting for 2005 results. The dependent variable corresponding to the taxonomy of the organizations will be dichotomous or multinomial, depending on the focus of analysis (single-model vs multiple-model comparisons). In addition, regression models will be developed to predict the change in conformity score and level of collaboration (continuous dependent variables) at the two times of the study. Two-level models (nj = 23; nk = 450) will be built for both categorical and continuous dependent variables. The hierarchical models will be developed by the predetermined introduction of blocks of variables related to the three levels of analysis. Empty models will be developed to assess the level of variance comprised at each level of analysis. Intra-class correlations and the proportion of variance explained at each step of model building will be calculated to guide the selection of the most appropriate models. 
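The empty-model logic just described decomposes total variance into within- and between-level components, summarized by the intra-class correlation. A minimal sketch using a one-way random-effects ANOVA estimator on simulated data (the territory means, group sizes, and the average-group-size approximation are assumptions for the example; the study itself would fit these models in dedicated multilevel software):

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1): the share of total variance attributable to the grouping
    level (e.g. Local Networks), from a one-way random-effects ANOVA."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    n_bar = n_total / k  # average group size (approximation for unequal groups)
    grand = np.mean(np.concatenate([np.asarray(g, dtype=float) for g in groups]))
    ms_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum()
                    for g in groups) / (n_total - k)
    var_between = max((ms_between - ms_within) / n_bar, 0.0)
    return var_between / (var_between + ms_within)

# Simulated conformity scores for clinics in three territories with distinct
# territory means, so most of the variance lies between territories.
rng = np.random.default_rng(1)
territories = [rng.normal(mu, 0.05, size=20) for mu in (0.4, 0.6, 0.8)]
print(round(icc_oneway(territories), 2))  # close to 1: strong clustering
```

A high ICC signals that a large share of the variance sits at the territory level, which is exactly the situation in which the two-level models described above are needed rather than ordinary single-level regression.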
The modelling strategy will include fixed as well as random effect models. Bootstrapping methods could be employed to develop robust estimates of effect. Appropriate statistical packages will be employed to conduct descriptive and multilevel analyses (HLM; SAS; STATA). Analytic theme 2: Identifying organizational and contextual factors associated with organizational change (Objective 2) To determine the influence of factors associated with the implementation of Local Centres and new PHC forms, as well as the receptivity of PHC organizations and the influence of professional associations, on the changes assessed in Analytic theme 1, we will draw on information from the organization questionnaire as well as from the questionnaire addressed to Local Centres' key informants. Local Network level information and organizational level covariates will be added to the two-level regression models described in Analytic theme 1, using these variables as predictors for change in PHC organization at the local level and in inter-organizational collaboration. As in Analytic theme 1, our analysis will comprise all PHC organizations (approximately 450) and all Local Networks of the two regions (23) in 2010, paired with organizations and Local Networks in 2005. Current knowledge about hierarchical modelling suggests that these sample sizes will provide sufficient statistical power to assess the association of factors with organizational changes [ 63 - 65 ]. The same model building strategy as in Analytic theme 1 will be employed. Analytic theme 3: Assessing the impact of organizational change on the performance of PHC models (Objective 3) To address objective 3, which aims to examine the association between these organizational changes and various indicators of PHC performance, we will use data from the organizational and population components of this study. 
From the population questionnaire, we will calculate indicators of affiliation with a primary care provider, indicators of utilisation of healthcare services, and indices of PHC experience, as validated in our previous study. Using the administrative databases of the entire studied population, we will calculate indicators of utilisation and outcomes of care, such as hospitalisation for ambulatory-care sensitive conditions (see Additional file 1 for details on the indicators). These various indicators from the population survey and administrative databases will be used to contrast the level of performance of different models of PHC organizations in 2005 and 2010, as well as to compare performance at the Local Network level during this period. Hierarchical models will be constructed to identify the organizational factors associated with better results on these indices of PHC performance at the two different times of the study, controlling for age, gender, economic status and morbidity, and for the nesting of individual observations in organizational settings and of organizations in Local Network settings (three-level models). These models will include a time indicator (2005 vs 2010) as well as same- and cross-level interactions to test the magnitude and correlates of change in the performance of PHC. Particular attention will be given to the relationship between these indicators and sociodemographic and socioeconomic characteristics such as gender and vulnerability. The sample size will include more than 18,000 persons, corresponding to the pooling of the two independent samples of the population surveys in 2005 and 2010. The same model building strategy as in Analytic theme 1 will be employed. Power calculation Power calculation always poses challenges in multilevel modelling. 
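A generic form of the three-level model just described (patients i nested in organizations j nested in Local Networks k, with a time indicator) can be written out explicitly; this is a sketch of the standard specification, not the authors' exact model:

```latex
y_{ijk} = \beta_0 + \beta_1\,\mathrm{time}_{ijk}
        + \boldsymbol{\beta}^{\top}\mathbf{x}_{ijk}
        + v_k + u_{jk} + e_{ijk},
\qquad
v_k \sim N(0,\sigma_v^2),\quad
u_{jk} \sim N(0,\sigma_u^2),\quad
e_{ijk} \sim N(0,\sigma_e^2)
```

Here \(v_k\) and \(u_{jk}\) are the Local Network and organization random effects, and the cross-level interactions mentioned above would enter as products of the time indicator or individual covariates \(\mathbf{x}_{ijk}\) with organization- or network-level predictors.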
While it is well known that multilevel modelling is among the most efficient statistical approaches when the data structure involves nesting of data between levels, precise power calculation methods remain under development. However, some general rules exist to guide decisions regarding sampling and analyses in nested designs. In this study, three levels of nesting are present. Power is influenced by the smallest sample size in the hierarchical structure. In this study, some analyses will use the 23 Local Networks. However, most analyses will treat the organizational level as the smallest sampling unit. This level will include more than 300 observations, with an average of 11.6 respondents per organization. Despite variations in the number of respondents within each organization, this sample meets the standards suggested in the literature [ 63 - 65 ]. In addition, our design enables us to provide a power calculation based on our most demanding analysis, the multilevel analysis of theme 3, where patients are nested in Local Networks. To calculate the statistical power, we adopted the method of Snijders and Bosker [ 64 ], who proposed dividing the size of the sample by the design effect to obtain the effective sample size. Analyses can then be conducted as t-tests of differences between two independent samples of the effective sample size. Since in 2005 around 900 subjects were in the least frequent category of organizational model and the design effect was 1.48 (1.34 and 1.66 in each of the regions), the effective sample would be between 450 and 900 subjects for the least frequent organizational model. This allows us to detect a difference of between 0.13 and 0.19 units of standard deviation, with an α of 0.05 and a power of 80%. According to Cohen [ 66 ], this difference can be considered a weak effect. 
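The Snijders-Bosker shortcut above is straightforward to reproduce: divide the raw sample by the design effect to obtain the effective sample size, then compute the minimum detectable standardized difference for two independent samples using the usual normal-approximation z-values (1.96 for α = 0.05 two-sided, 0.84 for 80% power). A sketch using the figures quoted in the text:

```python
import math

def effective_n(n: float, design_effect: float) -> float:
    """Effective sample size after correcting for clustering
    (Snijders & Bosker shortcut: n divided by the design effect)."""
    return n / design_effect

def detectable_d(n_eff: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> float:
    """Minimum detectable difference, in standard-deviation units,
    for two independent samples of size n_eff each."""
    return (z_alpha + z_beta) * math.sqrt(2.0 / n_eff)

print(round(effective_n(900, 1.48)))   # ~608 subjects at the pooled design effect
for n_eff in (450, 900):               # effective-sample range quoted in the text
    print(f"n_eff={n_eff}: d={detectable_d(n_eff):.2f}")
# d runs from 0.19 (n_eff = 450) down to 0.13 (n_eff = 900),
# matching the 0.13-0.19 SD range reported above.
```

The endpoints of the quoted effective-sample range (450 and 900) reproduce the reported detectable differences of 0.19 and 0.13 standard deviations exactly.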
As our calculation is based on comparisons involving the smallest numbers of subjects, our method of calculation remains conservative. Study limits and strengths As with any study using respondents' perceptions (population survey, organizational survey), this study could suffer from perception bias and desirability bias, individuals being reluctant to criticize their PHC clinic's services and organizational respondents giving a biased portrait of their organization's characteristics. However, this bias should affect each type of organization in a similar way and be conservative. In addition, we will benefit from information coming from administrative databases and will be able to compare perceptual information with harder data collected through these databases. Another limitation is that the information will come from a single province and will assess specific aspects of the performance of PHC organizations. Other aspects such as economic productivity, technical quality of care or impact on health outcomes are not easily measured by population and organizational surveys. However, this survey will provide the first in-depth analysis of a PHC system with population coverage. In addition, our knowledge translation plan includes a national advisors' meeting to discuss the applicability of our results to other Canadian contexts. This study also benefits from specific strengths. First, the use of both taxonomic approaches and single-characteristic assessments will enable the researchers to assess the impact of organizations in light of their complexity, as well as to identify key characteristics that can have a specific impact on their performance. Furthermore, its longitudinal design and nominal link between users and their PHC organizations will enable the research team to assess the directionality of the associations being measured, something missing from many cross-sectional surveys of PHC organizations' performance. 
Finally, the explicit conceptual framework used in this study will enable the research team to test appropriate hypotheses with a clear explanatory framework to guide the co-creation of knowledge between decision-makers and researchers. Knowledge translation and exchange plan Our knowledge translation and exchange (KTE) strategy is based on the conceptual formulation presented by Klein [ 67 ], who distinguishes three types of evidence: scientific, organizational and political. Scientific evidence is produced by researchers. Organizational evidence concerns the feasibility of solutions emerging from scientific research. Finally, political evidence looks at the desirability of these solutions. Each type of evidence addresses a different target audience for KTE activities: the research community; decision/policy makers, including politicians and pressure groups; and the general public. In addition to presentations during scientific meetings targeting the exchange of knowledge within the scientific community, our KTE activities will target four specific audiences. The first audience is the regional and Local Network levels, particularly the two regional health agencies, which participate in the coproduction and financing of this research. The experience acquired in the preceding project and the links we have established with the decision-makers of the two agencies will facilitate our KTE activities. First, one of the decision-makers (D. Roy) is principal co-investigator of the project. Timeliness is an important condition for the use of research results by decision-makers [ 68 ]. Consequently, we will respond to invitations from the two agencies to present our preliminary findings as soon as they become available. Our experience in the preceding project has taught us that decision-makers are mainly interested in descriptive data on the experience of care of their population and of their PHC organizations. 
We intend to repeat that strategy and extend it to Local Centres, as we will provide them with a picture of their territory. A second audience is PHC clinicians. As in the preceding project, we will seize any opportunity to participate in the regional meetings of the Regional Department of Medicine of the two regions and to meet with the local medical associations. This proved very fruitful in the past, as we have established solid links with the medical leaders. In addition, at the end of the project, a feedback report will be sent to each participating clinic, showing the performance of the models of the taxonomy and to which model the clinic belongs. This feedback was greatly appreciated in the previous project and prepared the ground for future participation. A third audience is the Ministry of Health and Social Services, which has expressed great interest in and support for our project. In May 2009, we (JFL and RP) were invited, with Bill Hogg, to attend a one-day consultation meeting on the organization of PHC in Québec. We will meet on a regular basis with the persons responsible for the implementation of FMGs and the evaluation of services, to keep them informed of our findings as soon as they are produced. A fourth audience is the general public. We can count on the support of our two institutions, the Institut national de santé publique du Québec and the Direction de santé publique (Public Health Department) de l'Agence de la santé et des services sociaux de Montréal, which have a great deal of expertise and experience in publicizing research to the general population. In addition, we have established strong links with the media in previous projects that will benefit this aspect of our KTE plan. We expect that our KTE plan, by targeting these different audiences, will have a major impact. 
Knowledge translation activities will revolve around collaboration with established groups (GETOS, GRGT, INSPQ) and use their established links and networks of knowledge exchange. Presentations to the various collaborating agencies will occur, and scientific publications will be accompanied by timely policy-oriented documents. These documents will include descriptive reports related to the experience of care and organization of PHC at the local and regional levels, methodological reports related to the various components of the study, and thematic reports focusing on policy-relevant subjects (e.g. unmet needs, PHC affiliation, access for vulnerable populations). The expertise of many of our team members on this front will ensure effective knowledge translation and exchange. Examples of reports recently produced by the research team are available in the Other Application Materials section. Finally, just before we produce the final report of this research, we will organize a national meeting on PHC, where we will share with researchers, decision/policy makers, and representatives of the public the results of our research, along with those of other research teams, namely the Ottawa team, with which we have established collaborations. In preparation for this event, we will prepare a synthesis of findings produced by researchers in Ontario, Québec, and BC. This will be done following the methodology that we adopted in conducting a research collective, and specifically in integrating the decision-makers' viewpoints in producing a synthesis [ 69 - 71 ].
Background The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. Objectives In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. Methods/Design This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are evolving. 
The five year prospective period will serve as a natural experiment to assess contextual and organizational factors (in 2005) associated with migration of PHC organizational models into new forms or models (in 2010) and assess the impact of this evolution on the performance of PHC. Discussion The results of this study will shed light on changes brought about in the organization of PHC and on factors associated with these changes.
Competing interests The authors declare that they have no competing interests. Authors' contributions Understanding the evolution of PHC organizational models and their relative performance through the reform of PHC, and assessing the factors, at the organizational and contextual levels, associated with the transformation of PHC organizations and their performance, require a diversity of skills and experience. JFL, designated PI, will lead the project and be involved in all steps of the study, including knowledge translation and exchange (KTE). His experience as co-PI on the project Accessibility and Continuity of Care: A Study of PHC in Québec (CHSRF-funded) will ensure continuity between T1 and T2 of the research. Through his experience in policy making and research in PHC, he has acquired unique expertise and skills to coordinate a team of researchers and decision-makers. He read and approved the final manuscript. DAR, as Principal Decision-maker, will take part in the overall conduct of the study and strategic planning of research and KTE activities with the principal investigators. RP, co-PI, has led the project Accessibility and Continuity of Care: A Study of PHC in Québec with Dr Levesque and will serve as senior investigator and mentor for the team. He will bring an important contribution to all components of the research, including KTE. He read and approved the final manuscript. PT is co-PI. As a senior researcher and expert in disease surveillance, administrative database studies and population-based surveys, he will lead the database indicator development component of this project (Theme 3). He read and approved the final manuscript. The co-investigators will be involved more specifically in different components (Analytic themes) of the project. JLD holds a Research Chair from CIHR and CHSRF (cadre program) aimed at improving the knowledge base and KTE with regard to the analysis of organizational change. He will contribute more specifically to Theme 2. 
PL has extensive experience in the analysis of healthcare policy and systems. His experience will be useful for the conduct of the organizational component of this research (Theme 1). SP has expertise in public-health surveillance, clinical preventive practices and surveys. She will be involved in the different components of this project and more specifically in Theme 3. She read and approved the final manuscript. MDB is the current holder of the Chaire Sadok-Besrour in Family Medicine. The expertise she brings to the team includes an in-depth understanding of PHC organization and experience of care (Themes 1 and 3). DF will contribute to Theme 3 with her experience in outcome measurement, access to services, and use of administrative databases for health research. JH holds a Canada Research Chair on the population impact of primary healthcare organizations. Her extensive knowledge of the measurement of patients' experiences with PHC will contribute to Theme 3. DR was involved in the project Accessibility and Continuity of Care: A Study of PHC in Québec as co-PI with Drs Levesque and Pineault. Her extensive knowledge and research experience in qualitative research will be a great asset in Theme 2. JC has acquired vast experience in integrated care for specific populations and will bring the viewpoint of nursing in the provision of PHC (Theme 1). MF is a statistician. His expertise will be required for quantitative analysis throughout the project. MH was co-investigator and coordinator of the project Accessibility and Continuity of Care: A Study of PHC in Québec. She will ensure the link between T1 and T2 of the research, more specifically in Theme 1. AC will assist in the overall coordination of the project and will be involved more specifically in Theme 1. She read and approved the final manuscript. RBDS and MB are postdoctoral trainees who will be involved in organizational and contextual analyses respectively. 
They read and approved the final manuscript. Co-decision-makers on the project will bring substantive support in KTE activities in their specific professional sphere: FG in the medical community; MD in public health; and JR in the linkage between physicians and decision-makers. LC is active in the implementation of new forms of PHC in Montréal region. His strategic position will facilitate KTE activities among top level decision-makers. Finally, in order to achieve a bi-directional flow of information between investigators and decision-makers, we will establish an advisory committee composed of collaborators coming from clinical, management and policy fields. Regional and provincial bodies will also be part of this advisory group. This advisory committee will also benefit from the participation of recognized researchers in PHC from other Canadian provinces as well as national associations in order to broaden the scope of the study and ensure a larger transferability of the results. An innovative feature of our research program is the longitudinal nature of the program of research. We plan to complement the aforementioned areas of studies with inquiries suggested by the advisory committee throughout the program of research. This will enable the advisory committee to influence the ongoing processes of data collection and analyses. In return, this body will provide organizational insights into the findings. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2296/11/95/prepub Supplementary Material
Acknowledgements We acknowledge the contribution of the following collaborators and researchers associated with the project: D. Roy, J.L. Denis, P. Lamarche, M.D. Beaulieu, D. Feldman, J. Haggerty, D. Roberge, J. Côté, M. Fournier, M. Hamel, F. Goulet, M. Drouin, J. Rodrigue, L. Côté, and Odette Lemoine. The study benefits from the financial contributions of the Canadian Institutes of Health Research, the Fonds de recherche en santé du Québec, the Agence de la santé et des services sociaux de Montréal and the Agence de la santé et des services sociaux de la Montérégie.
CC BY
BMC Fam Pract. 2010 Dec 1; 11:95
PMC3014884
21129180
Background Diabetes mellitus and hypertension cause a significant burden of disease in Barbados [ 1 - 3 ]. Audits of primary care have shown numerous deficiencies in the quality of hypertension and diabetes care [ 4 - 6 ]. In an attempt to produce a higher quality of care, the Commonwealth Caribbean Medical Research Council, now called the Caribbean Health Research Council (CCMRC and CHRC respectively), developed and distributed practice guidelines. Managing Diabetes in Primary Care was produced in 1995 [ 7 ] and Managing Hypertension in Primary Care in the Caribbean in 1998 [ 8 ]. Subsequent audits [ 4 , 9 , 10 ] showed only limited improvement. Updated versions of the CHRC guidelines were released in 2006 and 2007 [ 11 , 12 ]. Strong implementation strategies did not accompany the release of any of the CCMRC/CHRC guidelines; implementation consisted of a pair of workshops attended by some health care practitioners. In addition, in 2001 the Ministry of Health of Barbados developed the Protocol for the Monitoring, Surveillance and Management of Diabetes Mellitus in Barbados [ 13 ], which was implemented by seminars directed at public sector health professionals. When acted upon, guidelines have been shown to have the potential to improve both the process of care and patient health outcomes [ 14 , 15 ]. However, the actual value of guidelines has seldom been assessed through formal evaluation procedures [ 16 - 18 ], and when they are evaluated, they are often found to fall short of expectations [ 14 , 17 , 19 ]. A great deal rests on the quality of the implementation strategies. Didactic lecture-based CME and mailed unsolicited materials are weak methods, while audit and feedback delivered by peers or opinion leaders, reminder systems and academic detailing are strong methods. Multiple simultaneous interventions are the strongest implementation method [ 20 ]. 
An assessment of the needs and barriers faced by practitioners and patients is valuable in assisting in the design of an effective implementation strategy. Before a guideline can affect patient outcomes it must first affect practitioner knowledge, then attitudes and finally behaviour. Practitioners need to first become aware of the existence of the guideline, and then familiar with its recommendations. Attitudes required include agreement with the recommendations, self-efficacy (the belief that one can perform the required behaviour), outcome expectancy (the expectation that a given behaviour will produce a particular outcome) and motivation to change current practice [ 21 ]. Even with the correct attitude, barriers which could be guideline, patient, practitioner, system or society related may prevent guideline adoption. The objectives of this study were to evaluate the knowledge, attitudes and practices, and the barriers faced by primary care practitioners in Barbados concerning the recommendations of available diabetes and hypertension guidelines and protocols by means of focus groups.
Methods Setting Eight publicly funded polyclinics strategically located around the island provide free comprehensive primary care, while in 2005 at least 89 private general practitioners provided service for a fee. At public sector polyclinics, patients are often seen by a nurse before the consultation with the general practitioner. A dietician and a podiatrist are available at each clinic on specific days. All polyclinics have a pharmacy. Most private practitioners work in solo or small group practices and do not employ a nurse. Robust data are not available, but it has been estimated that primary care is approximately equally split between the public and private sectors [ 22 ]. Patients in both sectors are provided, at no cost under the Barbados Drug Benefit Service, with an appropriate range of medications to treat diabetes and hypertension. Otherwise, private patients pay for all services. Focus group recruiting - public practitioners Focus group sessions were held onsite at all 8 polyclinics during the afternoon, when the workload was more likely to be light. All physicians and other practitioners providing diabetes and hypertension care who were present in the polyclinic at the time of the session and were not occupied with essential tasks were invited to attend. Including non-physician practitioners reflected the team approach to diabetes care found in polyclinics. Focus group recruiting - private practitioners Private sector primary care physicians were selected from a previously validated list containing 89 names, and private sector dieticians, podiatrists/chiropodists, pharmacists and nurses identified from the yellow pages of the telephone book were contacted by telephone. The focus group process and goals of the study were described, and then the health care worker was invited to participate in a focus group to be held in the evening at a hotel or at the main hospital. Persons agreeing to participate were reminded on the day of the meeting. 
Focus group process Following standard focus group methodology [ 23 ], a moderator's manual was prepared to meet the objectives of the study. It focussed on the practitioners' knowledge of the CCMRC diabetes and hypertension guidelines [ 7 , 8 ] and the Ministry of Health diabetes Protocol [ 13 ]; practices while caring for patients with diabetes and hypertension; attitudes to guidelines, the diseases diabetes and hypertension and to patients with these diseases; the barriers faced when trying to follow guidelines and in treating patients with these diseases; and recommendations for changes within and outside the health care sector that would help both practitioners and their patients to achieve better care and better health. Props for the focus groups included copies of the 1995 and 1998 CCMRC guidelines, the 2001 Ministry of Health diabetes Protocol, and drafts of the new CHRC guidelines which at the time were being developed (released subsequent to this study in 2006 and 2007). The moderator's manual was pilot tested on a convenience sample of private and public sector practitioners and adjustments were made as necessary. Once finalized, the manuals were followed closely in each focus group but flexibility was allowed if new concepts or problems arose during a focus group session. Focus group sessions were conducted in 2005. Two investigators, a facilitator (AOC) and a recorder (OPA), attended each session. On arrival participants were presented with a written sheet describing the focus group process, the goals and objectives of the study, and explaining that sessions would be taped but participants would remain anonymous. Participants' questions were answered, they were asked to sign a consent form, and then to complete an anonymous short questionnaire concerning their demographic details. The facilitator then started the session by reiterating the goals and the focus group process and explaining again the use of the tape recorder. 
Participants were again given an opportunity to ask questions. The facilitator then started the session questions, following the manual. The recorder took notes of the discussion. When the moderator's manual questions were completed, participants were thanked for their contribution and asked if they had any additional comments that they wished to make. The session then ended. Following each focus group session, a debriefing session was held to summarize findings, identify any problems and develop plans for future sessions. No revisions or additions needed to be made in the moderator's manual after sessions had started. Data analysis Transcriptions of the tapes were made, and then the text was divided into sections dealing with each topic of interest. If there was difficulty understanding the tapes, the notes taken at the session were consulted. Each comment was given content codes to designate the content issues contained in the comment. Information from focus groups of private physicians and polyclinic practitioners were analysed separately and compared, and a summary was then done. Ethical approval Approval was obtained from the Institutional Review Board of the University of the West Indies, Cave Hill Campus and the Ministry of Health, Barbados.
Results Thirteen focus group sessions were held. Each lasted approximately 2 hours and was attended by 2 to 10 practitioners. No attendee refused to participate. Attending the 8 polyclinic sessions were 63 persons: 17 physicians (29% male, mean age 36), 34 nurses (all female, mean age 49), 3 dieticians (all female, mean age 34), 3 podiatrists (all female, mean age 39), 5 pharmacists (all female, mean age 34), and one female dental assistant. Of these, 10 (3 physicians) reported also having a private practice. The mix of polyclinic practitioners attending the focus groups was representative of polyclinic providers with a preponderance of nurses. Professional employee lists indicate that approximately 49% of the public sector doctors and 25% of the nurses were sampled [ 24 ]. Attending the 5 private provider sessions were 20 persons: 12 physicians (7 males, 5 females, mean age 45), 1 female nurse, 3 dieticians (all female, mean age 46), 2 podiatrists (all female, mean age 35), 2 pharmacists (all male, mean age 37). Four of the physicians in this group also worked in the polyclinics; the remaining participants were strictly in private practice. Physicians were representative of private practitioners (58% male, compared to 68% male on the list of physicians) with a somewhat older age and a higher proportion of males than in the polyclinics. Knowledge CCMRC 1995 diabetes and 1998 hypertension guidelines, and the Ministry of Health 2001 diabetes protocol had been seen by 38%, 32% and 78% respectively of polyclinic practitioners, 67%, 83%, and 33% of private physicians, and 25%, 0% and 38% of non-physician private practitioners. Attitudes and Practices Most private physicians had read the CCMRC guidelines but did not follow them because they were outdated, not patient centred, difficult to remember, and did not give advice on how to tackle barriers. "I may follow guidelines but not necessarily give good care because the focus must be on the patient not the diabetic." 
Polyclinic practitioners also thought that the guidelines were not sufficiently patient centred. However many who had read the guidelines found them helpful. Private physicians were more likely than polyclinic physicians to say they followed the WHO, American Diabetes Association and JNC guidelines. They preferred them because they were available online, updated regularly and promoted at conferences. The diabetes protocol was felt to be too long. Most polyclinic practitioners had attended sessions during which the diabetes protocol was presented, and felt this method of introducing the guidelines was useful but criticized a lack of copies for non-physicians and for physicians arriving after the initial distribution. This protocol was not promoted to private practitioners. Non-physician private practitioners, like some of their public colleagues, follow their own professional guidelines and were generally unaware of the more general ones issued by CCMRC and the Ministry of Health. A reason given was that the guidelines were not circulated to them. The idea of general guidelines for all healthcare providers was rejected because of the concern that physicians would use the small amount of information concerning the non-physician practice in the guidelines and usurp the role of non-physicians; a lack of involvement in their development; and because guidelines are quickly outdated, use valuable resources and are hard to remember. Patient brochures put out by drug companies were widely used and appreciated by pharmacists. Practitioners strongly recommended that any new guidelines be heavily and repeatedly promoted (possibly by individual detailing), be kept short by using algorithms, updated regularly with loose-leaf additions, and have CD-ROM and online versions. Polyclinic practitioners recommended that there be a sign-in sheet at sessions promoting guidelines and attendance be mandatory. A patient oriented version of the guidelines was welcomed by all. 
Patient materials should address local dietary habits and footwear. Versions aimed at healthy children and young adults stressing primary prevention were also recommended. Enthusiasm for patient oriented materials came despite complaints that patients do not read brochures or watch videos being presented in the waiting room. Patients were said to be only interested in material that was colourful, in large print, consisted mainly of pictures and showed people, foods and clothes/footwear that come from Barbados. Participants felt that materials should stress the importance of self-management, as well as complications of the diseases but make clear that life with disease could still be happy and healthy if you followed advice. Practitioners also thought that a copy of the medical record could be given to patients, allowing them to see if they were achieving treatment goals, and to know what tests were due so that they budget accordingly if paying for tests in the private sector. This record could be taken to any practitioner visited allowing all practitioners to be fully informed of the patient's status. This "diabetes and hypertension passport" could be part of a patient version of the guidelines. Special programmes for patients Of the 8 polyclinics 3 had diabetes clinics, but only one a hypertension clinic. Of the remaining clinics 5 and 2 respectively had diabetes and hypertension educational programmes with one of these having an exercise programme. Staff at clinics without specific diabetes and hypertension clinics, expressed a desire to have them, and felt that they would improve care, but thought that a lack of resources, primarily human, was the barrier to having them. The team approach, with all members of the team respecting and understanding the role of all others, and all caregivers giving the same, or at least, non-conflicting information, was recommended by non-physician private practitioners. "It does not help to have tension in the team". 
It was felt that doctors did not communicate well with non-physicians. They were often unapproachable, did not understand the role of other professionals, often did not refer to non-physicians even when their services were needed, and often tried to usurp their role. Non-physician practitioners felt that they spent more time with patients than doctors did, and so might develop more rapport. Patients were more likely to divulge non-adherence to them, which allowed the practitioner to better deal with misinformation and confront denial and fear. One polyclinic practitioner warned of patient confusion caused by an uncoordinated team approach: "The dietician may say to them that they need to eat more of a particular thing but the doctor or podiatrist may give different advice". Another said, "I believe that only the dietitian should be giving nutritional advice because that is my role and what I'm an expert in". Barriers All polyclinic practitioners felt that they provided good quality care. Many of the reasons polyclinic practitioners gave for outcomes being less than ideal related to patient factors, despite the best efforts of practitioners. Patient factors 1. Patient denial, which was greater for diabetes than for hypertension because of the greater associated stigma. Denial and stigma can cause patients to hide their disease; by not allowing others to see them taking pills, using special diets etc., they were less likely to do these things despite the educational efforts of practitioners. "You have diabetics in a home and the other relatives don't even know that the individual is a diabetic". One practitioner concluded, "I think it's a denial thing, taking medication means I am sick, what does it solve when there ain't nothing wrong with me, I really don't have any diabetes". 2. A lack of understanding by patients about the disease or treatment modalities. 
This especially included knowledge of how to cook the right foods, the sources of salt in the diet, and the medication regimen; the belief that medication is only needed when symptomatic; and the belief that medication caused side effects which in fact had other causes, such as the disease itself and aging. 3. Patients being unable to afford the prescribed treatment and monitoring, e.g. diet (particularly fruit and vegetables), exercise facilities or equipment, laboratory tests in the private sector, and home BP and self blood glucose monitoring (SBGM) equipment. 4. Patients' forgetfulness and confusion, primarily concerning medication regimens. Confusion might occur when changes were made to the regimen, when the patient attended different doctors and had different regimens prescribed, and when the pharmacist made changes because of being low on or out of stock of a prescribed medication. 5. Side effects such as impotence, frequency of micturition and fatigue caused by medications, particularly those used to treat hypertension. Patients in Barbados felt they must keep the practitioner happy and falsely report taking medication without problems rather than report side effects and request a change in regimen. 6. Patients' religious beliefs or belief in alternative medicine, which led them to believe that they would be cured without treatment or through alternative treatment. Such beliefs were often not reported to practitioners. 7. Patients being "incapable of reversing 40 years of bad habits". 8. Patients' lack of time for cooking proper foods, and an environment not conducive to exercising. 9. Some patients' fear of sticking themselves, which prevented them from doing SBGM and/or from using insulin. 10. 
Late presentation by patients due to ignorance or denial, or non-adherence with advice following earlier diagnoses. It was stressed that non-adherence was more prevalent for hypertension than diabetes because patients were more likely to be asymptomatic, the sequelae of the disease seemed to worry patients less, and the medications have more side effects. "A patient's blood pressure may be sky high and looking to have a stroke any minute and they simply continue on their merry way without a clue. So if you don't feel ill why would you go to the doctor? And that is the biggest barrier we face". "Some of them feel that hypertension carries a symptom and that they can feel it in their head, and some feel that their blood pressure is too low and that it slows them down". However, the stigma of hypertension is less than that of diabetes, so patients could accept the diagnosis with less difficulty. Private sector physicians placed less blame for poor outcomes on lack of patient adherence, and more emphasis on the need to communicate with patients so that they could accept and understand their disease. "This is my job to make sure they are motivated; to do this I must understand the patient". Emphasis was placed on financial barriers in accessing care from podiatrists, dieticians and ophthalmologists, laboratory testing, "healthy" foods, exercise facilities, and drugs not provided free, e.g. those for hyperlipidemia. They particularly blamed long waiting times for appointments with public sector consultants, and the lack of feedback after the patient had been seen, as barriers to care for those who could not afford additional private care beyond the family physician. Results of diagnostic tests from the public system often took too long to be available. Free medication was not valued, so patients did not take it properly. Some felt that a very small charge would result in patients valuing medication more. 
However, it was also felt patients preferred to take free and easily available medication rather than try to change lifestyle. System barriers 1. Lack of access to needed investigation tools and teaching aids in the polyclinic setting such as haemoglobin A1c reagents, blood tubes required to carry out various tests, large and children's sized blood pressure cuffs, equipment for the podiatrists, and educational videos and models used by dieticians in teaching. In the private sector the cost of tests and care provided by dieticians, podiatrists and others was a problem. 2. Lack of a reliable supply of medications . Even though medication was free, and one month's medication should be dispensed at a time, patients were often given less than this at polyclinic pharmacies because of supplies running low. This required the patient to return for more medication before the end of the month, confusing the patient and contributing to non-adherence. It was reported that medications required to control the most resistant cases were not covered by the formulary and most patients could not afford them. The process to obtain them free was too burdensome. On the other hand, completely free medications could lead to patients undervaluing them. 3. Lack of human resources to deal adequately with the volume of patients presenting to polyclinics. This leads to inadequate interaction with and education of patients, unacceptable waiting times and an inability to visit patients in the community. Barriers faced by patients When asked about barriers that patients face when trying to maintain their health, practitioners listed the following: 1. Difficulty obtaining time off work or from daily responsibilities to attend clinic. 
This, combined with long wait times for both polyclinic visits and pharmacy supplies, means that patients often do not have the time to take in the education offered and may leave without medication, or may not return when medication should be renewed, leading to non-adherence. 2. Cultural barriers such as the typical Barbadian footwear; a diet that involves large meals (which are interpreted as a sign of love), and is high in salt and fat; the stigma of chronic disease (leading to patients hiding their disease and eating all foods to pretend they do not have disease); cultural attitudes to medication ("you only take it when you feel sick", "bush tea is good medication"); cultural attitudes to exercise (only for the young and must be a long hard workout or it is not exercise); cultural attitudes to obesity (men like fat women, if you are thin you have AIDS); high alcohol consumption by men; societal acceptance of the medical model where patients do not take responsibility for maintaining their health. Overeating at certain times such as Christmas and of seasonal fruits e.g. mangoes was seen as a problem for diabetes care. 3. A lack of support e.g. living alone without the means or energy to cook properly for oneself, or relying on an unsympathetic family cook who is unwilling to change the standard fare or cook a separate meal for the patient. 4. Patients were well aware of the side effects of medication and anticipated them. Medication package inserts, which overemphasized side effects, contributed to this problem. How the health care system could help providers improve the health of those with diabetes and hypertension Suggestions included educating both the public and persons with the condition, screening programmes, providing free home monitors, and staffing issues. 1. Education campaigns should be similar to that for AIDS. Education on diet, exercise and obesity should be done in schools, and through the media (television especially). 
Messages should aim to get fruits and vegetables into the diet. Messages could be incorporated into popular songs, school skits, cartoons and other sites such as billboards and posters that appeal to young people. Group education/support programmes for new patients should be developed. Content should include patient responsibility for self-management including monitoring, and tackle the denial, stigma and avoidance of disclosure issues. 2. Screening programmes in malls, worksites and other public places can reduce the incidence of late diagnosis. 3. Home monitors should be available free to those who have been taught how to use them, to encourage responsibility and allow immediate feedback of the effects of diet and exercise. 4. The team approach should be encouraged by setting up case conferences involving all caregivers for problem cases, redesigning clinic space so team members can work together, bringing in resources to augment the team such as behaviour change experts and exercise experts. 5. Human resources should be increased and also better utilised so that more time could be spent with patients. By freeing nurses from tasks such as phlebotomy they would be able to do the tasks they were trained for (e.g. community outreach and home visits). 6. Physicians should be required to participate in continuing medical education (CME). There should be improved training for nurses in chronic disease management, with those involved in patient care and not managers getting priority to attend courses. In addition to many of the recommendations above, private physicians thought that improving communication between the public and private sectors, facilitating timely care in the public sector for services patients could not afford privately, and providing financial support for patients who could not afford required tests, monitors and drugs would be helpful. 
Rapid access of private patients to the public system for diagnostic tests, dieticians, podiatrists, internists, ophthalmologists and cardiologists should be allowed. When patient care is shared there should be improved communication by the polyclinics and public sector specialists. Patients should not be scolded for crossing between the systems because, if they are, they will hide visits to one from the other, leading to poor care. The patient passport, or a centralized information system with data on each patient accessed by all caregivers would solve many problems. The latter was highly recommended if resources are available. There should be improved access to specially authorised drugs (drugs that are not automatically available free). The passport would help empower the patient to be the coordinator of his or her own care. One participant suggested more educational emphasis on the simpler, cheaper drugs for hypertension to counter the promotion of the expensive ones. The money saved could be used to fund the other recommendations. 
How wider society could help providers improve the health of those with diabetes and hypertension Suggestions involved educational outreach to promote family support in managing the condition (cooking, encouraging exercise, giving insulin); a greater role for volunteer groups and retired persons in providing education, support, exercise groups and screening programmes; starting associations for hypertension and hyperlipidemia similar to the diabetes association or perhaps one association for all three conditions; the provision by the government of sidewalks and cycling lanes for safe exercise; healthy food choices at schools and work places; a tax on unhealthy fast food and an attempt to bring down the cost of healthy food by the government; a requirement that fast food outlets provide healthy alternatives; labelling of all food to include fat, salt and calorie content; encouraging a kitchen garden programme; time off by employers to attend appointments; and prominent persons with the disease should speak out to reduce stigma, and give hope that a good life can be had while living with chronic disease. In addition false and/or unhealthy messages given by the media on alcohol, cigarettes, candy, fast foods, herbal medicines, weight loss and the use of automobiles should be countered. It was suggested that the proceeds of a "fat tax" on unhealthy food could go to health care and to reduce the price of healthy foods.
Discussion This study showed that many practitioners did not use regional guidelines. It also showed that the family physicians, nurses, dieticians, podiatrists and pharmacists who comprise the health care team responsible for most primary diabetes and hypertension care have much to contribute both to guideline development and implementation. Practitioners can become aware of guidelines by diffusion (distribution of information with unaided adoption), as in the case of the CCMRC guidelines; by dissemination (communication of information to improve knowledge and skills), as was the case for the diabetes protocol for polyclinic practitioners; or by implementation (active dissemination including strategies to overcome barriers) [ 20 ]. In many cases the lack of use of guidelines would have been due to a lack of awareness that they exist, or to a lack of familiarity with their contents. Other reasons included a lack of agreement with the guidelines (the belief that following them may not give good care), guideline-related barriers (too long, and hard to remember), and patient, physician, and system barriers. Practitioners thought that they provided a good quality of care. However, a chart audit conducted at the same time as this study revealed significant deficiencies in the frequency with which recommended processes of care were performed and in the attainment of control targets [ 9 , 10 ]. Physicians often overestimate the quality of care they provide [ 25 - 27 ]. Without the benefit of an audit with feedback, practitioners may not realise how far short they fall of meeting targets set by the guidelines, and have no direction in how to improve their care. The creation of a patient version of the guidelines may also assist, as patients will be in a better position to monitor their care and request interventions. Physicians may attribute a less than ideal outcome primarily to patient non-adherence [ 27 ]. 
It has however been suggested that a major obstacle to better BP control is the failure of the physician to intensify therapy [ 28 , 29 ]. Higher levels of communication, guidance and familiarity with behaviour change techniques by physicians may be necessary to overcome patient factors such as denial; poor understanding of the condition, and treatment; the limitations of alternative medicine and religious beliefs; ingrained habits and fears; and lack of time management skills necessary for appropriate exercise and food preparation. There might possibly be a lack of self-efficacy (the belief that one can adequately perform the required process of care) or outcome expectancy (the process will produce the desired outcome) with regards to interventions aimed at these patient factors. Understanding behaviour change models such as the transtheoretical model of behaviour change may help practitioners to focus less on patient failure, and more on the importance of matching interventions and expectations to the patient's readiness to change [ 30 ]. The understanding that sustained behaviour change is not usually a discrete single event but a gradual shift requiring varying amounts of time might lower physician frustration during the change process, and increase self-efficacy. Private sector physicians placed less emphasis on patient adherence, and a greater emphasis on a need to communicate with patients than polyclinic practitioners. It is possible that differences in patient characteristics and system factors including time and incentives might account for some of this difference between practitioners. Practitioners identified system barriers, which centred on the availability, timely access and cost of the necessary resources for achieving good care. Implementation of guidelines would involve putting measures in place to correct these barriers. Some deficiencies might be easier to correct than others. 
Supplying an adequate number of appropriate size blood pressure cuffs should be simple, but to improve care, staff would have to use them. Free laboratory tests for private patients would be more costly, but even a modest decrease in the cost to patients might increase the frequency with which certain tests are done. Cost effective prescribing could save money. It is likely that physicians are exposed far more frequently to detailing of the more expensive drugs by pharmaceutical representatives, than to strictly academic presentations stressing evidence based use of lower cost drugs. Some of the barriers faced by patients would be beyond the scope of the individual practitioner to influence. However some issues such as difficulty taking time off work could be minimised by having specific appointment times in polyclinics and a better customer focus at pharmacies. Systems could be put in place to monitor waiting times and maximum waiting time targets set. The interdisciplinary team approach with special clinics and programmes appeared to be valued by polyclinic practitioners. On the other hand private non-physicians did not think that their skills were properly utilised by physicians, and some feared that if physicians were more knowledgeable they would use their services even less. The integrated team approach as a model for the treatment of chronic illness is not a new concept. It may be more effective in improving outcomes than traditional face-to-face physician visits, but adoption requires a shift in how providers view their roles and relationships, both with patients and with professionals in other disciplines [ 31 , 32 ].
Conclusions Barbados must take a societal approach if it is to change sufficiently to improve the health of citizens living with diabetes and/or hypertension. The health care system alone cannot be expected to change lives and behaviours. However, the health care system must also change to improve the care of those with these diseases. This change can start with an intensive, multi-pronged implementation of new guidelines, including compulsory educational sessions for health care providers, academic detailing and audit and feedback techniques. But it must go beyond such efforts to include improved public and patient education, system improvements in the health care system and improved patient access to essential care.
Background Audits have shown numerous deficiencies in the quality of hypertension and diabetes primary care in Barbados, despite distribution of regional guidelines. This study aimed to evaluate the knowledge, attitudes and practices, and the barriers faced by primary care practitioners in Barbados concerning the recommendations of available diabetes and hypertension guidelines. Methods Focus groups using a moderator's manual were conducted at all 8 public sector polyclinics, and 5 sessions were held for private practitioners. Results Polyclinic sessions were attended by 63 persons (17 physicians, 34 nurses, 3 dieticians, 3 podiatrists, 5 pharmacists, and 1 other), and private sector sessions by 20 persons (12 physicians, 1 nurse, 3 dieticians, 2 podiatrists and 2 pharmacists). Practitioners generally thought they gave a good quality of care. Commonwealth Caribbean Medical Research Council 1995 diabetes and 1998 hypertension guidelines, and the Ministry of Health 2001 diabetes protocol had been seen by 38%, 32% and 78% respectively of polyclinic practitioners, 67%, 83%, and 33% of private physicians, and 25%, 0% and 38% of non-physician private practitioners. Current guidelines were considered by some to be outdated, unavailable, difficult to remember and lacking in advice to tackle barriers. Practitioners thought that guidelines should be circulated widely, promoted with repeated educational sessions, and kept short. Patient oriented versions of the guidelines were welcomed. Patient factors causing barriers to ideal outcome included denial and fear of stigma; financial resources to access an appropriate diet, exercise and monitoring equipment; confusion over medication regimens, not valuing free medication, belief in alternative medicines, and being unable to change habits. System barriers included lack of access to blood investigations, clinic equipment and medication; the lack of human resources in polyclinics; and an uncoordinated team approach. 
Patients faced cultural barriers with regards to meals, exercise, appropriate body size, footwear, medication taking, and taking responsibility for one's health; and difficulty getting time off work to attend clinic. Conclusions Guidelines need to be promoted repeatedly, and implemented with strategies to overcome barriers. Their development and implementation must be guided by input from all providers on the primary health care team.
Competing interests The authors declare that they have no competing interests. Authors' contributions OPA, AOC participated in the conception and design of the study; the acquisition, and interpretation of data; and drafting and revising the manuscript critically. Both authors have read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2296/11/96/prepub
Acknowledgements This research was funded by a Caribbean Health Research Council grant.
CC BY
BMC Fam Pract. 2010 Dec 3; 11:96
PMC3014885
21143879
Introduction Metabolic syndrome (MetS), a clustering of cardiovascular risk factors such as insulin resistance, hypertension, glucose intolerance, hypertriglyceridemia, and low high-density lipoprotein cholesterol (HDL-C) levels, is a major worldwide public health problem. MetS increases the risk of atherosclerotic disease, diabetes [ 1 , 2 ], and cardiovascular disease (CVD) [ 3 ]. MetS affects 13.3% to 24.4% of Japanese men ≥ 30 years of age [ 4 , 5 ]. With the continuous increase in obesity prevalence in Japan, MetS may become even more common. Recent data support the concept that high-sensitivity C-reactive protein (hsCRP) is an inflammatory marker and independent predictor reflecting the early stage of CVD [ 6 ]. Several studies have demonstrated that hsCRP is induced by cytokines produced by accumulated adipocytes, and is therefore increased in subjects with MetS [ 7 , 8 ]. Serum gamma-glutamyl transferase (GGT) is an enzyme present on cell surfaces and in serum that contributes to the extracellular catabolism of glutathione (GSH); most serum GGT is derived from the liver [ 9 ]. GGT is also a clinical marker of several factors: alcohol consumption, body fat content [ 10 ], plasma lipid/lipoprotein [ 11 , 12 ] and glucose levels [ 12 - 14 ], blood pressure [ 12 , 14 ], and metabolic syndrome [ 14 , 15 ]. It is also associated with CVD [ 14 , 15 ] and CVD mortality [ 14 - 16 ]. In addition, Taki et al. [ 17 ] reported that GGT showed a significant correlation with hsCRP, suggesting a possible interaction between these two key markers. However, there are few reports on the relationship between CRP, GGT and MetS in Japan. The aim of this study was to determine whether increased hsCRP and GGT levels are interactively associated with MetS, using cross-sectional data from Japanese community-dwelling participants.
Methods Subjects Participants were recruited in 2002 at the time of their annual health examination in a rural town in Ehime prefecture, Japan, with a total population of 11,136 (as of April 2002). Among the 9,133 adults (4,395 male) aged 19 to 90 years in this population, a random sample of 3,164 (34.6%) subjects was recruited. Other characteristics such as smoking and alcohol habits, and medication, were investigated in individual interviews conducted using a structured questionnaire. The final study sample included 1,919 eligible persons. All procedures were approved by the Ethics Committee of Ehime University School of Medicine and each subject gave informed consent to participate. Evaluation of Risk Factors Information on demographic characteristics and risk factors was collected from the clinical files. Body mass index (BMI) was calculated by dividing weight (in kilograms) by the square of the height (in meters). Blood pressure was measured on the right upper arm with an appropriately sized cuff, using an automatic oscillometric blood pressure recorder (BP-103i; Colin, Aichi, Japan), while the subjects were seated after having rested for at least 5 min. Smoking status was defined as the number of cigarette packs per day multiplied by the number of years smoked (pack-years), and the participants were classified into never smokers, past smokers, light smokers (<30 pack-years) and heavy smokers (≥30 pack-years). Daily alcohol consumption was measured using the Japanese liquor unit, in which one unit corresponds to 22.9 g of ethanol, and the participants were classified into never drinkers, occasional drinkers (<1 unit/day), light drinkers (1-1.9 units/day), and heavy drinkers (≥2 units/day). 
Total cholesterol (T-C), triglycerides (TG), HDL-C, fasting blood glucose (FBG), creatinine (enzymatic method), uric acid, immuno-reactive insulin (IRI), plasma high molecular weight (HMW) adiponectin (FUJIREBIO, Tokyo, Japan), hsCRP, and GGT were measured in fasting samples. Plasma hsCRP concentration was measured using a Behring BN II nephelometer (Dade Behring Inc., Marburg, Germany); the inter- and intra-assay coefficients of variation were 3.2% and 6.7%, respectively. Serum GGT concentration was assayed with an automatic analyzer (TBA-c6000, TOSHIBA, Tokyo); the intra-assay coefficient of variation was 0.87% to 2.11%. Low-density lipoprotein cholesterol (LDL-C) level was calculated by the Friedewald formula. Participants with TG levels ≥400 mg/dL were excluded. Estimated glomerular filtration rate (eGFR) was calculated using the following equation: eGFR = 194 × Cr^(-1.094) × Age^(-0.287) × 0.739 (if female) [ 18 ]. Participants with an eGFR <30 mL/min/1.73 m² were excluded. Homeostasis model assessment of insulin resistance (HOMA-IR) was calculated from FBG and IRI levels using the following formula: {FBG (mg/dL) × IRI (mU/mL)}/405 [ 19 ]. Insulin resistance was defined as HOMA-IR ≥2.6. Metabolic Syndrome We applied condition-specific cutoff points for MetS based on the modified criteria of the National Cholesterol Education Program's Adult Treatment Panel (NCEP-ATP) III report [ 20 ]. 
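The derived variables above (Friedewald LDL-C, the Japanese eGFR equation, and HOMA-IR) are simple arithmetic on the measured values. A minimal sketch, with the constants taken from the formulas in the text; the function and variable names are illustrative, not from the original study:

```python
# Derived variables as described in the Methods; constants are from the text,
# names are illustrative (hypothetical, not the study's actual code).

def ldl_friedewald(total_chol, hdl, tg):
    """Friedewald LDL-C (mg/dL); not applicable when TG >= 400 mg/dL, as excluded in the study."""
    if tg >= 400:
        raise ValueError("Friedewald formula not applicable for TG >= 400 mg/dL")
    return total_chol - hdl - tg / 5.0

def egfr_japanese(creatinine, age, female):
    """Japanese eGFR equation: 194 x Cr^-1.094 x Age^-0.287 (x 0.739 if female)."""
    egfr = 194.0 * creatinine ** -1.094 * age ** -0.287
    return egfr * 0.739 if female else egfr

def homa_ir(fbg_mg_dl, iri_mu_ml):
    """HOMA-IR = FBG (mg/dL) x IRI (mU/mL) / 405; >= 2.6 defined insulin resistance."""
    return fbg_mg_dl * iri_mu_ml / 405.0

print(ldl_friedewald(200, 50, 150))  # 120.0 (mg/dL)
print(round(homa_ir(110, 10), 2))    # 2.72, i.e. above the 2.6 insulin-resistance cutoff
```

Note that the eGFR equation uses negative exponents, so lower creatinine and younger age both raise the estimate; subjects with a result below 30 mL/min/1.73 m² were excluded.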
Metabolic syndrome was defined as the presence of at least three of the following five conditions: 1) obesity: BMI ≥25.0 kg/m² according to the guidelines of the Japanese Society for the Study of Obesity (waist circumference was not available in this study) [ 21 , 22 ]; 2) raised blood pressure: systolic blood pressure (SBP) ≥130 mmHg and/or diastolic blood pressure (DBP) ≥85 mmHg, and/or current treatment for hypertension; 3) hypertriglyceridemia: TG ≥1.69 mmol/L (150 mg/dL); 4) low HDL cholesterolemia: HDL-C <1.04 mmol/L (40 mg/dL) in men and <1.30 mmol/L (50 mg/dL) in women; and 5) impaired fasting glucose (IFG): FBG ≥6.1 mmol/L (110 mg/dL) or current treatment for diabetes mellitus. Statistical Analysis Data are presented as mean ± standard deviation (SD) unless otherwise specified; parameters with non-normal distributions (TG, IRI, FBG, HOMA-IR, and GGT) are shown as median (interquartile range) values. In all analyses, parameters with non-normal distributions were log-transformed. As several background differences between men and women were demonstrated by previous studies [ 2 , 16 , 22 ], statistical analysis was performed separately by sex using PASW Statistics 17.0 (Statistical Package for Social Science Japan, Inc., Tokyo, Japan). Differences among groups categorized by sex and presence of MetS were analyzed by Student's t-test for continuous variables or the χ²-test for categorical variables. Correlations between various characteristics and HOMA-IR were determined using Pearson's correlation. Subjects were divided into three groups based on sex-specific tertiles of serum GGT and hsCRP, which were then combined to avoid gender differences. Multiple logistic regression analysis was used to evaluate the contribution of confounding factors for MetS and each component of MetS. 
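The five-component MetS definition above reduces to a counting rule: score each criterion as present or absent and require three or more. A minimal sketch, using the mg/dL cutoffs given in the text; the dictionary keys are illustrative assumptions, not field names from the study:

```python
# Counting rule for the modified NCEP-ATP III definition used in the study.
# Dictionary keys are hypothetical; cutoffs are from the text (mg/dL units).

def mets_components(s):
    """Return the number of MetS components present for one subject."""
    return sum([
        s["bmi"] >= 25.0,                                  # 1) obesity (BMI criterion)
        s["sbp"] >= 130 or s["dbp"] >= 85 or s["htn_rx"],  # 2) raised blood pressure
        s["tg"] >= 150,                                    # 3) hypertriglyceridemia
        s["hdl"] < (40 if s["male"] else 50),              # 4) low HDL cholesterolemia
        s["fbg"] >= 110 or s["dm_rx"],                     # 5) impaired fasting glucose
    ])

def has_mets(s):
    """MetS = three or more of the five components."""
    return mets_components(s) >= 3

subject = dict(bmi=26.1, sbp=142, dbp=88, htn_rx=False, tg=180,
               hdl=38, male=True, fbg=100, dm_rx=False)
print(mets_components(subject), has_mets(subject))  # 4 True
```

The same component count is what Figure 1 averages within each hsCRP/GGT tertile group.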
The synergistic effect of CRP and GGT was evaluated using a general linear model adjusted for the following parameters: age, smoking status, alcohol consumption, uric acid, and estimated glomerular filtration rate. In addition, we examined whether the ORs for MetS and insulin resistance increased dose-dependently in relation to hsCRP and GGT within subgroups of confounding factors that affected MetS and insulin resistance (e.g., age, alcohol consumption, uric acid, medication, HMW adiponectin). A p -value < 0.05 was considered significant.
Results Characteristics of subjects The characteristics of the study participants in relation to sex are shown in Table 1 . The study included 822 men, aged 61 ± 14 (range, 20-89) years, and 1,097 women, aged 63 ± 12 (range, 21-88) years. Smoking status, alcohol consumption, history of CVD, DBP, TG, uric acid, FBG, hsCRP, and GGT were higher in men than in women, whereas age, HDL-C, LDL-C, presence of antilipidemic medication, IRI, HOMA-IR, and HMW adiponectin were higher in women than in men. There was no inter-group difference in BMI, SBP, presence of antihypertensive medication, eGFR, or diabetic medication. Association between various characteristics, and MetS and insulin resistance Table 2 shows the risk of MetS and abnormalities of its components in relation to hsCRP and GGT among the 822 men and 1,097 women. Of these, 141 men (17.2%) and 170 women (15.5%) had MetS. As shown in Table 2 , BMI, SBP, DBP, TG, LDL-C, FBG, HOMA-IR, presence of diabetic medication, hsCRP, and GGT were higher in participants with MetS than in those without in both genders, whereas HDL-C and HMW adiponectin were lower in those with MetS. Age, presence of antilipidemic medication, and uric acid were higher only in women with MetS, while eGFR was lower in women with MetS. In men, HOMA-IR correlated positively with BMI, SBP, DBP, presence of antihypertensive medication, TG, LDL-C, presence of antilipidemic medication, uric acid, FBG, presence of diabetic medication, hsCRP, and GGT, and negatively with age, HDL-C, and HMW adiponectin. In women, HOMA-IR correlated positively with age, BMI, SBP, DBP, presence of antihypertensive medication, TG, LDL-C, presence of antilipidemic medication, uric acid, FBG, presence of diabetic medication, hsCRP, and GGT, and negatively with HDL-C, eGFR, and HMW adiponectin. The adjusted odds ratios for MetS, 
its components, and insulin resistance in relation to tertiles of hsCRP and GGT As shown in Table 3 , after adjustments for age, smoking status, alcohol consumption, uric acid, and eGFR, the prevalence rate of MetS increased significantly in relation to hsCRP and GGT in both genders. In men, the ORs (95% CI) for MetS across tertiles of hsCRP and GGT were 1.00, 1.69 (1.01-2.80), and 2.13 (1.29-3.52), and 1.00, 3.26 (1.84-5.78), and 6.11 (3.30-11.3), respectively. In women, the ORs (95% CI) for MetS across tertiles of hsCRP and GGT were 1.00, 1.54 (0.92-2.60), and 3.08 (1.88-5.06), and 1.00, 1.70 (1.04-2.79), and 2.67 (1.66-4.30), respectively. In men, the ORs associated with hsCRP were significantly elevated for the MetS components obesity, low HDL cholesterolemia, and impaired fasting glucose, and the ORs associated with GGT were significantly elevated for obesity, raised blood pressure, hypertriglyceridemia, and impaired fasting glucose. In women, the ORs associated with hsCRP were significantly elevated for the MetS components obesity, hypertriglyceridemia, low HDL cholesterolemia, and impaired fasting glucose, and the ORs associated with GGT were significantly elevated for obesity, hypertriglyceridemia, and impaired fasting glucose. The ORs for HOMA-IR ≥2.6 also increased significantly in relation to hsCRP and GGT in both genders. Synergistic effect of GGT and hsCRP on mean accumulated number of MetS components and insulin resistance In addition to their direct associations, we observed a synergistic effect between hsCRP and GGT (Figure 1 ). In Figure 1 , subjects were divided into three groups (tertiles) according to CRP and GGT levels within each sex. We assessed the statistical significance of this synergistic relationship using a general linear model with the following confounding factors: age, smoking status, alcohol consumption, uric acid, and eGFR (Table 4 ). The interaction between increased hsCRP and GGT was a significant and independent determinant of MetS and HOMA-IR ≥2.6 in both genders.
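Figure 1's grouping, the mean accumulated number of MetS components across a 3 × 3 grid of CRP and GGT tertiles, can be sketched as follows. The helper assigns rank-based tertiles; the data in the example are hypothetical, not the study's measurements.

```python
def tertile(values):
    """Assign each value to tertile 0, 1, or 2 by rank within the sample."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    t = [0] * len(values)
    for rank, i in enumerate(order):
        t[i] = rank * 3 // len(values)
    return t

def mean_components_grid(crp, ggt, n_components):
    """Mean accumulated MetS components per (CRP tertile, GGT tertile) cell."""
    tc, tg = tertile(crp), tertile(ggt)
    sums = [[0.0] * 3 for _ in range(3)]
    counts = [[0] * 3 for _ in range(3)]
    for i in range(len(crp)):
        sums[tc[i]][tg[i]] += n_components[i]
        counts[tc[i]][tg[i]] += 1
    return [[sums[r][c] / counts[r][c] if counts[r][c] else None
             for c in range(3)] for r in range(3)]

# Hypothetical toy sample: one subject per (CRP, GGT) tertile cell
grid = mean_components_grid([1, 2, 3, 4, 5, 6, 7, 8, 9],
                            [1, 4, 7, 2, 5, 8, 3, 6, 9],
                            [0, 1, 2, 1, 2, 3, 2, 3, 4])
print(grid)
```

A synergistic pattern would show the highest mean in the top-CRP/top-GGT cell, beyond what the row and column trends alone predict; the study's significance test for this used an adjusted general linear model, not shown here.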
Association between hsCRP and GGT levels, and metabolic syndrome and HOMA-IR, within selected subgroups Next, to control for potential confounding of the associations with MetS and insulin resistance, the data were further stratified by age, alcohol consumption, uric acid, HMW adiponectin, and medication (i.e., antihypertensive, antilipidemic, and diabetic medication) (Table 5 ). The ORs for MetS and HOMA-IR ≥2.6 increased significantly in relation to hsCRP and GGT in almost all the subgroups.
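The HOMA-IR index and the ≥2.6 cutoff used throughout follow the standard homeostasis model assessment formula (Matthews et al.), which assumes fasting glucose in mg/dL and fasting insulin in μU/mL; a minimal helper:

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Example: FBG 100 mg/dL, IRI 10 uU/mL
value = homa_ir(100, 10)
print(f"HOMA-IR = {value:.2f}, insulin resistant (>=2.6): {value >= 2.6}")
```

If glucose is reported in mmol/L, the equivalent divisor is 22.5 rather than 405.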
Discussion In 1,919 community-dwelling subjects, we determined the prevalence rate of MetS, as defined by modified NCEP-ATPIII criteria [ 20 ], and examined the association between hsCRP and GGT, and MetS and its components. MetS was common, occurring in 17.2% of men and 15.5% of women. In both men and women, the prevalence rate of MetS increased significantly in relation to hsCRP and GGT, even after adjusting for age, smoking status, drinking status, uric acid, and eGFR. The OR of MetS increased dose-dependently with increasing tertiles of hsCRP and GGT. In addition, we demonstrated that there is an interaction between increased hsCRP and GGT. The ORs of MetS and HOMA-IR ≥2.6 were significantly increased in relation to hsCRP and GGT in almost all the subgroups stratified by age, alcohol consumption, uric acid, HMW adiponectin, and medication. To our knowledge, this is the first study to indicate these associations of CRP and GGT with MetS and insulin resistance in about 2,000 community-dwelling subjects. Systemic inflammation is closely associated with the pathogenesis of MetS. Several previous studies have demonstrated that elevated CRP was associated with increased odds of MetS after adjusting for potential confounding factors [ 23 - 26 ]. In a rural Chinese population, compared with subjects without components of MetS, those with 1, 2, 3, 4, and 5 components of MetS had ORs of 1.39, 1.08, 1.84, 2.65, and 1.21 for elevated CRP in men and 1.91, 2.06, 3.10, 4.06, and 6.01 in women, respectively [ 27 ]. In our study, higher GGT levels were also positively associated with MetS, independent of other confounders. Similar results have been reported in recent studies [ 28 , 29 ]. Nakanishi et al [ 28 ] demonstrated that serum GGT may be an important predictor for developing MetS in 2,957 metabolic syndrome-free men and 3,260 nondiabetic men aged 35-59 years. 
After adjustments for age, family history of diabetes, BMI, alcohol intake, smoking status, regular physical activity, and white blood cell count, increased serum GGT was related to the risk of developing MetS, even among individuals with normal GGT concentrations, a finding consistent with previous prospective reports on GGT. Among a total of 3,246 Korean adults, the number of MetS components, the prevalence of MetS, and insulin resistance (HOMA-IR) increased steadily as the quartile of serum GGT increased [ 30 ]. André et al also demonstrated that serum GGT is an important predictor for developing MetS in 1,659 men and 1,889 women without MetS at baseline [ 31 ]. Moreover, in a pooled logistic analysis with adjustments for age, alcohol intake, smoking status, physical activity, alanine aminotransferase, fasting insulin, and HOMA-IR, high baseline GGT concentrations predicted future development of MetS, defined by the IDF and AHA/NHLBI criteria, after 4 years of follow-up [ 32 ]. In a community-based cohort study of 9,148 Korean adults that included 1,056 men, the risk of developing MetS increased across the baseline GGT quartiles, independent of age, the time elapsed from visit 1 to visit 2, baseline MetS, uric acid, regular exercise, alcohol consumption, and smoking, and even after further updating GGT values during the follow-up [ 33 ]. We have also reported that higher serum GGT was significantly associated with MetS and its components in the same population and that this association was related to insulin resistance, independent of other confounding factors [ 34 ]. A unique point of the present study is that hsCRP and GGT were independently and synergistically associated with MetS, its components, and insulin resistance. The mechanisms by which hsCRP and GGT reflect the risk for MetS are not completely understood.
However, systemic inflammation is closely involved in the pathogenesis of MetS, and thus elevated hsCRP and GGT may both reflect inflammation, which impairs insulin signaling in the liver, muscle, and adipose tissues [ 35 ]. Fat accumulation in the liver or adipose tissues can induce inflammatory cytokines such as tumor necrosis factor-α, interleukin-6, and interleukin-8 [ 36 ]. These cytokines, produced by adipocytes, stimulate the hepatic synthesis of CRP, an acute-phase protein, and influence insulin resistance and lipid and glucose metabolism [ 37 ]. Moreover, high GGT is strongly associated with higher CRP levels [ 38 ], suggesting that this enzyme represents the expression of subclinical inflammation and has a role in cellular stress [ 39 ]. On the other hand, we have also reported that increased hsCRP and decreased high molecular weight (HMW) adiponectin are synergistically associated with the accumulation of metabolic disorders [ 40 ]; however, in the present study both hsCRP and GGT were associated with insulin resistance even in subgroups stratified by HMW adiponectin. Furthermore, it has been suggested that increased GGT levels may represent either an antioxidant (defensive) response to oxidative stress or a direct marker of oxidative stress, as GGT is involved directly in the generation of reactive oxygen species (ROS), especially in the presence of iron or other transition metals, inducing lipid peroxidation in human biological membranes [ 41 , 42 ]. There are some limitations to this study. First, our cross-sectional design does not allow causal relationships between CRP, GGT, and MetS to be established. Second, the classification of MetS, GGT, and hsCRP categories is based on a single blood assessment, which may introduce misclassification bias.
Third, we used BMI ≥25 to classify individuals with visceral obesity because waist circumference measurements were not available, which might have caused an under- or overestimation of the effect of visceral obesity on MetS [ 43 ]. In fact, the prevalence rate of MetS in women was higher than rates generally reported for Japanese populations [ 1 , 44 ]. Fourth, serum GGT levels vary among individuals with the same alcohol consumption, and the possible association of fatty liver with the presence of MetS and with elevated GGT could not be accurately explored. Nevertheless, we demonstrated that GGT was independently associated with MetS after adjustment for alcohol consumption, and also in the subgroups of drinkers and non-drinkers, suggesting that drinking status does not dramatically affect the usefulness of GGT as a biomarker for MetS risk. Fifth, the presence of viral hepatitis must be considered, but examinations for hepatitis B surface antigen and antibody to hepatitis C were not performed. In addition, the rate of antilipidemic medication use among men with MetS was rather low; we cannot explain this finding. Therefore, the demographics and referral source may limit generalizability.
Conclusion In conclusion, the present study showed that hsCRP and GGT levels are strongly associated with MetS and its components in the general population. The underlying mechanism behind this relationship is unclear, but it appears to be independent of traditional cardiovascular risk factors such as age, smoking status, alcohol consumption, uric acid, and renal function. Prospective population-based studies of community-dwelling healthy persons are needed to investigate the mechanisms underlying this association and to determine whether interventions that decrease hsCRP and GGT in adults [ 45 ], such as effective lifestyle modifications or medication (e.g., antihypertensive, antilipidemic, and diabetic medication), will decrease the risk of MetS.
Background Metabolic syndrome (MetS) is associated with an increased risk of major cardiovascular events. Increased high-sensitivity C-reactive protein (hsCRP) levels are associated with MetS and its components. Changes in gamma-glutamyl transferase (GGT) levels in response to oxidative stress are also associated with MetS, and the levels could be modulated by hsCRP. Methods From a single community, we recruited 822 men (mean age, 61 ± 14 years) and 1,097 women (63 ± 12 years) during their annual health examination. We investigated whether increased hsCRP and GGT levels are synergistically associated with MetS and with insulin resistance evaluated by the homeostasis model assessment of insulin resistance (HOMA-IR). Results Of these subjects, 141 men (17.2%) and 170 women (15.5%) had MetS. Participants with MetS had higher hsCRP and GGT levels than those without MetS in both genders, and HOMA-IR increased significantly in correlation with increases in hsCRP and GGT. In men, the adjusted odds ratios (95% confidence interval) for MetS across tertiles of hsCRP and GGT were 1.00, 1.69 (1.01-2.80), and 2.13 (1.29-3.52), and 1.00, 3.26 (1.84-5.78), and 6.11 (3.30-11.3), respectively. In women, the respective corresponding values were 1.00, 1.54 (0.92-2.60), and 3.08 (1.88-5.06), and 1.00, 1.70 (1.04-2.79), and 2.67 (1.66-4.30). The interaction between increased hsCRP and GGT was a significant and independent determinant of MetS and insulin resistance in both genders. Conclusions These results suggest that higher CRP and GGT levels are synergistically associated with MetS and insulin resistance, independently of other confounding factors, in the general population.
Competing interests The authors declare that they have no competing interests. Authors' contributions RK, YT, and KK participated in the design of the study, performed the statistical analysis and drafted the manuscript. NO, TaK, and ToK contributed to acquisition of data and its interpretation. ST and MA contributed to conception and design of the statistical analysis. TM conceived of the study, participated in its design, coordination and helped to draft the manuscript. All authors read and approved the manuscript.
Acknowledgements This work was supported in part by a grant-in-aid for Scientific Research from the Foundation for Development of Community (2009).
CC BY
Cardiovasc Diabetol. 2010 Dec 9; 9:87
PMC3014886
21143934
Introduction Mitral annular disjunction consists of a perceptible separation between the left atrial wall-mitral valve junction and the top of the left ventricular wall (Figures 1 , 2 and 3 ). The abnormality was originally described more than 20 years ago by Hutchins et al [ 1 ], who found a strong association between floppy mitral valve and mitral annular disjunction. They further suggested that disjunction of the mitral annulus fibrosus could play a role in the development of the pathological features of myxomatous valve disease through the mechanical stress incited by the excessive mobility of the mitral apparatus [ 1 ]. The abnormality was then largely forgotten until a Canadian surgical group recently highlighted the relevance of its recognition prior to mitral valve repair [ 2 ]. In these patients, a modification of the surgical technique seems necessary to avoid prosthetic valve replacement and to guarantee an optimal and long-lasting result of the repair. Aside from these surgical considerations, mitral annular disjunction has received little attention, and the clinical and transthoracic echocardiographic characteristics of these patients are largely unknown. At our echocardiography laboratory, we have long been confronted with the recognition of this structural abnormality, which we have empirically associated with distinct clinical features, namely a high incidence of arrhythmias. The aims of our study were to determine the prevalence of echocardiographically recognized mitral annular disjunction in patients with myxomatous valve disease, and to compare the clinical profile and echocardiographic features of patients with and without this abnormality.
Methods Study population For the purpose of this study, we reassessed the clinical and echocardiographic data from all patients with myxomatous mitral valve disease who underwent a transthoracic echocardiographic examination in our laboratory between July 2003 and September 2006. Myxomatous mitral valve was defined as the presence of excess leaflet tissue and leaflet thickening greater than 5 mm, resulting in a prolapse greater than 2 mm into the left atrium on the parasternal long-axis view [ 3 ]. Overall, 38 patients were included and there were no exclusion criteria. Echocardiographic examination Comprehensive two-dimensional Doppler echocardiography All patients underwent a complete transthoracic 2D, M-mode, and Doppler examination using commercially available systems (Powervision 7000, Toshiba; Acuson Sequoia 320, Siemens; Vivid 7, General Electric). Image acquisitions and measurements were carried out by senior echocardiographers in accordance with the European Association of Echocardiography recommendations [ 4 ]. The left ventricle was evaluated using left ventricular end-diastolic and end-systolic diameters (LVEDD, LVESD), fractional shortening (FS), LV end-diastolic and end-systolic volumes (LVEDV, LVESV), and ejection fraction (EF) from the modified biplane Simpson's method. Left atrial (LA) size was measured by M-mode. Pulmonary artery systolic pressure (PASP) was estimated from the tricuspid regurgitation jet peak velocity according to the modified Bernoulli equation. Mitral regurgitation (MR) severity was evaluated by colour Doppler combined with the width of the vena contracta and the calculation of the effective regurgitant orifice area by the proximal flow convergence (PISA) method whenever feasible. MR severity was graded according to the European Association of Echocardiography recommendations [ 5 ].
Detection and measurement of mitral annular disjunction and annular diameters The length of annular disjunction was measured from the left atrial wall-mitral valve posterior leaflet junction to the top of the LV posterior wall during end-systole (Figures 1 and 3 ). Mitral annular function was evaluated by measuring the mitral annular diameter during end-systole and end-diastole, on a parasternal long-axis view. The difference between these two measurements was considered positive whenever the end-systolic diameter was smaller than the end-diastolic diameter, as is usually seen in normal mitral valve kinetics. 24-Hour Holter monitoring In order to evaluate the arrhythmic profile, a subset of 21 patients not submitted to mitral valve surgery was further studied with 24-hour Holter monitoring. The frequency of ventricular tachycardia was quantified as the total number of beats occurring in ventricular tachycardia. Statistical analysis All data presented are shown as mean ± standard deviation or absolute number (percentage). To compare the differences between groups, we used Student's t test for continuous data and the chi-square test or Fisher's exact test for categorical data, as appropriate. The Mann-Whitney test was used to compare differences between continuous non-parametric variables. The ANOVA method was used to compare differences between more than two groups. Receiver Operating Characteristic (ROC) curve analysis was performed to determine the disjunction length that most accurately predicted the occurrence of non-sustained ventricular tachycardia. Inter- and intra-observer correlations were evaluated by Pearson correlation coefficient analysis. Statistical significance was accepted for a two-tailed P < 0.05. SPSS for Windows version 13 (SPSS Inc, Chicago, Ill) and MedCalc for Windows version 9.3.8.0 (MedCalc Software, Mariakerke, Belgium) were used to perform the statistical analyses.
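The PASP estimate mentioned above follows the modified Bernoulli equation, where the pressure gradient across the tricuspid valve is 4v² (v = TR jet peak velocity in m/s). A minimal sketch, noting that adding an assumed right atrial pressure (RAP) to obtain PASP is conventional practice and the default of 10 mmHg here is illustrative, not a value stated in the paper:

```python
def pasp_from_tr_jet(tr_peak_velocity_m_s: float,
                     ra_pressure_mm_hg: float = 10.0) -> float:
    """Modified Bernoulli estimate of pulmonary artery systolic pressure:
    PASP ~= 4 * v^2 + RAP (mmHg), assuming no pulmonic stenosis."""
    return 4.0 * tr_peak_velocity_m_s ** 2 + ra_pressure_mm_hg

# TR jet of 3.0 m/s with an assumed RAP of 10 mmHg: 4 * 9 + 10 = 46 mmHg
print(pasp_from_tr_jet(3.0))
```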
Results Baseline characteristics Clinical and echocardiographic characteristics of the population are shown in tables 1 and 2 , respectively. The majority of patients were symptomatic, with a mean NYHA class of 1.3 ± 0.9. Three patients (8%) had a NYHA class greater than 2. Every patient had some degree of mitral valve regurgitation, which was severe in 25 (66%). Eleven patients had already undergone mitral valve surgery (valve repair in four, valve replacement in five, and valve repair followed by valve replacement in two). Mitral annular disjunction was seen in 21 (55%) patients and on average measured 7.4 ± 8.7 mm. Inter- and intra-observer correlations were 0.97 and 0.94, respectively. The most severely affected patient had a disjunction length of 30 mm. This particular patient had involvement of the entire annular circumference, and this feature was well documented by transoesophageal echocardiography (Additional file 1 ). Patients with mitral annular disjunction were more often females (62% vs 38%; p = 0.047). Chest pain tended to be more prevalent among patients with than without mitral annular disjunction (43% vs 12%; p = 0.07). There were no differences between groups regarding NYHA functional class, mitral regurgitation severity, or left ventricular ejection fraction (Tables 1 and 2 ). The transthoracic features of this abnormality are shown in Additional files 2 , 3 and 4 . Mitral annulus function We found an association between the presence of disjunction and mitral annulus dysfunction (Table 2 ). In the disjunction group, we observed a paradoxical increase of the mitral annulus diameter during systole. The diastolic-to-systolic mitral annulus diameter difference was -4.6 ± 4.7 mm in this group vs 3.4 ± 1.1 mm in the group without mitral annular disjunction (p < 0.001) (Figure 4 and Additional file 5 ). Arrhythmic profile We observed a high prevalence of atrial fibrillation in the study population.
Six patients (16%) were in permanent atrial fibrillation, and another 4 patients (11%) had at least one episode of paroxysmal atrial fibrillation. There were no differences in atrial fibrillation frequency between the groups with and without annular disjunction (Table 1 ). A subset of 21 patients not submitted to mitral valve surgery was further studied with 24-hour Holter monitoring (Table 3 ). There was no record of sustained ventricular arrhythmia during Holter monitoring. The group with annular disjunction had an increased frequency of ventricular extra beats and non-sustained ventricular tachycardia (NSVT), although this relation was not statistically significant. Nevertheless, we found that the wider the magnitude of the disjunction, the higher the incidence of NSVT (Figure 5 ). A disjunction greater than 8.5 mm was a reasonable criterion to predict the risk of NSVT, with a sensitivity of 67% and a specificity of 83% (odds ratio = 10; 95% CI: 1.28-78.1).
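The sensitivity and specificity of a cutoff such as "disjunction > 8.5 mm" for predicting NSVT come directly from the 2×2 classification of patients. A minimal sketch of this computation, using hypothetical illustrative values rather than the study's raw measurements (the paper's ROC analysis was performed in MedCalc):

```python
def sens_spec_at_cutoff(disjunction_mm, has_nsvt, cutoff):
    """Sensitivity/specificity of 'disjunction > cutoff' for predicting NSVT."""
    tp = sum(1 for d, y in zip(disjunction_mm, has_nsvt) if d > cutoff and y)
    fn = sum(1 for d, y in zip(disjunction_mm, has_nsvt) if d <= cutoff and y)
    tn = sum(1 for d, y in zip(disjunction_mm, has_nsvt) if d <= cutoff and not y)
    fp = sum(1 for d, y in zip(disjunction_mm, has_nsvt) if d > cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical disjunction lengths (mm) and NSVT status (1 = NSVT on Holter)
lengths = [0, 0, 3, 5, 6, 9, 10, 12, 14, 2, 7, 11]
nsvt    = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1]
sens, spec = sens_spec_at_cutoff(lengths, nsvt, 8.5)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Sweeping the cutoff over all observed lengths and plotting sensitivity against 1 - specificity would reproduce the ROC curve from which the 8.5 mm criterion was chosen.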
Discussion Mitral annular disjunction has been scarcely mentioned in the literature. In a paper based on a review of 900 random histological mitral annulus examinations at necropsy, Hutchins et al [ 1 ] described a wide range of normal anatomic variation for this region. These authors observed mitral annular disjunction in 65 (7%) hearts, 23 of them in association with floppy mitral valve. The abnormality was also seen in association with isolated calcified mitral annulus, and in otherwise normal hearts. Because patients with isolated annular disjunction were younger than those with associated floppy mitral valve, it was suggested that the disjunction could play a role in the pathogenesis of myxomatous valve disease, through the increased mechanical stress induced by the excessive mobility of the mitral leaflets [ 1 ]. In our series, the prevalence of annular disjunction in the setting of myxomatous valve disease was notably high (55%). Although impressive, this proportion is nevertheless smaller than the 92% prevalence found by Hutchins et al [ 1 ], and the 98% prevalence found by Eriksson et al [ 2 ] with transoesophageal echocardiography in patients with advanced forms of myxomatous mitral valve disease. Both the reduced sensitivity of a transthoracic examination and the use of different diagnostic criteria may account for these discrepancies. Real-time three-dimensional (3D) and 3D reconstruction transoesophageal echocardiography afford better accuracy in patients with complex MV pathology when compared with 2D transoesophageal echocardiography [ 6 , 7 ]. The complex structure of the mitral annulus makes it particularly suited to 3D assessment. The MV may be viewed en face from either the atrial or the ventricular perspective, permitting a full-length view of the mitral annulus, in contrast with the segmental view offered by 2D assessment. This feature can enhance the sensitivity for the detection of mitral annular disjunction.
However, the image quality of transthoracic echocardiography is sometimes poor, weakening this advantage. To the best of our knowledge, this is the first study in which the recognition of mitral annular disjunction is described by transthoracic echocardiography. On intraoperative transoesophageal echocardiography, Eriksson et al described a significantly higher rate of mitral annular disjunction in patients with advanced versus mild or moderate mitral valve degeneration (98% vs 9%) [ 2 ]. In our series, there was no relation between annular disjunction and other specific echocardiographic features, namely the degree of mitral valve regurgitation, atrial or ventricular enlargement, and ventricular function. Mitral annulus contractility contributes significantly to mitral valve function. Shortening of the annulus diameter during systole facilitates coaptation of the mitral leaflets [ 8 - 10 ]. Impairment of mitral annulus function is known to be associated with the mitral regurgitation of myxomatous mitral valve disease, and has recently been implicated as a cause of valve repair failure [ 10 , 11 ]. In the presence of annular disjunction, the valve insertion in the "atrial wall" is responsible for an increased diameter of the mitral valve circumference during systole, and hence impaired annular function due to a coaptation deficit. Underestimating this abnormality during mitral valve repair can result in recurrent mitral regurgitation, since paradoxical systolic enlargement will persist [ 2 ]. The risk of sudden death is increased in patients with mitral regurgitation due to myxomatous mitral valve disease. Prior to surgery, the incidence of sudden death is 1.8% per year, accounting for one-fourth of the causes of death. Patients with severe symptoms, atrial fibrillation, and reduced LV systolic function are at higher risk.
However, even asymptomatic patients in sinus rhythm with normal LV function are not exempt from risk, with sudden death occurring at an incidence of 0.8% per year [ 12 - 15 ]. The increased frequency of ventricular arrhythmias in myxomatous mitral valve disease may result from abnormal excessive traction on the papillary muscles, generated by the parachuting closure of the mitral valve [ 16 ]. As proposed by Hutchins et al, annular disjunction may, by itself, increase the tension over the mitral apparatus [ 1 ]. This mechanism could hypothetically predispose to ventricular arrhythmias. In our study, despite the absence of sustained ventricular arrhythmias, there was an increased frequency of ventricular extra beats and of non-sustained ventricular tachycardia in patients with greater lengths of annular disjunction. The limitations of this study are its retrospective nature, the rather small population, and the performance of 24-hour Holter monitoring in only a limited subset of patients. Larger, prospective studies are needed to validate our findings. Nevertheless, the novelty of the subject and the implications for both surgical management and arrhythmic events remain of paramount importance.
Conclusions Mitral annular disjunction is a common finding in patients with myxomatous mitral valve disease, easily detected and measured by transthoracic echocardiography. We found an association between this abnormality and several clinical features, namely chest pain, annular contractile dysfunction, and, in cases of wider disjunction, non-sustained ventricular tachycardia. The abnormality has also proved relevant for the success of mitral valve repair. Further and larger studies are needed to completely understand the clinical implications of annular disjunction, particularly its association with malignant arrhythmic events.
Background Mitral annular disjunction (MAD) consists of an altered spatial relation between the left atrial wall, the attachment of the mitral leaflets, and the top of the left ventricular (LV) free wall, manifested as a wide separation between the atrial wall-mitral valve junction and the top of the LV free wall. Originally described in association with myxomatous mitral valve disease, this abnormality was recently revisited by a surgical group that pointed out its relevance for mitral valve reparability. The aims of this study were to investigate the echocardiographic prevalence of mitral annular disjunction in patients with myxomatous mitral valve disease, and to characterize the clinical profile and echocardiographic features of these patients. Methods We evaluated 38 patients with myxomatous mitral valve disease (mean age 57 ± 15 years; 18 females) and used standard transthoracic echocardiography for measuring the MAD. Mitral annular function, assessed by end-diastolic and end-systolic annular diameters, was compared between patients with and without MAD. We compared the incidence of arrhythmias in a subset of 21 patients studied with 24-hour Holter monitoring. Results MAD was present in 21 (55%) patients (mean length: 7.4 ± 8.7 mm), and was more common in women (61% vs 38% in men; p = 0.047). MAD patients more frequently presented chest pain (43% vs 12% in the absence of MAD; p = 0.07). Mitral annular function was significantly impaired in patients with MAD, in whom the mitral annular diameter was paradoxically larger in systole than in diastole: the diastolic-to-systolic mitral annular diameter difference was -4.6 ± 4.7 mm in these patients vs 3.4 ± 1.1 mm in those without MAD (p < 0.001). The severity of MAD significantly correlated with the occurrence of non-sustained ventricular tachycardia (NSVT) on Holter monitoring: MAD >8.5 mm was a strong predictor of NSVT (area under the ROC curve = 0.74 (95% CI, 0.5-0.9); sensitivity 67%, specificity 83%).
There were no differences between groups regarding functional class, severity of mitral regurgitation, LV volumes, and LV systolic function. Conclusions MAD is a common finding in patients with myxomatous mitral valve disease, easily recognizable by transthoracic echocardiography. It is more prevalent in women and often associated with chest pain. MAD significantly disturbs mitral annular function and, when severe, predicts the occurrence of NSVT.
Abbreviations MAD: Mitral annular disjunction; LV: Left ventricular; NSVT: Non-sustained ventricular tachycardia; Competing interests The authors declare that they have no competing interests. Authors' contributions PC and MJA were responsible for study design, analysis and interpretation of data, manuscript drafting and critical revision. CA contributed to analysis and revised the manuscript for important intellectual content. RR contributed to study design. RG and JAS have supervised and commented the manuscript. All authors read and approved the final manuscript. Supplementary Material
Cardiovasc Ultrasound. 2010 Dec 9; 8:53
PMC3014887
21126365
Background Several studies have reported that laser printers are significant sources of ultrafine particles [ 1 - 6 ]. Workers with long-term exposure to toner dust showed a significantly higher prevalence of radiographic lung abnormalities in a cross-sectional study [ 7 ]. A significantly higher prevalence of temporary coughing and sputum production has also been reported [ 8 ]. In general, nanoparticles (NP) will play a fundamental role in the future, and risk assessment seems to be a relevant issue [ 9 ]. Office printers have been found to emit carbon nanoparticles (CNP) to a variable extent [ 5 ]. Granulomatous pneumonitis and mediastinal lymphadenopathy have been reported in a case of photocopier toner dust exposure [ 10 ]. Inhaled 99m technetium-labeled CNP can be transported within the human blood circulation and deposited in other organs [ 11 ]. We present a female open-plan office worker with toner dust exposure and CNP deposits in the peritoneum.
Case discussion Particle emissions by office printers differ between printer types and between printers of the same type, and increase significantly during working times [ 5 ]. The average diameter of particles emitted from different printers was found to be between 40 and 76 nm, in good agreement with our results. That study examined 62 different printers, but the type used in our case was not included [ 5 ]. No systematic morphological investigations exist regarding the possible respiratory uptake of CNP in human office workers. A study of the respiratory health of workers handling printing toners showed a higher prevalence rate of thoracic radiographic abnormalities and a strong tendency towards a decline in lung function in long-term exposed persons [ 8 ]. In a case of granulomatous pneumonitis and mediastinal lymphadenopathy after exposure to photocopier toner dust containing copper, this metal was detected in the tissue investigated by SEM and EDX [ 10 ]. Metal oxides were detectable on the surface of toner particles in our case as well, but deposition in tissue was not seen. In cases of anthracosilicosis, dust deposits in the liver have been reported [ 13 ]. This demonstrates that transport of inhaled dust particles via the blood stream, with deposition in other organs, can occur in humans [ 11 , 13 ]. Ultrafine carbon particles cause a strong down-regulating effect on the cytochrome P450 1B1 protein in monocytes. These data suggest that the induced reduction of gene expression may interfere with the activation and/or detoxification capabilities for inhaled toxic particles. In primary bronchial epithelial cells this effect showed remarkable inter-individual differences, which emphasizes the role of polymorphisms [ 14 ]. In the case reported here, there were no obvious respiratory symptoms.
Clinical studies have revealed negative effects of toner exposure on respiratory health [ 7 , 8 , 10 ]; further studies concerning morphology, genetics and clinical consequences are therefore needed.
Conclusion We have shown that workers with toner dust exposure from laser printers can develop submesothelial deposition of CNP in the peritoneum. Transport of CNP via lymphatic and blood vessels after inhalation in the lungs has to be assumed. Impact of toner dust exposure on the respiratory health of office workers, as suspected in other studies, has to be evaluated further.
Inhalation of carbon nanoparticles (CNP) from toner dust has been shown to have an impact on the respiratory health of exposed persons. Office printers are known emitters of CNP. We report on a female open-plan office worker who developed weight loss and diarrhoea. Laparoscopy performed for suspected endometriosis surprisingly revealed black spots within the peritoneum. Submesothelial aggregates of CNP with a diameter of 31-67 nm were found in these tissue specimens by scanning and transmission electron microscopy. Colon biopsies showed inflammatory bowel disease with typical signs of Crohn disease, but no dust deposits. Transport of CNP via lymphatic and blood vessels after inhalation in the lungs has to be assumed. In this case respiratory symptoms were not reported; therefore, no lung function tests were done. We have shown that workers with toner dust exposure from laser printers can develop submesothelial deposition of CNP in the peritoneum. The impact of toner dust exposure on the respiratory health of office workers, as suspected in other studies, has to be evaluated further.
Case presentation A 33-year-old female had been suffering from intermittently appearing abdominal pain, weight loss and diarrhoea for three months. Biopsies taken during two colonoscopies revealed no changes according to the initial interpretation at another institution. Her gynecologist therefore suspected endometriosis as a possible cause and admitted her to hospital for laparoscopy. Instead of the suspected endometriosis, black spots within the peritoneum were seen and biopsies were taken for histological evaluation. Further history revealed that the patient was working full-time as an employee in an open-plan office and had been exposed to a laser printer on her personal desk for three years. Up to 70 sheets were printed each working day. Eight laser printers of the same type were installed at other workplaces in the same office. Respiratory symptoms were not reported by the patient; therefore, lung function tests were not done. Peritoneal biopsies were fixed in 3.5% buffered formaldehyde and stained conventionally (haematoxylin-eosin, Elastica van Gieson, Prussian blue) for light microscopy (LM). For further analysis of the composition of the black spots, formalin-fixed paraffin-embedded tissue was cut into slices of 10 μm thickness with a microtome (Microm, Walldorf, Germany), mounted on polyvinylchloride foil and examined by scanning electron microscopy (SEM; ESEM Quanta 400 FEG, FEI, The Netherlands) and energy dispersive X-ray analysis (EDX; EDAX EDS Genesis 4000, Ametek, Germany) as previously described [ 12 ]. For analysis of cellular reactions, transmission electron microscopy (TEM; Zeiss EM 901A, Oberkochem, Germany) was done using re-embedded tissue after adequate processing. Toner material from the office printer was taken for comparison and examined by SEM and EDX. LM of the peritoneal tissue revealed submesothelial deposits of black material with foreign body reaction (Figure 1 ). 
SEM showed submesothelial aggregates of granular material (Figure 2A ) consisting of NP with a particle size ranging from 31 to 67 nm (Figure 2B ). By EDX, no elements other than carbon were found in these aggregates. TEM revealed an inflammatory reaction as in LM. In macrophages, phagolysosomes of variable diameters containing NP were seen (Figure 3 ). The NP showed an appearance similar to that in SEM. The toner material was composed of round particles with a diameter of 5-9 μm, with some small elevations on the surface consisting of metal oxides. Because no endometriosis was found, the colon biopsies were reinvestigated by one of the authors. Histological alterations typical for Crohn disease were found and dust deposits were not seen. Consent Written informed consent was obtained from the patient for publication of this case report including images. A copy of the written consent is available for the Editor-in-Chief of this journal. List of abbreviations CNP: carbon nanoparticles; EDX: energy dispersive x-ray analysis; LM: light microscopy; NP: nanoparticles; SEM: scanning electron microscopy; TEM: transmission electron microscopy Competing interests The authors declare that they have no competing interests. Authors' contributions DT did LM of the peritoneum (second opinion) and TEM, supervised SEM and EDX and wrote the manuscript. SB performed SEM and EDX, SP has done LM of the peritoneum and colon and initiated further investigations. OA discussed the clinical background and revised the manuscript. All authors read and approved the final version.
Acknowledgements We thank Mrs. Gabriele Ladwig for technical preparation of the specimens for TEM and assistance in taking images, and the patient for giving further information about her exposure.
CC BY
Diagn Pathol. 2010 Dec 2; 5:77
PMC3014888
21143902
Introduction Uterine leiomyoma, the most common neoplasm of the female genital tract, probably occurs in the majority of women by age 50 and is responsible for significant morbidity in patients [ 1 - 3 ]. Symptoms include pelvic pressure, pelvic pain, abnormal uterine bleeding, infertility, and miscarriage [ 4 , 5 ]. Uterine leiomyoma represents a major indication for hysterectomy among women in the United States, accounting for one-third of about 600,000 hysterectomy procedures performed annually [ 6 , 7 ]. Not only is hysterectomy associated with morbidity and mortality, but it also has a huge economic impact on healthcare systems [ 1 ]. The scientific literature contains a large body of information concerning the epidemiology, hormonal influence, genetics, and molecular alterations in uterine leiomyoma. Risk factors include early menarche, nulliparity, obesity, African-American ethnicity, and tamoxifen use [ 8 - 13 ]. Many of these factors are associated with increased levels of estrogen and progesterone. Estrogen and progesterone act through the mediation of estrogen receptor and progesterone receptor, respectively. The majority of the literature reveals higher concentrations of estrogen and progesterone receptors in leiomyoma than in normal myometrium [ 3 ]. Leiomyoma of the uterus also overexpresses various growth factors including transforming growth factor, fibroblastic growth factor, epidermal growth factor receptor, and platelet-derived growth factor [ 3 ]. An inherent abnormality of the myometrium has also been implicated in the development of leiomyoma, since the myometrium of a uterus harboring leiomyoma shows a significantly higher level of estrogen receptor than one without tumor [ 14 ]. Leiomyoma of the uterus has been shown to be monoclonal by studies using X-linked glucose 6-phosphate dehydrogenase isozymes [ 15 ], X-linked androgen receptor [ 16 , 17 ], and X-linked phosphoglycerokinase [ 18 ]. 
Cytogenetic studies have identified several chromosomal alterations, including t(12;14), del(7q), 6p21, and trisomy 12 [ 3 ]. However, it is unclear whether these genetic alterations occur before the genesis of leiomyoma or whether they are secondary events. Despite numerous studies concerning the molecular and genetic changes in uterine leiomyoma, the mechanisms of its development remain unknown. Further work is needed to elucidate the pathogenesis, which would lead to the discovery of effective prevention and treatment of the tumor. Selenium, an essential trace element, has been shown to have an anti-cancer effect. Many reports have described a relationship between insufficient selenium intake and increased risk of cancer [ 19 - 21 ]. The anti-cancer action of selenium is thought to be mediated by selenium-binding protein 1 (SELENBP1), a 56 kDa intracellular protein that binds covalently to selenium. The gene of SELENBP1 is located at chromosome 1q21-22 [ 22 ]. The expression of SELENBP1 has been shown to be decreased in several tumors including cancers of the prostate, lungs, colon, and ovary [ 23 - 26 ]. However, little information exists concerning the role of SELENBP1 in tumorigenesis of uterine leiomyoma. In this study, we examined the expression of SELENBP1 in uterine leiomyoma and normal myometrium.
Materials and methods The study consisted of 20 consecutive hysterectomy specimens from operations performed for leiomyoma at our institution in July 2004. We recorded the number and size of leiomyomas as well as the endometrial pattern in each patient. Using a monoclonal antibody against human SELENBP1 (Medical Biological Laboratory International Corporation, Watertown, MA), we evaluated the expression of SELENBP1 by Western Blot and immunohistochemistry. For Western Blot, a 100 mg sample was taken from each leiomyoma of an unfixed uterine specimen. We selected areas of leiomyoma without degenerative changes. Also sampled was 100 mg of tissue from normal myometrium in the same uterus. The sample was immediately placed in 1 ml radioimmunoprecipitation assay buffer containing 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 1 mM PMSF, 1 mM EDTA, 5 μg/ml aprotinin, 5 μg/ml leupeptin, 1 mM Na 3 VO 4 , and 5 mM NaF. After being cut into smaller pieces with scissors, the sample was homogenized on ice with a motor-driven Tissue Tearor 5 times, 10 seconds each. The homogenate was placed on an orbital shaker at 4°C for 30 minutes and then centrifuged at 4°C at 14,000 × g for 15 minutes. The supernatant was collected in a fresh tube and stored at -80°C for later use. Protein concentration was determined by a Bradford protein assay (Bio-Rad, Hercules, CA), using bovine serum albumin as standard. We heated each sample at 95°C for 5 minutes after mixing it with Laemmli sample buffer containing 62.5 mM Tris-HCl (pH 6.8), 20% glycerol, 2% SDS, 0.01% bromophenol blue, and 5% β-mercaptoethanol. An equal amount of protein (25 μg) from every sample was separated on a 12% sodium dodecyl sulfate polyacrylamide (Tris/glycine) gel and transferred to a polyvinylidene difluoride membrane. 
After being stained with Ponceau S, the membrane was blocked in phosphate-buffered saline containing 5% nonfat dry milk for 30 minutes, and incubated with antibodies against SELENBP1 at a dilution of 1:400 and β-actin (Santa Cruz Biotechnology, Santa Cruz, CA) at a dilution of 1:200 for 1 hour. After three washes in phosphate-buffered saline with 0.1% Tween 20, the membranes were incubated with horseradish peroxidase-conjugated anti-mouse immunoglobulin G (Medical Biological Laboratory International Corporation, Watertown, MA) at a dilution of 1:10,000 for 1 hour, followed by enhanced chemiluminescence detection (Thermo Scientific, Waltham, MA) and exposure to an X-ray film. Relative abundance of protein was determined by quantitative densitometry using the National Institutes of Health image program (available at http://rsb.info.nih.gov/nih-image/ ). Molecular weights of proteins were determined by extrapolation from the relative mobility of proteins of known molecular weight. All Western Blot densitometry data on SELENBP1 were normalized to β-actin (a housekeeping protein). The relative level of SELENBP1 was then normalized by the mean level of SELENBP1 in normal myometrium. To verify the result of Western Blot analysis, we performed immunohistochemistry on archival tissue. Five-micron sections were taken from paraffin blocks of leiomyoma and normal myometrium of the same uterine specimens used for Western Blot analysis. As in Western Blot, we selected sections of leiomyoma that did not show degenerative changes such as infarction or hyalinization. The sections were routinely deparaffinized and hydrated through a gradient of ethanol. Antigen retrieval was achieved by incubating the sections with citrate buffer (pH 6.1) at 95°C. Sections were immersed in 3% H 2 O 2 at room temperature for 10 minutes to block any endogenous peroxidase activity. They were incubated with SELENBP1 antibody at a dilution of 1:200 at room temperature for 35 minutes. 
The sections were then incubated with a secondary antibody previously conjugated to horseradish peroxidase-labeled polymer at room temperature for 35 minutes. After the sections were incubated with diaminobenzidine and H 2 O 2 at room temperature for 8 minutes, they were counterstained with hematoxylin and cover-slipped. In each run of immunohistochemistry, we included several controls: (1) a negative reagent control (Medical Biological Laboratory International Corporation, Watertown, MA), used as a substitute for the primary antibody, (2) a positive tissue control with a section of normal fallopian tube known to be positive for SELENBP1, and (3) a negative tissue control with a section of high-grade ovarian serous carcinoma known to be negative for SELENBP1. The specificity of immunostaining was confirmed by a positive stain in the mucosal epithelial cells of the fallopian tube but a negative stain in high-grade ovarian serous carcinoma and on the sections that were stained with the negative reagent control. The immunostains were scored using a 4-point scale (0-3+) system, based on the number of positive cells and the intensity of staining; no staining was recorded as "0", weak staining in fewer than one-third of cells as "1", moderate staining in one-third to two-thirds of cells as "2", and strong staining in more than two-thirds of cells as "3". The immunostaining scores of SELENBP1 in leiomyomas were correlated with the vascular count on hematoxylin and eosin stained sections of leiomyomas originating from the same paraffin blocks used for the SELENBP1 immunostain. The vascular count was defined as the number of blood vessels in 5 consecutive low-power microscopic fields, because each section of leiomyoma contained a minimum of 5 low-power fields for evaluation. To assess the proliferation index relative to SELENBP1 expression, the proliferation index was determined in sections of leiomyomas using the MIB-1 antibody to the Ki67 antigen (Dako, Carpinteria, CA). 
When evaluating the results of Ki67 immunostaining, we chose the tumor area with the highest density of positive nuclear staining. A minimum of 200 cells on each section of leiomyoma were analyzed. The proliferation index, represented by the percentage of positive nuclei, was calculated by dividing the number of positive stained cells by the total number of cells in the areas examined. Pearson correlation coefficient analysis was used to evaluate the relationship between vascular count and SELENBP1 immunostaining score as well as proliferation index and SELENBP1 immunostaining score. When the p value was smaller than 0.05, the relationship was considered significant. Wilcoxon Matched-Pair Signed-Ranks test was used to compare the expression of SELENBP1 between leiomyoma and normal myometrium. For cases showing more than one leiomyoma, the average SELENBP1 level of multiple tumors in the same patient was used to compare with the SELENBP1 level of normal myometrium. In order to evaluate whether patient's age was related to the level of SELENBP1, we divided patients into two arbitrary age groups: <45 years and ≥45 years. The abundance of SELENBP1 in the two age groups was compared with Mann-Whitney test. To determine whether the level of SELENBP1 was related to the size of leiomyoma, tumors were divided into four arbitrary size groups: ≤2 cm, 2.1-5 cm, 5.1-8 cm, and ≥8 cm. The levels of SELENBP1 in the four size groups were compared by Analysis of Variance. Analysis of Variance was also used to compare the abundance of SELENBP1 among patients that showed proliferative, secretory, and atrophic endometrium. The difference was considered significant when the p value was smaller than 0.05.
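The two quantitation steps described in this Methods section, interpolating protein concentration from a Bradford BSA standard curve and normalizing SELENBP1 densitometry to β-actin and then to the mean of normal myometrium, can be sketched in a few lines. This is a minimal illustration; all numeric values below are made-up assumptions, not the study's data:

```python
def linear_fit(x, y):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def protein_concentration(conc_standards, abs_standards, abs_sample):
    """Interpolate a sample's concentration from a Bradford standard
    curve fitted to BSA standards (absorbance vs. concentration)."""
    slope, intercept = linear_fit(conc_standards, abs_standards)
    return (abs_sample - intercept) / slope

def relative_selenbp1(selenbp1, actin, is_myometrium):
    """Normalize SELENBP1 band densities to beta-actin, then express
    each sample relative to the mean normalized level in normal
    myometrium (1.0 = average normal myometrium)."""
    ratios = [s / a for s, a in zip(selenbp1, actin)]
    myo = [r for r, m in zip(ratios, is_myometrium) if m]
    baseline = sum(myo) / len(myo)
    return [r / baseline for r in ratios]

# Hypothetical BSA standards (ug/ml) and A595 readings on a perfect line
conc = protein_concentration([0, 250, 500, 750, 1000],
                             [0.0, 0.2, 0.4, 0.6, 0.8], 0.5)
# Hypothetical densitometry: two myometrium and two leiomyoma samples
levels = relative_selenbp1(selenbp1=[400.0, 440.0, 100.0, 110.0],
                           actin=[200.0, 220.0, 200.0, 220.0],
                           is_myometrium=[True, True, False, False])
print(round(conc, 1), [round(v, 2) for v in levels])
```

With these invented inputs, the sample interpolates to 625 μg/ml and the leiomyoma samples come out four-fold lower than the myometrium baseline, the kind of contrast the normalization is designed to expose.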
Results Patient characteristics are shown in Table 1 . The patient age ranged from 34 to 58 years, with a mean of 44.3 years. There were 8 patients with proliferative endometrium, 7 with secretory endometrium, and 5 with atrophic endometrium. Two patients displayed solitary leiomyoma, and eighteen patients showed 2 to 5 tumors. The size of leiomyoma varied from 1 to 15.5 cm, with a mean of 4.3 cm. We performed Western Blot analysis on one sample of normal myometrium and one sample of each leiomyoma in any patient, with a total of 20 samples of normal myometrium and 73 samples of leiomyoma. Western Blot analysis using anti-human SELENBP1 recognized a single band at 56 kDa in all samples examined. The intensity of bands in leiomyoma was about 4-fold lower than that in normal myometrium (examples in two patients are shown in Figure 1A ), and the difference was statistically significant (Figure 1B ). Although there was a trend for a decreased expression of SELENBP1 with increasing tumor size (Figure 2 ), no statistical difference was seen among four arbitrary size groups: ≤2 cm, 2.1-5 cm, 5.1-8 cm, and ≥8 cm. The levels of SELENBP1 did not differ between patients younger than 45 years and older patients in either normal myometrium or leiomyoma (Figure 3 ). However, the level of SELENBP1 was significantly lower in leiomyoma than in normal myometrium either in patients younger than 45 years or in older patients. SELENBP1 expression did not differ among patients with proliferative, secretory, and atrophic endometrium either in normal myometrium or leiomyoma (Figure 4 ), but leiomyoma showed a significantly lower level of SELENBP1 than normal myometrium either in patients with proliferative endometrium, in patients with secretory endometrium, or in patients with atrophic endometrium. 
Immunohistochemistry was performed on one section of normal myometrium and one or more sections of leiomyoma in each case, with a total of 20 sections of normal myometrium and 42 sections of leiomyoma. Normal myometrium showed diffuse and strong staining (Figure 5 ); on a scale of 0-3, the staining scores ranged from 2 to 3 (mean = 2.3). The staining scores in leiomyoma were predominantly 0 and 1 with only an occasional 2 (mean = 0.8). The difference in immunostaining scores between normal myometrium and leiomyoma was statistically significant (p = 0.00357); this result confirmed the finding by Western Blot analysis. No difference in staining was seen between patients younger than 45 years and older patients in either normal myometrium (p = 0.3285) or leiomyoma (p = 0.4596). The staining did not differ among patients with proliferative, secretory, and atrophic endometrium in either normal myometrium (p = 0.2806) or leiomyoma (p = 0.4736). Because the size of leiomyoma was not specified for most sections used in immunohistochemistry, we were not able to correlate tumor size with immunostaining. Among the 42 available sections of leiomyomas, the vascular count varied from 24 to 96 per 5 low-power fields with a mean of 38 per 5 low-power fields. Statistical analysis did not reveal a significant correlation between the vascular counts and the SELENBP1 immunostaining scores (r = -0.0226, p = 0.4668). The proliferation index was similar among all leiomyomas and ranged from 0 to 6% (mean 1.2%). There was no significant correlation between the proliferation index and the SELENBP1 immunostaining score (r = -0.1562, p = 0.2668).
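Pearson coefficients like the r values quoted above can be computed directly from the definition. This is a minimal sketch; the vascular counts and immunostain scores below are invented for illustration and are not the study data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vascular counts and SELENBP1 immunostain scores
counts = [24, 30, 45, 60, 96]
scores = [1, 0, 2, 1, 0]
r = pearson_r(counts, scores)  # weak correlation on these made-up data

# Sanity checks on perfectly (anti)correlated inputs
print(round(pearson_r([1, 2, 3], [2, 4, 6]), 2))  # 1.0
print(round(pearson_r([1, 2, 3], [3, 2, 1]), 2))  # -1.0
```

A significance test on r (the p-values in the text) would additionally use the t distribution with n - 2 degrees of freedom, typically via a statistics package.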
Discussion Our study showed a significant decrease of SELENBP1 in uterine leiomyoma compared with normal myometrium. To our knowledge, this is the first study to examine SELENBP1 expression in normal myometrium and uterine leiomyoma. The presence of SELENBP1 in myometrium indicates a normal biologic function of SELENBP1 in this tissue. SELENBP1 has been implicated in toxification/detoxification processes [ 27 ], cell growth regulation [ 28 ], and intra-Golgi protein transport [ 29 ]. The decrease of SELENBP1 expression in uterine leiomyoma suggests that SELENBP1 is related to the development of this tumor. Although loss of SELENBP1 expression may be secondary to tumor development, the ability of SELENBP1 to inhibit cell proliferation and induce apoptosis in colon cancer [ 30 ] suggests that the protein may also be involved in tumorigenesis of uterine leiomyoma. It is likely that additional genetic and molecular events contribute to tumorigenesis, but reduction of SELENBP1 expression may be a key step in the transition from normal myometrium to leiomyoma. Indeed, decreased expression of SELENBP1 in even the smallest leiomyoma examined (i.e., 1 cm) suggests that alteration in SELENBP1 expression may be an early event in the development of the tumor. The mechanisms by which SELENBP1 contributes to tumorigenesis in uterine leiomyoma are not known. The development and growth of uterine leiomyoma have been attributed in part to estrogen stimulation; leiomyoma enlarges during pregnancy when the estrogen level is high and in women taking tamoxifen or receiving estrogen-replacement therapy, but shrinks in patients with a low level of estrogen after treatment with gonadotropin-releasing hormone. 
In breast cancer cells, selenium has been shown to disrupt the estrogen signaling pathway by decreasing the expression of estrogen receptors, decreasing the binding of estradiol to the estrogen receptor, inhibiting the trans-activating activity of the estrogen receptor, and reducing the binding of the estrogen receptor to the estrogen responsive element site [ 31 ]. In myometrium showing normal expression of SELENBP1, selenium may be able to disrupt the estrogen signaling pathway. When the expression of SELENBP1 is reduced, however, leiomyoma may develop because selenium without adequate SELENBP1 may be incapable of inhibiting the stimulatory actions of estrogens. Sex hormones change in abundance through the menstrual cycle. In women of reproductive age, estrogen dominates in the proliferative phase, and progesterone rises in the secretory phase. In postmenopausal women, estrogen and progesterone levels are low. Our study demonstrated similar levels of SELENBP1 among patients with proliferative, secretory, and atrophic endometrium in either normal myometrium or leiomyoma, indicating that SELENBP1 is not regulated by sex hormones. Also, our study did not find a difference in SELENBP1 level between patients younger than 45 years and older patients. In general, younger women have higher levels of estrogen and progesterone than older women. Thus the same level of SELENBP1 regardless of age again suggests that the SELENBP1 level is not under the influence of sex hormones. A negative correlation between SELENBP1 expression and Ki67 positivity has been reported in lung adenocarcinomas [ 24 ], but our study did not reveal a significant relationship between SELENBP1 expression and proliferation index in uterine leiomyomas. Our result may be explained by the similarly low proliferation index among all uterine leiomyomas examined. 
While no previous report has addressed the relationship between vascular count and SELENBP1 expression in any tumor, we showed that vascular count was not correlated with SELENBP1 expression in uterine leiomyomas. Medical treatment of uterine leiomyoma involves the use of gonadotropin-releasing hormone that inhibits steroidogenesis, induces chemical menopause, and therefore can reduce tumor volume with an improvement in clinical symptoms. However, the effects are short-lived, and leiomyoma tends to grow back rapidly after cessation of therapy. Lack of available effective medical therapy has made surgery the mainstay of treatment. The complications of surgery could be severe, particularly for young women who wish to preserve their fertility. Therefore, searching for novel target-based preventive and therapeutic agents has become imperative. Selenium has been implicated as an important chemopreventive and chemotherapeutic agent for several epithelial tumors, including cancers of the prostate and colon [ 32 - 34 ]. Our study showing a decreased expression of SELENBP1 in uterine leiomyoma not only indicates a role of SELENBP1 in tumorigenesis but also suggests the potential utility of selenium in prevention and treatment of uterine leiomyoma. Since the effects of selenium are mediated by SELENBP1, loss of SELENBP1 expression in leiomyoma may have a negative impact on the ability of selenium to control tumor cell growth. It has been reported, however, that treating ovarian tumor cells with a selenium compound increases SELENBP1 expression [ 35 ]. Thus the increased level of SELENBP1 after selenium treatment may facilitate the effect of selenium. In summary, our study showed a decreased level of SELENBP1 in uterine leiomyoma compared to normal myometrium and suggested a role of SELENBP1 in tumorigenesis of leiomyoma. 
Our findings may provide a basis for future studies concerning the molecular mechanisms of SELENBP1 in tumorigenesis as well as the potential use of selenium as a preventive and therapeutic agent in uterine leiomyoma.
Background Selenium has been shown to inhibit cancer development and growth through the mediation of selenium-binding proteins. Decreased expression of selenium-binding protein 1 has been reported in cancers of the prostate, stomach, colon, and lungs. No information, however, is available concerning the roles of selenium-binding protein 1 in uterine leiomyoma. Methods Using Western Blot analysis and immunohistochemistry, we examined the expression of selenium-binding protein 1 in uterine leiomyoma and normal myometrium in 20 patients who had undergone hysterectomy for uterine leiomyoma. Results and Discussion The patient age ranged from 34 to 58 years with a mean of 44.3 years. Proliferative endometrium was seen in 8 patients, secretory endometrium in 7 patients, and atrophic endometrium in 5 patients. Two patients showed solitary leiomyoma, and eighteen patients revealed 2 to 5 tumors. Tumor size ranged from 1 to 15.5 cm with a mean of 4.3 cm. Both Western Blot analysis and immunohistochemistry showed a significantly lower level of selenium-binding protein 1 in leiomyoma than in normal myometrium. Larger tumors had a tendency to show a lower level of selenium-binding protein 1 than smaller ones, but the difference did not reach statistical significance. The expression of selenium-binding protein 1 was the same among patients with proliferative, secretory, and atrophic endometrium in either leiomyoma or normal myometrium. Also, we did not find a difference in selenium-binding protein 1 levels between patients younger than 45 years and older patients in either leiomyoma or normal myometrium. Conclusions Decreased expression of selenium-binding protein 1 in uterine leiomyoma may indicate a role of the protein in tumorigenesis. Our findings may provide a basis for future studies concerning the molecular mechanisms of selenium-binding protein 1 in tumorigenesis as well as the possible use of selenium in prevention and treatment of uterine leiomyoma.
Competing interests The authors declare that they have no competing interests. Authors' contributions PZ carried out the Western Blot and immunohistochemistry studies, performed the statistical analysis, and participated in manuscript writing. CZ designed the study and wrote the manuscript. XW collected study samples and reviewed manuscript. FL, CJS, MRQ, and WDL participated in study design and reviewed manuscript. All authors read and approved the final manuscript.
CC BY
Diagn Pathol. 2010 Dec 9; 5:80
PMC3014889
21122134
Background Although some changes seem to be taking place in the incidence trends of specific illegal drugs, heroin use is still an important health concern in Europe. In most countries heroin remains the principal drug involved in treatment episodes[ 1 ] and heroin users are at a greater risk of dying from different causes, particularly overdoses but also infectious diseases related to injection[ 2 - 4 ]. Health Related Quality of Life (HRQL) has progressively been applied in the evaluation of the health status of patients, including substance users[ 5 , 6 ]. Poor HRQL has been reported among heroin users starting treatment, comparable to that of patients with other chronic diseases[ 7 - 9 ]. As a patient-centred outcome variable, HRQL has also been used to assess treatment effectiveness, and randomised trials have provided evidence of HRQL improvement with opioid substitution therapies [ 10 - 13 ]. The variables that have been related to poorer HRQL in opiate users vary across studies. The most consistent finding is poorer HRQL associated with poly-drug use; HRQL has also been related to socio-demographic variables such as age, educational level or employment status, and the presence of chronic medical conditions, including HIV infection[ 8 , 14 ]. Although gender has been associated with differences in HRQL in many population studies, with women reporting poorer HRQL[ 15 , 16 ], no clear differences have been reported in studies on opiate users [ 8 , 17 , 18 ]. The influence of psychiatric diagnoses other than substance use disorders on HRQL has been explored, with inconsistent results, though mainly showing impaired HRQL in subjects with dual diagnosis[ 18 - 20 ]. It is difficult to compare the various studies as they have explored different variables and used different HRQL measures. The generic HRQL measures most frequently used have been the SF-36 and the Nottingham Health Profile (NHP). 
The German adaptation of the Lancashire Quality of Life Profile, a questionnaire designed specifically for the mental health field, has also been used in studies with drug users [ 13 , 21 ]. Few HRQL instruments specific to the drug dependence field are available[ 22 ]. Episodes of drug overdose are frequent among heroin injectors[ 23 , 24 ] and it has been suggested that poor health may be an important overdose risk factor[ 25 , 26 ], yet we are not aware of any previous study exploring the possible relation between perceived HRQL and overdose experiences, which could be of interest for specific prevention. It is possible that HRQL is already affected in early phases of opiate use; however, as far as we know, there is little information on HRQL in young opiate users early in their drug career. Most studies have been done after entry to treatment. The objective of the present study was to ascertain which factors were related to HRQL among young opiate users, including previous drug treatment and overdose episodes, taking gender into account.
Methods The ITINERE project cohort of current regular heroin users aged between 18 and 30 years was assembled in outdoor settings of three Spanish cities (Barcelona, Madrid, Sevilla). Details of the methodology have been described previously [ 24 , 27 ]. To be included, subjects had to be residents in the above-mentioned cities, and to have used heroin within the 90 days prior to the interview and on at least 12 days over the 12 months prior to the interview; they also had to be willing to participate in and facilitate the follow-up. Exclusion criteria were language barriers and difficulties in follow-up. For recruitment, targeted sampling and nomination techniques, with different starting points mainly in outdoor locations, were used[ 28 ]. After a brief selection questionnaire to assess fulfilment of the inclusion criteria, candidates were informed about the objectives and procedures of the study, including incentives for participation (18 Euro per interview completed), and signed an informed consent. Field work was done between April 2001 and December 2003. The inception cohort baseline questionnaire was administered through a laptop-assisted interview in socio-sanitary premises and included, among other variables, socio-demographic data, drug use patterns, health problems data, severity of heroin and cocaine dependence measured through the Spanish version of the Severity of Dependence Scale (SDS)[ 29 , 30 ], and a generic health related quality of life questionnaire, the Nottingham Health Profile (NHP)[ 31 ]. Interviewers were trained social science professionals (e.g., anthropologists and sociologists). A non-fatal opiate overdose was defined as an episode occurring after heroin or opiate use characterized by extreme difficulty in breathing, loss of consciousness and problems waking up or recovering consciousness, and possibly bluish skin or lips. 
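The inclusion criteria above amount to a simple screening rule. The following sketch is ours, not part of the ITINERE protocol; the function name and signature are hypothetical:

```python
# Study cities named in the text
ITINERE_CITIES = {"Barcelona", "Madrid", "Sevilla"}

def eligible(city, age, days_since_last_heroin_use, use_days_last_12_months):
    """Screen a candidate against the cohort inclusion criteria described
    in the text: residence in a study city, age 18-30, heroin use within
    the last 90 days, and use on at least 12 days in the last 12 months.
    (Willingness to be followed up and the exclusion criteria would be
    assessed separately.)"""
    return (
        city in ITINERE_CITIES
        and 18 <= age <= 30
        and days_since_last_heroin_use <= 90
        and use_days_last_12_months >= 12
    )

print(eligible("Madrid", 25, 30, 20))    # True
print(eligible("Valencia", 25, 30, 20))  # False: not a study city
```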
Other variables studied were having been confined to bed due to discomfort, disease or injury on any day during the last 12 months and having been a hospital inpatient during the same period. The use of two or more illegal substances during the last 12 months with a frequency of once weekly or higher was considered a proxy of poly-drug use. Alcohol consumption was measured as intake in grams/day and categorized into four risk categories (no use, moderate, at-risk and heavy), with different cut-points by gender (male: 40 and 60 g/day; female: 20 and 40 g/day). Serological tests (HIV, HBV, HCV) were done through a dried blood spot test. The ITINERE project was approved by the ethical committee of the Instituto de Salud Carlos III. The SDS is a short, easily administered scale which can be used to measure the degree of dependence experienced by users of different types of drugs. The SDS contains five items, all of which are explicitly concerned with impaired control over drug taking and with worries and anxieties about drug use. It satisfies a number of criteria indicating its suitability as a measure of dependence [ 29 ]. It was applied to assess dependence severity (range 0, none, to 15, most severe) for heroin (SDS-H) and for cocaine (SDS-C). The Nottingham Health Profile (NHP) is a multidimensional health status questionnaire that has previously been used in drug users [ 10 , 11 ] and found to be easy to administer in this population. It contains 38 items divided into six dimensions of health (energy, pain, sleep, social isolation, emotional reactions, physical mobility), each scored from 0 (best) to 100 (worst health state). A global NHP score was calculated as the mean of the six dimension scores. To compare the study results to the general population we used NHP Spanish norms for ages 41 to 49.
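The alcohol risk categorisation and the global NHP score described above reduce to simple arithmetic. A minimal Python sketch follows; the function names and the handling of intakes falling exactly on a cut-point are our own assumptions, while the gender-specific cut-points and the mean-of-six-dimensions rule come from the text:

```python
def alcohol_risk_category(grams_per_day, female):
    """Classify daily alcohol intake into the four risk categories
    described in the text, using gender-specific cut-points
    (male: 40 and 60 g/day; female: 20 and 40 g/day).
    Boundary handling (strict '<') is an illustrative assumption."""
    if grams_per_day == 0:
        return "no use"
    lower, upper = (20, 40) if female else (40, 60)
    if grams_per_day < lower:
        return "moderate"
    if grams_per_day < upper:
        return "at-risk"
    return "heavy"


def nhp_global(dimension_scores):
    """Global NHP score: the mean of the six dimension scores
    (energy, pain, sleep, social isolation, emotional reactions,
    physical mobility), each ranging from 0 (best) to 100 (worst)."""
    assert len(dimension_scores) == 6
    return sum(dimension_scores) / 6.0
```

For example, 30 g/day counts as moderate use in a man but at-risk use in a woman, which is why the gendered cut-points matter for the analysis.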
No normative data are available for younger ages, but since HRQL studies show that generic HRQL scores are better in younger age groups [ 31 ], any differences found would have been even larger had appropriate age-specific reference values been used. Differences by gender were tested using the chi-square test or t-test. To compare possible differences in NHP scores, non-parametric tests (Mann-Whitney U or Kruskal-Wallis test, with correction for ties if necessary) were used. Because a large sample was analysed, the NHP global score was treated as normally distributed for multivariate analysis [ 32 ] and multiple linear regression was applied. All variables significant or marginally significant (p < 0.10) in bivariate analysis were included in three models, one for the total sample and one per gender, and the selection of final variables was done with a backward procedure. All analyses were done with SPSS 12.0.
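As an illustration of the non-parametric comparison used here, the Mann-Whitney U statistic can be computed from midranks, which also accommodates ties. This is a minimal sketch, not the SPSS implementation; in practice the tie correction also enters the variance used for p-values:

```python
def midranks(values):
    """Assign 1-based average ranks, sharing the rank across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def mann_whitney_u(x, y):
    """Smaller of U1/U2 for two independent samples x and y."""
    n1, n2 = len(x), len(y)
    ranks = midranks(list(x) + list(y))
    r1 = sum(ranks[:n1])                 # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)
```

With completely separated samples the statistic reaches its minimum of 0, reflecting that every observation in one group outranks every observation in the other.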
Results A total of 991 young heroin users were recruited: 722 were male (73%) and 269 female. Men and women differed in all socio-demographic variables explored, but also in some general health variables (being confined to bed at least one day in the last 12 months and being HIV positive were more frequent in women) and drug use variables (a higher proportion of heavy alcohol use, and a shorter length of heroin and cocaine use, among women) (table 1 ). No gender differences were observed in the proportion of those who had ever had an overdose or had experienced an opiate overdose in the last 12 months. However, the proportion of those who had recently (12 months) experienced a non-fatal overdose (n = 80) was higher in Barcelona, among the more educated, squatters or homeless, the unemployed, those who had been in hospital in the last 12 months, were anti-HCV positive, had injected in the last 12 months, or had not been in methadone treatment at any time in the last 12 months. A valid NHP questionnaire was obtained for 963 subjects, 97% of the sample. The mean global NHP score was 36.0 (sd: 23.8). Women perceived their health as worse than men in all dimensions (global score: 41.2 (23.8) vs 34.1 (23.6)) (Figure 1 ), though the differences were not statistically significant for sleep and social isolation. In all dimensions, NHP scores for both genders were higher than those of the general population (NHP global score in the general adult population aged 41-49 years: 11.0 (sd: 13.6)). The NHP global score was higher at older ages, with a significant positive correlation with age in both genders. The NHP global score showed statistically significant differences in both genders according to current employment (better), living arrangements (better among squatters) and prison experience (worse). It was also worse with longer duration of heroin use and with higher SDS-H and SDS-C scores.
Among males it was poorer at lower educational levels and among those who had been confined to bed or had visited a psychiatrist during the previous 12 months, were HIV positive, had hepatitis B core antibodies, or had ever had an overdose. Among women it was poorer with increased length of cocaine use (table 2 ). The NHP global score showed statistically significant differences for poly-drug use and hospital inpatient admission in the last 12 months (worse in affirmative categories) only when considering both genders simultaneously. Having had an opiate overdose in the last 12 months, though not significant in bivariate analysis, was included in the multivariate analysis instead of lifetime overdose, which was statistically significant in males but too remote in time from the HRQL assessment. In males, the final multiple linear regression model, adjusted for age, showed that the NHP global score was associated with socio-demographic variables (level of education, living arrangements, current employment) and was impaired with some medical variables (ever confined to bed in the previous 12 months, HIV positive) and drug-use-related variables: higher scores on severity of heroin and cocaine dependence (SDS-H and SDS-C) and having experienced an opiate overdose in the last 12 months. It was worse in men who had visited a psychiatrist in the previous 12 months and better in those who had been on methadone treatment at any time in the previous 12 months (Table 3 ). Variables included in the regression explained 22.7% of the NHP global score variance. The severity of heroin dependence, as a continuous variable, showed the highest standardized beta coefficient (0.26). An increase of one point in the SDS-H score was associated with an increase of 1.8 points in the NHP global score, while having had an overdose during the previous 12 months increased it by 7 points.
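To make the size of these effects concrete, the two male-model coefficients quoted above can be combined into a partial linear predictor. This is an illustrative sketch: only the 1.8-point SDS-H slope and the 7-point overdose effect come from the article, and all other covariates are assumed to be held fixed (the function name is ours):

```python
def male_model_nhp_shift(delta_sds_h, recent_overdose):
    """Change in predicted global NHP score (higher = worse perceived
    health) for a given change in SDS-H and recent-overdose status,
    holding all other covariates in the male model constant.
    Coefficients 1.8 and 7.0 are the two effects quoted in the text."""
    return 1.8 * delta_sds_h + (7.0 if recent_overdose else 0.0)
```

So a man whose SDS-H rises by two points and who has had a recent overdose would be predicted, other things equal, to report a global NHP score about 10.6 points worse.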
For females, only drug-use-related variables (daily alcohol intake, length of cocaine use, SDS-H and SDS-C) were independently related to the global NHP score, also explaining 22.7% of the NHP global score variance. An increase of one point in SDS-H was associated with an increase of 2.1 points in the NHP global score (Table 3 ). When analysing the overall sample, all variables significant for males were included in the model plus daily alcohol intake, significant for females; however, the regression included an interaction term between gender and HIV status, showing that women had a worse NHP score that was not modified by their HIV status, whereas among men the NHP score was impaired when HIV positive (Table 3 ).
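The gender × HIV interaction reported above corresponds to a product term in the regression design matrix. A minimal sketch of that coding (the function name is ours): with this parameterisation the HIV effect for men is the main-effect coefficient alone, while for women it is the main effect plus the interaction coefficient, which is how a single model can show HIV impairing the score in men only.

```python
def gender_hiv_terms(female, hiv_positive):
    """Design-matrix columns for a gender x HIV interaction:
    [female, hiv, female * hiv] as 0/1 indicators."""
    f = 1 if female else 0
    h = 1 if hiv_positive else 0
    return [f, h, f * h]
```

For an HIV-positive man only the HIV column is 1, so his fitted score moves by the HIV main effect; for an HIV-positive woman both the HIV and product columns are 1, allowing the interaction coefficient to cancel the main effect.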
Discussion HRQL was found to be impaired in young heroin users recruited outside the healthcare context, and the severities of heroin and cocaine dependence were the variables that accounted for most of its explained variability in both genders. Women reported worse HRQL but, contrary to males, having had an opiate overdose, contact with a psychiatrist, or having been on methadone treatment during the preceding 12 months were not found to be associated with it. A large sample was assembled that allowed us to study a wide set of variables and to explore characteristics among women separately. The plan was to include young users in order to study the course of heroin use, recruiting users in the early phases of their drug career; in fact, they were younger than heroin users requesting their first treatment in Spain (mean age in 2002: 31.8 years) [ 33 ]. However, the final sample included young heroin users already heavily involved in heroin use. As elsewhere, it is difficult to ascertain how representative the sample is of the population of young heroin users in the three cities where the study was conducted. Even though strategies to include users from different surroundings in the cities were implemented, the final sample was somewhat biased towards heavy use. Another limitation of the present study could be related to the assumption of normality of the NHP global score. However, according to Lumley et al [ 32 ], the large sample size minimizes this problem. Furthermore, only 2.5% of participants presented a score of 0, so any floor effect can be considered negligible. Also, when interpreting results it is necessary to remember that the cross-sectional nature of the study precludes making causal inferences for most of the variables. The variables that explained most of the global NHP score variability were the same in both genders: the SDS-H and SDS-C accounted for 55.9% of the explained variance in women and for 52.9% in the model for men.
These findings are in accordance with results observed in an equivalent sample of young cocaine users studied with the same instruments [ 34 ], and in contrast with some previous results where HRQL was not clearly related to some determinants of dependence, such as amount and frequency of drug use [ 7 ]. Measuring severity of dependence directly with a validated instrument probably helped us to detect this relationship. The sample also included considerable heterogeneity of drug careers, which can facilitate finding a significant result. In fact, 7% of the subjects had an SDS-H score of two or less and for 50% it was higher than 8; for SDS-C the corresponding figures were 35.6% and 24.4%. Women showed worse HRQL, which is in accordance with studies in many different populations independently of the instrument used. In previous opiate-user groups, gender differences in generic HRQL did not achieve statistical significance [ 8 , 18 ], or did so only for some aspects of the SF-36 [ 7 ]. The sample size of the present study probably helped to reveal this difference. Furthermore, the large number of women included allowed a stratified analysis to be performed and a multivariate model to be constructed exclusively for them, in which the set of statistically significant variables differs from that of men. Besides SDS-H and SDS-C, only two other drug-related variables were retained in the women's model: daily alcohol intake and length of cocaine use. When doing the analysis with the total sample, an interaction between gender and HIV infection was found, indicating that positive HIV serology only had an impact on the HRQL of men. Some studies have found a slower progression to AIDS among HIV positive women, and Jarrin et al. state that "in settings with small gaps in gender inequality and universal access to care, HIV-infected women fare better than their male counterparts in the era of HAART" [ 35 ].
Contrary to previous studies [ 14 , 34 ], poly-drug use was not confirmed as an independent factor for HRQL, not even when the number of illegal substances used weekly or more often was considered as a continuous variable. Even though our variable was a proxy of DSM-IV poly-drug use, and thus not directly comparable with other studies, it is worth noting that it was not found to be related in a model in which the severity of cocaine dependence was an important independent HRQL predictor, thus partly accounting for another substance used, and where, for the total sample and for women, daily alcohol intake was an independent factor associated with impaired HRQL. For males, recent overdose, another factor related to poly-drug use, was also included in the model [ 36 ]. Poor health has been suggested, among other factors, as predisposing to heroin overdose [ 25 ]. In the present study, subjects, especially males, who had suffered an opiate overdose in the previous 12 months had impaired HRQL. But, as this is a cross-sectional study, it is not possible to know the direction of this association. Some authors consider specific systemic diseases such as HIV, liver and lung disease as predisposing factors for overdose [ 26 ]. Those systemic diseases would by themselves affect HRQL, so it would be difficult to unravel the precise causal path in the association between opiate overdose and HRQL. However, in the present study HIV and overdose were independently associated with HRQL. As some studies have also shown that, after an overdose, drug users have subsequent episodes of impaired health [ 37 ], the opposite direction of the association between poor HRQL and overdose must also be considered and its directionality elucidated in further studies. Previous findings reported a higher frequency of overdose episodes among subjects with longer heroin use and higher severity of dependence [ 23 ].
The present study provides evidence that both overdose and severity of drug use are independently associated with poor perceived health. The study population was not gathered from treatment facilities, and although a large proportion of subjects had already contacted treatment services, their global NHP score was lower (better) than that of subjects starting treatment [ 8 ]. Nevertheless, within the study there was a gradient: subjects who had received drug treatment reported worse HRQL than those who had not. Interestingly, after adjusting for all other relevant variables, subjects who had received methadone treatment for their drug use in the last 12 months presented better HRQL. This is a remarkable finding: although more impaired subjects would be more prone to seek treatment [ 38 ], other variables explained the impaired HRQL to the point that having been in methadone treatment emerged as beneficial. This is consistent with the already ample evidence of methadone treatment effectiveness [ 39 - 41 ]. Other studies have shown the value of treatment, and a statistically significant improvement in HRQL has been demonstrated after only one month of methadone maintenance [ 10 ]. We were not able to directly assess the influence of psychiatric comorbidity on HRQL, as it was not included among the variables studied at baseline. However, having received psychiatric treatment, which according to a study of a subsample of these subjects [ 42 ] was associated with psychiatric comorbidity, was one of the variables independently associated with the global NHP score in males. This finding appears to lend further support to the relationship found in previous studies analysing psychiatric comorbidity and HRQL [ 19 , 20 ]. One socio-demographic factor related to HRQL, both in previous studies and in this group of young heroin users, was employment status: both males and females who worked exhibited better HRQL.
However, in a cross-sectional study it is hard to say whether employment status is a consequence or a cause of impaired health. The other socio-demographic factor detected, educational level, was significant only for males and the overall sample, with better-educated subjects presenting better HRQL. This factor reflects inequalities in health and shows up once more in this population of young heroin users. Low educational level, one of the indicators used to assess inequalities in health, has been associated with increased mortality in different studies, including intravenous drug user groups [ 43 , 44 ]. In the model for women alone it was not significantly related to HRQL, probably because the distribution of this variable was more homogeneous than in men (e.g., a lower proportion of women who had not completed primary education) and perhaps because of the smaller sample size.
Conclusions These heroin users were at considerable risk of impaired health even at their young age. HRQL was strongly influenced by the severity of dependence and improved with methadone treatment; thus, specific interventions such as increasing access to effective drug treatment could improve the HRQL of young heroin users.
Background Health Related Quality of Life (HRQL) of opiate users has mostly been studied in treatment settings, where assistance for drug use was sought. In this study we ascertain factors related to the HRQL of young opiate users recruited outside treatment facilities, considering both genders separately. Methods Current opiate users (18-30 y) were recruited in outdoor settings in three Spanish cities (Barcelona, Madrid, Sevilla). Standardised laptop interviews included socio-demographic data, drug use patterns, health-related issues, the Severity of Dependence Scale (SDS) and the Nottingham Health Profile (NHP). Results A total of 991 subjects (73% males), mean age 25.7 years, were interviewed. The mean global NHP score differed by gender (women: 41.2 (sd: 23.8); men: 34.1 (sd: 23.6); p < 0.05). Multivariate analyses were implemented separately by gender; the variables independently related to the global NHP score in both males and females were the heroin and cocaine SDS scores. For women, only other drug-related variables (alcohol intake and length of cocaine use) were independently associated with their HRQL. Males who were HIV positive, had suffered an opiate overdose or had received psychiatric care in the last 12 months perceived their health as poorer, while those who had been in methadone treatment in the last 12 months perceived it as better. The model with both genders included all the factors for males, plus quantity of alcohol and an interaction between gender and HIV status. Conclusions Heroin users were found to be at considerable risk of impaired HRQL, even at these young ages. Severity of dependence was the factor most strongly related to it.
Competing interests The authors declare that they have no competing interests. Authors' contributions ADS participated in the design of the study, performed the statistical analysis and drafted the manuscript. MTB conceived of the study, participated in its design and coordination and helped to perform the statistical analysis. GB and MJB conceived of the study and helped to draft the manuscript. FGS participated in the design of the study and helped to draft the manuscript. LF conceived of the study, participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Acknowledgements Work supported by FIPSE 3035/99, FIS 00/1017, CIRIT 2001SGR00405 and FIS C03/09 (RCESP) and G03/05 (RTA). The authors thank Dave Macfarlane for English revision. ITINERE Investigators include: Rosario Ballesta Gomez, Dani Lacasa, David Fernández, Sofia Ruiz Curado, Fermin Fernández Calderón, Gemma Molist, Teresa Silva, Luís Royuela, Fernando Vallejo, Montserrat Neira, Luís Sordo, Albert Sanchez-Niubó and José Pulido.
Health Qual Life Outcomes. 2010 Dec 1; 8:145
PMC3014890
21126374
Background Over past decades, clinical practice and clinical research have made a concerted effort to move beyond the use of clinical indicators alone and embrace patient-focused care [ 1 ]. Along this line, the evaluation of health-related quality of life (HRQoL) has great benefit in revealing how each patient views their own health state. Subjective HRQoL evaluation has particular importance amongst patient groups suffering from chronic, degenerative or terminal conditions, where the aim of health interventions is to improve quality of life rather than to achieve a curative effect [ 2 , 3 ]. It is not surprising, then, that generic HRQoL evaluation instruments, such as the Euroqol-5D (EQ-5D), have become increasingly popular as primary outcome measures in clinical trials and as primary instruments for economic evaluation through cost-utility analysis [ 4 ]. Concerns have been raised about the validity of making comparisons between HRQoL evaluations taken at different time points, as a change in one's understanding or perception of the HRQoL construct may occur between assessments [ 5 - 8 ]. If a respondent were to change their understanding of which components are included in the construct of HRQoL (reconceptualisation), the relative importance of certain components of HRQoL in relation to the others (reprioritisation), or their internal perception of the relative value of certain health states in relation to others (recalibration), then each evaluation may not necessarily be measuring the same concept, with the same value system, on the same scale, despite consistent use of the same patient-reported outcome [ 5 - 7 ]. This phenomenon has been given the term 'response shift.' Response shift is generally considered to be part of naturally occurring adaptive processes and may help individuals adjust to living with poor health states; it may thus be a desirable coping mechanism or even the goal of some treatments [ 6 , 7 , 9 - 11 ].
However, it also threatens to invalidate comparisons of pre- and post-intervention assessments, or assessments taken over multiple time points in the trajectory of a chronic disease, despite use of a standardised instrument [ 6 , 7 , 9 , 11 - 13 ]. For this reason a number of methods to detect response shift between assessments have been developed, such as the 'then-test' (a retrospective report of a previous health state from the respondent's current perspective) [ 5 , 8 , 11 , 14 , 15 ] and 'structural equation modelling' (mathematical modelling to detect changes in factor solutions and variance-covariance matrices over time) [ 12 , 15 , 16 ]. However, these methods can often be time consuming, complex or burdensome for patients [ 5 , 7 , 11 , 15 ]. Methods to detect response shift have been discussed in detail previously [ 5 , 7 , 11 , 15 , 17 ]. It may not be possible (or desirable) to eliminate adaptive processes that contribute to response shift [ 5 , 7 , 11 ]. However, a potentially preventable (and undesirable) response shift artefact may arise from subjective HRQoL appraisal processes, when a respondent does not give consistent consideration to the questions used to evaluate their HRQoL at each assessment point. Subjective scales that depend on brief anchor descriptions to give meaning to the scale may be particularly prone to inconsistent consideration, as a change in consideration of one or both anchors may lead to a substantial difference in response [ 11 ]. The EQ-VAS is the health state rating scale from the popular EQ-5D generic health-related quality of life instrument. The EQ-VAS is a 100-point visual analogue rating scale with a bottom anchor of 'worst imaginable health' and a top anchor of 'best imaginable health' [ 18 ]. The EQ-VAS has favourable empirical evidence supporting its sensitivity to change, validity and reliability [ 19 - 27 ].
However, an investigation of EQ-VAS use in rating multiple hypothetical health states found that the ratings given to common moderate health states were affected by the context in which they were presented [ 28 ]. It was noted that moderate health states were assigned lower values when presented in the context of milder (better) health states, and higher values when presented in the context of more severe (worse) health states [ 28 ]. This is not an isolated finding for rating scales [ 29 ]. There is also evidence from other fields that framing a question to focus on positive or negative attributes can yield different responses despite no difference in logical meaning [ 30 - 33 ]. Empirical investigations of the framing effect generally suggest respondents prefer an option with a positive rather than a negative valence [ 31 - 33 ]. A simple example is respondents reporting ground mince as 'tastier' when labelled as 75% lean rather than 25% fat [ 34 ]. Framing effects have been applied in a wide range of fields including politics, consumer behaviour and health [ 30 - 34 ]. Respondents completing health state rating scales (like the EQ-VAS) are generally not required to rate multiple hypothetical health states, and intentional framing techniques are not routinely employed. However, a similar unintentional reference-type bias may occur due to social comparisons or other life events [ 11 ]. Consider a 65-year-old woman who is receiving treatment in hospital after suffering a stroke. She may rate her health at this time with reference to surrounding hospital patients who are very unwell. This patient may report her health as 60 out of 100 on the EQ-VAS immediately prior to discharge from an inpatient rehabilitation facility, after considering how much better off she is than other patients in very poor health states (near the bottom of the scale).
However, immediately after discharge into the care of family, this patient may report her health as 45 out of 100 on the EQ-VAS after considering how much worse her health is in comparison to healthy peers in the community (who may be near the top of the scale). An independent observer may infer that a decline in health state of 15 points has occurred, despite potentially no reduction in the patient's actual health or HRQoL. Inconsistent consideration of subjective patient-reported outcomes may cause a patient to paradoxically report a change when no change has occurred, or a change disproportionate to that which has actually taken place. An inaccurate representation of change due to this type of artefact may have serious implications. In clinical practice it may complicate attempts to evaluate whether a health intervention or disease has resulted in meaningful change in a person's HRQoL. Of no less importance is the effect that an inaccurate representation of change would have during a randomised trial if all groups were not equally exposed to stimuli prompting a response shift [ 11 ]. For example, an intervention group may be required to attend a hospital, clinic or group intervention session, resulting in exposure to individuals experiencing extremely poor health states, while a control or comparator group may not have this same exposure [ 11 ]. Despite the previous work by Krabbe and colleagues on multi-item visual analogue scale ratings [ 28 ], there is currently no empirical evidence indicating whether an acute shift in response to a health state scale such as the EQ-VAS may result from a reference-type bias when individuals are rating their own health state.
The purpose of this study is to illustrate that respondents may not give consistent consideration to the health states that give meaning to the EQ-VAS, and to investigate whether merely asking respondents to consider detailed descriptors of an extremely good health state (Description-A) and an extremely bad health state (Description-B) between assessments induces an acute shift in their own EQ-VAS rating. The sets of descriptors used as Descriptions A and B are presented in Additional file 1 . It was hypothesized that respondents frequently would not have considered what the EQ-VAS scale anchors represent during initial completion of this scale. Furthermore, it was considered likely that many participants would change their overall HRQoL report after consideration of the extreme health descriptors (Additional file 1 ). It was hypothesized that consideration of extremely poor health descriptors would cause many respondents to increase their reported HRQoL score, considering their current health state to be further from the lower end of the scale, while some would lower their score, considering their current health state to be closer to the lower end. In the same way, after considering descriptors of an extremely good health state, many would move their score lower, while some would move it higher. It was also considered possible that an order effect may occur, whereby patients' responses may depend not only on the extreme health state descriptors themselves but also on the order in which they were provided. Previous investigations of HRQoL reporting and order effects have generally found no significant order effect [ 35 - 38 ]. However, given the novel nature of this investigation in providing extreme health state descriptors between assessments, it also aimed to examine whether the order in which these descriptors were provided affected the pattern of responses.
Methods Design A two-group, randomized crossover design methodology trial was implemented (Figure 1 ). After completing baseline measurements, patients randomized to group one received Description-A first (being asked to consider the set of good health state descriptors), then Description-B (being asked to consider the set of poor health state descriptors). Patients in group two received Description-B first, then Description-A. There was no washout period between the provision of the two health state descriptor sets, as the order effect and the effect of receiving both sets of descriptors were under investigation. Participants and setting One hundred and fifty-one patients admitted to the rehabilitation unit of a tertiary hospital in Brisbane, Australia, participated. This population was selected for several reasons. Health interventions for this patient group generally focus on treatments and therapies aiming to maximise function and HRQoL, making HRQoL evaluation integral to clinical and research assessments in this type of patient population [ 3 ]. This population is also potentially at risk of changing points of reference when completing subjective patient-reported outcomes, due to social comparisons or the life events that have led them to need hospitalisation [ 11 ]. For inclusion in the study, patients were required to be able to communicate effectively in English and have basic cognitive functioning intact, as indicated by a Mini Mental State Examination (MMSE) score of >23/30 [ 39 ]. Measures The primary outcome measure was the EQ-VAS. This is a continuous measure of overall health state using a 100-point visual analogue scale where 0 represents the worst imaginable health and 100 the best imaginable health [ 18 ]. This outcome measure was used a total of three times for all participants (Figure 1 ).
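A pre-generated allocation sequence for the two crossover orders could be produced as follows. This is a hedged sketch: the article states only that a computerised random sequence was prepared in advance by a blinded investigator; the permuted-block scheme and function names here are illustrative assumptions, not the trial's actual procedure.

```python
import random


def allocation_sequence(n_participants, block_size=4, seed=None):
    """Permuted-block randomisation to the two crossover orders:
    'AB' = Description-A first (group one),
    'BA' = Description-B first (group two).
    Each block contains equal numbers of both orders, keeping the
    groups balanced as participants are enrolled."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["AB", "BA"] * (block_size // 2)
        rng.shuffle(block)  # random order within each block
        sequence.extend(block)
    return sequence[:n_participants]
```

Permuted blocks guarantee near-equal group sizes at any interim point, which matters when, as here, a fixed number of participants is recruited consecutively from a ward.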
The EQ-VAS was first completed at baseline (VAS 1) as a control for comparison purposes, then a second time (VAS 2) after each group had received its first set of descriptors (Description A or B, depending on group). The EQ-VAS was then completed a third time (VAS 3) after the crossover, once each group had received the remaining set of descriptors (Description B or A respectively). As a secondary outcome, immediately after responding to the baseline EQ-VAS (VAS 1) and before either set of descriptors was provided, participants were asked whether they had "considered what best (and worst) imaginable health may be like." This was recorded as a binary yes/no answer for each anchor. If participants had considered what a best imaginable or worst imaginable health state may be like for either EQ-VAS anchor, they were asked to describe in words what they had considered, and their description was recorded verbatim. After receiving each set of descriptors (Description-A or Description-B), patients were also asked whether the health state described was more extreme than that which they had previously considered to be the end point of the EQ-VAS (0 or 100 respectively). The dichotomous response to this question (yes/no) was also recorded as a secondary outcome measure. Baseline patient demographics and the Functional Independence Measure score [ 40 ] were also collected from the medical record for the purpose of describing the sample. Intervention (Description-A and Description-B) Description-A involved asking the participant to consider a set of descriptors for an extremely good health state (Additional file 1 ). Description-B involved asking the participant to consider a set of descriptors for an extremely poor health state (Additional file 1 ). Each set of descriptors required less than one minute to read at a comfortable pace.
The descriptors provided to the patient were a compilation of the respective best and worst descriptors for each health component used in the Assessment of Quality of Life (AQoL) instrument[ 41 ]. It is noteworthy that neither set of descriptors was intended to affect the patients' underlying health; they were health evaluation methodology interventions rather than any kind of clinical intervention. The descriptors were intended to promote more careful consideration of a range of possible HRQoL attributes by the respondent immediately prior to assigning an EQ-VAS value to their own health state. Procedure Ward staff identified potential participants who were then approached by a research assistant (RA1). RA1 explained the study and sought informed written consent. RA1 was not aware of the randomisation sequence (calculated using computerised random number generation by a blinded member of the investigative team and stored in a locked filing cabinet). Consenting participants were then allocated to a group (one or two) in order of the random sequence according to their participant number by a separate research assistant (RA2). Before receiving either set of descriptors, patients in both groups completed a baseline self-report of the EQ-5D questionnaire including the EQ-VAS (VAS 1), and the relevant secondary outcomes. Group one received the health state descriptor sets in the alternative order to group two (Figure 1 ). After being asked to consider the first set of health state descriptors (Description A or B, depending on group), participants completed the assessment measures, which included a second self-report of the EQ-VAS (VAS 2) and the secondary outcome measures. Once participants had completed these assessment measures, the remaining set of health state descriptors (Description B or A respectively) was immediately given and patients then completed a third and final self-report of the EQ-VAS (VAS 3) and the relevant secondary outcomes. 
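The concealed allocation step described above can be sketched in Python. The use of simple 1:1 randomisation and the seed value are assumptions for illustration; the trial states only that a computerised random number generator was used by a blinded team member.

```python
import random

def allocation_sequence(n_participants, seed=None):
    """Illustrative 1:1 simple randomisation for the two-group crossover:
    group 1 receives Description-A then Description-B; group 2 the reverse.
    The seed is hypothetical; the trial's actual generator is not described
    in detail."""
    rng = random.Random(seed)
    return [rng.choice([1, 2]) for _ in range(n_participants)]

# One reproducible sequence for the 151 consenting participants
sequence = allocation_sequence(151, seed=2007)
```

Fixing the seed makes the sequence reproducible, mirroring the trial's pre-generated list stored away from the recruiting research assistant.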
The assessments and health state descriptors were administered in this way, only minutes apart, to eliminate the possibility of an actual change in underlying health state. This investigation was approved by the Princess Alexandra Hospital and The University of Queensland's Human Research Ethics Committees. Power analysis When examining the main effect comparison of Description-A versus Description-B on EQ-VAS scores after each set of descriptors, this experiment had 90% power to detect a conservative between-groups difference in VAS of 3 points, assuming a standard deviation of 17.5, using a total sample size of 150 and a two-tailed alpha of 0.05. Because of the correlation of responses within patients, this sample size had >90% power to detect a similar change in VAS when examining the within-group main effect of providing both sets of descriptors between baseline (VAS 1) and the final follow-up assessment (VAS 3). Data Analysis Demographic and baseline EQ-VAS data were tabulated (Table 1 ). Raw data were checked for normality graphically and using tests for skew and kurtosis[ 42 , 43 ]. The difference between groups in baseline EQ-VAS score (VAS 1) was examined using an unpaired t-test. Three change scores for the EQ-VAS were calculated: the difference between the baseline EQ-VAS and the EQ-VAS completed after receiving the first set of descriptors (VAS 2 - VAS 1), the difference between the EQ-VAS after the first set of descriptors and the final EQ-VAS after the second set of descriptors (VAS 3 - VAS 2), and the difference between the baseline EQ-VAS and the final VAS after the second set of descriptors (VAS 3 - VAS 1). The number (and percentage) of respondents who changed their EQ-VAS by 5 points or more (in either direction) after exposure to the good and poor health state descriptors was calculated (Table 2 ). These calculations were done in order to evaluate the effect of the health state descriptors at an individual level (as opposed to group mean differences). 
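The individual-level tabulation described above can be sketched in Python; the scores below are hypothetical, not trial data.

```python
def change_summary(before, after, threshold=5):
    """Count respondents whose EQ-VAS moved by >= threshold points in either
    direction between two assessments (the individual-level analysis)."""
    pairs = list(zip(before, after))
    return {
        "increased": sum(1 for b, a in pairs if a - b >= threshold),
        "decreased": sum(1 for b, a in pairs if b - a >= threshold),
        "unchanged": sum(1 for b, a in pairs if abs(a - b) < threshold),
    }

# Hypothetical VAS 1 (baseline) and VAS 3 (final) scores for five respondents
vas1 = [60, 55, 80, 40, 65]
vas3 = [70, 48, 82, 40, 75]
summary = change_summary(vas1, vas3)  # {'increased': 2, 'decreased': 1, 'unchanged': 2}
```

Counting shifts in each direction separately is the point of this analysis: opposite-direction shifts that would cancel in a group mean remain visible here.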
This analysis was considered important as analysis of group means would only reflect a systematic change (i.e. a general increase or a general decrease in EQ-VAS scores). However, some individuals may have reported positive shifts while others reported negative shifts (depending on their response to the health state descriptors). If shifts in response occurred in a less uniform way such as this, these changes may cancel one another out, resulting in no significant mean change. Such a finding may mask response shifts that may have been interpreted as meaningful change in a clinical setting, where decisions are likely to be based on an individual patient's reported change. This is in contrast to changes in group means, which are more likely to affect the interpretation of clinical trial findings. To investigate mean EQ-VAS changes, two mixed 2x2 ANOVAs were also conducted. The first ANOVA investigated whether providing the good health descriptors had a different effect than providing the poor health descriptors and whether this was dependent on the order in which the descriptors were provided. To examine this, the first ANOVA investigated the main effects of Description (A versus B) and sequence (i.e. whether participants were in the group who received the best or worst health descriptors first), and an interaction effect between them. This analysis examined the change between the EQ-VAS rating taken after respondents were exposed to each set of health state descriptors (after Description A or B) and the EQ-VAS rating taken immediately prior to the provision of that set of descriptors. The second ANOVA investigated whether the final EQ-VAS rating after the provision of both good and poor health state descriptors (VAS 3) was different to the baseline EQ-VAS report (VAS 1) and whether this was dependent on the order in which the descriptors were provided. To examine this, the second ANOVA investigated the main effects of total change in HRQoL (VAS 3 - VAS 1) and sequence (i.e. 
group), and the interaction between total change in HRQoL and sequence (i.e. group).
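The within-patient power figure from the power analysis above can be illustrated with a small normal-approximation sketch. The 12-point standard deviation of paired change is an assumption for illustration (close to the change-score SDs later reported in the Results), not a parameter stated in the power analysis itself.

```python
from math import sqrt
from statistics import NormalDist

def paired_power(mean_diff, sd_diff, n, alpha=0.05):
    """Approximate two-sided power of a paired comparison via the normal
    approximation (very close to the t-based value at n = 150)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = mean_diff / sd_diff * sqrt(n)          # non-centrality parameter
    upper = 1 - NormalDist().cdf(z_crit - ncp)   # detect a positive shift
    lower = NormalDist().cdf(-z_crit - ncp)      # negligible lower tail
    return upper + lower

power = paired_power(mean_diff=3, sd_diff=12, n=150)  # roughly 0.86
```

The sketch shows why correlated within-patient responses help: power depends on the SD of the paired differences, which shrinks as the within-patient correlation grows.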
Results One hundred and fifty-one patients were enrolled in the study. All participants completed each assessment and were included in the analysis. The groups' baseline demographics were comparable (Table 1 ), with no mean difference in baseline EQ-VAS between groups (p = 0.30). Immediately after completing their baseline EQ-VAS, 74 (49%) participants reported that they had not considered what best imaginable health (top scale anchor) may be like and 85 (56%) had not considered what worst imaginable health (bottom scale anchor) may be like. Of those participants who did think of a best imaginable health state, 59 (77%) thought the set of good health descriptors (Description-A) was more extreme (better) than the health state they had previously considered as the top scale anchor. Of those participants who did think of a worst imaginable health state, 63 (95%) thought the set of poor health descriptors (Description-B) was more extreme (worse) than the health state they had previously considered as the bottom scale anchor. The number of participants in each group who changed their EQ-VAS report by 5 points or more after exposure to each of the health state descriptors is presented in Table 2 . The majority of patients in both groups either increased or decreased their VAS score after being exposed to the good and poor health state descriptors. When comparing the final EQ-VAS score after both sets of health descriptors had been provided (VAS 3) to their baseline score (VAS 1), 106 (70%) of all participants had a final health VAS self-report that differed by 5 points or more from their baseline VAS; 51 were from group one and 55 were from group two. The first ANOVA, investigating whether providing the good health descriptors had a different effect than providing the poor health descriptors, revealed that this main effect of Description (A versus B) was significant (df = 1,149; F = 11.88; p < 0.001). 
A slight difference between groups in response to the good health descriptors observed in Figure 2 (a slight increase for group one, a small decrease for group two) was not significant: both the main effect of sequence (df = 1,149; F = 0.24, p = 0.623) and the interaction (df = 1,149; F = 0.07, p = 0.793) were non-significant. Data from both groups combined indicated that the poor health descriptor set caused a mean (SD) increase in VAS score of 4.88 (11.81) points, while the good health descriptor set caused a mean (SD) decrease in VAS score of 0.35 (10.71) points, when compared with the VAS score immediately prior to that set of descriptors. The second ANOVA, which investigated the main effect of mean change in EQ-VAS after exposure to both sets of descriptors (VAS 3 - VAS 1), revealed that both groups' final mean EQ-VAS score was higher than their baseline EQ-VAS score (df = 1,149; F = 21.21; p < 0.001). The order in which the descriptors were received had no significant effect: both the main effect of sequence (df = 1,149; F = 2.11, p = 0.148) and the interaction effect (df = 1,149; F = 0.13, p = 0.723) were non-significant. The overall data from both groups combined indicated that the mean (SD) difference between the final EQ-VAS (VAS 3) and the baseline EQ-VAS (VAS 1) for all participants was 4.5 (12.0) points, with VAS 3 being higher. This is also illustrated in Figure 2 , where no substantial difference existed between the mean change scores from each group at the final assessment point (VAS 3).
Discussion Overall Outcome The findings from this investigation support our hypothesis that respondents frequently do not give consistent consideration to the health states which give meaning to a health state scale such as the EQ-VAS. This may have a substantial effect on how a respondent reports their HRQoL on rating scales of this nature. This investigation has been the first to demonstrate that patients' self-report of their own HRQoL can be substantially altered despite no actual change in their underlying health state occurring (Table 2 and Figure 1 ). A change in self-reported EQ-VAS rating was elicited for a large proportion of individuals merely by asking respondents to consider a set of health state descriptors (Table 2 ). As one would expect, the mean baseline EQ-VAS score (VAS 1) for this hospitalised patient sample was substantially lower than the previously reported population norm of 82.5 out of 100[ 44 ]. Despite anchors of best imaginable and worst imaginable health state being present in the standard application of this instrument, participants frequently did not consider what these anchors might represent. Overall, 133/151 (88%) and 148/151 (98%) of participants either reported that the descriptors of very good and very bad health states (respectively) were more extreme than they had previously considered for the respective end anchor points, or that they had not considered best and worst imaginable health states at all during standard completion of the EQ-VAS. Overall, 70% of participants changed their self-report of HRQoL on the 100-point scale by a margin of 5 points or more after being provided with detailed descriptors of both good and poor health states (Table 2 ). These changes were not uniform across individuals, with 79 (52%) increasing and 27 (18%) decreasing their EQ-VAS rating by 5 points or more. 
At the present time, there is no published value for the minimal clinically important difference on the EQ-VAS amongst this type of population. However, a change of this magnitude is comparable to what has previously been identified as clinically important change on this scale amongst other patient populations[ 45 - 49 ]. Furthermore, in the context of this population, a change of 5 points or greater represented a change of 8.5% or more of the mean baseline score. Thus this amount of change in self-reported HRQoL on this scale may well have been interpreted as clinically meaningful for up to 70% of participants, despite it being attributable to an acute shift in response rather than a change in underlying health. If this were observed in a clinical setting, these reports may have incorrectly been interpreted as improvement in HRQoL for individuals who increased their score, and as decline in HRQoL amongst those who decreased their score (Table 2 ). While it is unlikely that a patient will come across extreme health state descriptors between health assessments unless they are provided explicitly, other naturally occurring events (such as exposure to patients in an extremely poor health state while attending a hospital, watching television or elsewhere in the community) are likely to affect how a respondent completes a self evaluation of their own health state. Strengths and limitations A strength of this investigation lies in the methodology of employing a randomised crossover trial design for this novel examination of HRQoL evaluation. This has allowed for a methodologically rigorous investigation resulting in empirical evidence to support our hypothesis. This proof of concept is likely to contribute to future improvement in self-reported health evaluation methodology relevant to clinical settings, epidemiological investigations and health research utilising patient reported outcomes. 
However, the ability to directly generalise these results is limited by the population in this study being hospitalised older adults and by the use of a single rating scale (EQ-VAS) as the primary outcome. It is possible that other populations and rating scales may have been affected to a greater or lesser extent. However, given the high use of healthcare resources by this population and the widespread use of the EQ-5D instrument, the sample and EQ-VAS were appropriate for this investigation. Comparison to prior research The metric properties and theoretical basis of visual analogue rating scales for use in evaluating health states have been the subject of much investigation and debate[ 11 , 28 , 29 , 50 - 58 ]. Previous empirical work has demonstrated that EQ-VAS ratings can be dependent on the context in which they are presented when rating multiple hypothetical scenarios[ 28 ]. While that finding has important implications regarding the use of multi-item visual analogue scales for assigning utility values to hypothetical health states,[ 28 ] this investigation has been the first to highlight the risk of a reference-type bias influencing individuals' reports of their own HRQoL using a rating scale such as the EQ-VAS. The novel nature of this investigation limits the direct comparisons that can be made to previous empirical investigations of the response shift phenomenon. Research investigations in the response shift field have often focused on analysis of mean scores or changes at a group level[ 59 - 62 ] as opposed to changes at an individual level[ 8 , 17 , 63 ]. While this investigation found significant effects at a group level with changes in mean EQ-VAS ratings, non-uniform response shifts across a large proportion of individuals were also observed (Table 2 ). Findings from this study are consistent with previous investigations of social comparison, framing and order effects. 
It has previously been identified that self-reports of quality of life and HRQoL are dependent on social comparisons[ 64 - 67 ]. It is likely that the descriptions of good and poor health states presented in this investigation elicited an effect similar to previously described upward or downward social comparisons respectively[ 64 , 66 , 67 ]. The resultant change in EQ-VAS that occurred after this stimulus is also congruent with investigations of the framing effect[ 30 - 33 ]. While the current investigation did not alter the wording of the EQ-VAS to give a positive or negative valence, a similar effect is likely to have been elicited by the extreme health state descriptors provided between assessments. Interestingly, the order (sequence) in which the descriptors were provided in this investigation was not statistically significant. This is consistent with previous investigations that have revealed the order of instrument administration to be inconsequential[ 35 - 38 , 68 ]. Implications and future directions The EQ-VAS instrument was used in this investigation to illustrate how variable consideration during the evaluation process can cause substantially different reports of HRQoL, despite no actual change in underlying health. Rather than an indictment of this particular instrument (which is certainly not the intention of the authors), these results indicate that caution should be exercised when using subjective patient reported outcomes, such as those dependent on extreme anchors to give meaning to the value assigned to an individual health state. It is clear from the minimal consideration of the anchors by the respondents during the standard administration of the EQ-VAS, and their willingness to change their response after being asked to consider the health state descriptors in this study, that responses are frequently not well considered. 
It is possible that many respondents may have initially applied an unwritten qualifying context for the anchors, such as best or worst health 'that is possible for me,' 'that I have experienced,' 'for my age', or some other social comparator. Further investigation of what respondents considered would be useful to support or refute this speculation. Empirical evidence of this nature would be useful to inform future improvements in HRQoL evaluation methodology, and could be generated through qualitative analysis of a direct think-aloud approach or probing questions immediately following standard completion of the instrument[ 69 ]. Based on the findings of this investigation, it may be possible to promote consistent consideration of HRQoL scales by artificially creating a standardised frame of reference for an instrument. In the case of the EQ-VAS, respondents may be asked to consider a broad description of an extremely good and an extremely poor health state, like those used in this study, before completing the EQ-VAS. We are not suggesting that these health descriptors represent best and worst imaginable health. Rather, they may act as a stimulus for respondents to consider a spectrum of health components and give reasonable consideration to how extreme health states can be. If this occurred at each assessment, it may promote consistent consideration of the instrument. Considering the spectrum of health components included in the health state descriptors may potentially reduce reconceptualisation and reprioritisation, while considering how bad (or good) each of the health components can be may help reduce recalibration. Further investigation in this area is warranted, and would most likely require the use of custom-designed evaluation measures or approaches. Further research is also indicated to determine whether extreme health states which give meaning to health rating scales are frequently not considered amongst other patient populations. 
Investigation of the issues addressed in this manuscript should also be examined amongst other patient reported outcomes including pain and fatigue.
Conclusions Subjective health state evaluations may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state despite no actual change in underlying health state occurring. Caution should be exercised when interpreting change in subjective patient reported outcomes in research and clinical settings; particularly those dependent on brief extreme anchors to give meaning to assigned values.
Background Clinical practice and clinical research have made a concerted effort to move beyond the use of clinical indicators alone and embrace patient-focused care through the use of patient reported outcomes such as health-related quality of life. However, unless patients give consistent consideration to the health states that give meaning to the measurement scales used to evaluate these constructs, longitudinal comparison of these measures may be invalid. This study aimed to investigate whether patients give consideration to a standard health state rating scale (EQ-VAS) and whether consideration of good and poor health state descriptors immediately changes their self-report. Methods A randomised crossover trial was implemented amongst hospitalised older adults (n = 151). Patients were asked to consider descriptions of extremely good (Description-A) and poor (Description-B) health states. The EQ-VAS was administered as a self-report at baseline, after the first set of descriptors (A or B), then again after the remaining set of descriptors (B or A respectively). At baseline, patients were also asked if they had considered either of the EQ-VAS anchors. Results Overall, 106/151 (70%) participants changed their self-evaluation by ≥5 points on the 100-point VAS, with a mean (SD) change of +4.5 (12) points (p < 0.001). A total of 74/151 (49%) participants did not consider the best health VAS anchor; of the 77 who did, 59 (77%) thought the good health descriptors were more extreme (better) than they had previously considered. Similarly, 85/151 (56%) participants did not consider the worst health anchor; of the 66 who did, 63 (95%) thought the poor health descriptors were more extreme (worse) than they had previously considered. Conclusions Health state self-reports may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state despite no actual change in underlying health state occurring. 
Caution should be exercised in research and clinical settings when interpreting subjective patient reported outcomes that are dependent on brief anchors for meaning. Trial Registration Australian and New Zealand Clinical Trials Registry (#ACTRN12607000606482) http://www.anzctr.org.au
Competing interests The authors declare that they have no competing interests. Authors' contributions All authors contributed to the conception of research idea and planning of research processes. SM (and research assistants) contributed to data collection. SM and TH contributed to data analysis. SM prepared the manuscript. All authors contributed to manuscript review, appraisal and editing. Supplementary Material
Acknowledgements None
Health Qual Life Outcomes. 2010 Dec 2; 8:146
Introduction Hereditary hearing loss is a genetically heterogeneous disorder in humans, with an incidence of approximately 1 in 1000 children [ 1 ]. Nonsyndromic deafness accounts for 60-70% of cases of inherited hearing impairment and involves 114 loci and 55 different genes with autosomal dominant (DFNA), autosomal recessive (DFNB), X-linked (DFN), and maternal inheritance patterns [ 2 ]. The most common causes of nonsyndromic autosomal recessive hearing loss are mutations in connexin 26, a gap-junction protein encoded by the GJB2 gene [ 3 - 10 ]. To date, more than 150 mutations, polymorphisms, and unclassified variants have been described in the GJB2 gene, which account for the molecular etiology of 10-50% of patients with nonsyndromic hearing impairment ( http://davinci.crg.es/deafness ). Therefore, GJB2 is normally the first gene to be tested in patients with hearing loss. In China, the proportion of patients carrying mutations in the coding exons of GJB2 is 21% (biallelic, 14.9%; monoallelic, 6.1%) [ 11 ]. However, few studies have examined the noncoding exon 1 of GJB2 in Chinese hearing-impaired patients, and even fewer studies have investigated the promoter region of this gene. The results of GJB2 screening performed to date have indicated that a substantial fraction of patients (6-15%) carry only one pathogenic mutation in the GJB2 gene, with either recessive or unclear pathogenicity, despite direct sequencing of the entire coding region of the gene [ 12 - 14 ]. A 309-kb deletion involving the GJB6 gene, now called del(GJB6-D13S1830), was shown to be the second causal mutation in these monoallelic heterozygous patients in Spain and France [ 15 , 16 ]. 
Previously, we tested Chinese patients with only one monoallelic mutation in the coding region of GJB2 for the presence of this deletion, but the results indicated it to be a very rare cause of hearing loss in the Chinese population and not a major additional factor in our monoallelic patients (unpublished). Similar results have also been reported in Austria and the Czech Republic [ 17 , 18 ]. The splice site mutation IVS1+1G>A, also called the -3170 G>A mutation, in the GJB2 gene was originally reported by Denoyelle et al . [ 19 ]. This splice site mutation has been found in several populations [ 20 - 26 ] and is predicted to disrupt splicing, yielding no detectable mRNA [ 20 ]. Not all genetic laboratories routinely test for this mutation, which lies outside the coding region of the GJB2 gene. This study focused on clarifying the impact of the GJB2 IVS1+1G>A mutation and of variants in the promoter region of this gene among Chinese patients with hearing loss, especially those with a pathogenic mutation in only one allele of the GJB2 coding region.
Materials and methods Patients and DNA samples A total of 212 deaf subjects with monoallelic mutation in the coding region of GJB2 and 262 unrelated nonsyndromic hearing loss patients without GJB2 mutation from unrelated families were included in this study. The 212 deaf subjects with monoallelic mutation, mainly frameshift and nonsense mutations, in the coding region of GJB2 were screened from a total of 7133 nonsyndromic hearing loss cases in China (Table 1 ). Of the 7133 cases, 3433 were collected from 28 different regions, covering 90% of the provinces in China; 3700 were patients of the Genetic Testing Center for Deafness, PLA General Hospital, during the period from March 2002 to December 2010. The majority of the 7133 patients were Han Chinese (6540), followed by Southwest Chinese minorities (134, including Buyi, Hani, Yao, Yi, Bai, Wa, Miao, Dong, Tujia, Lahu, Dai, Bulang, Sala, etc.), Tibetan (123), Hui (113), minorities from the Xinjiang Uyghur Autonomous Region (77), Mongolian (63), Maan (51), Chuang (27), and Korean (5). Ethnic subgroup designations were based on permanent residency documentation. The 212 deaf patients consisted of 123 males and 90 females from 0.2 to 67 years old, with an average age of 5.41 ± 1.78 years. Ethnically, the patients consisted of 196 Han, 4 Hui, 3 Uygur, 3 Mongolian, 2 Tibetan, 2 Maan, 1 Miao, 1 Chuang, and 1 Buyi Chinese. The 262 unrelated nonsyndromic hearing loss patients without GJB2 coding region mutation were selected randomly from patients of the Genetic Testing Center for Deafness, PLA General Hospital, during the year 2007. This cohort consisted of 147 males and 115 females from 2 to 46 years old with an average age of 4.52 ± 1.16 years, and ethnically, they were all Han Chinese. The study protocol was performed with the approval of the Ethics Committee of the Chinese PLA General Hospital. Informed consent was obtained from all subjects prior to blood sampling. 
The parents of pediatric patients were interviewed with regard to age of onset, family history, mother's health during pregnancy, and patient's clinical history, including infection, possible head or brain injury, and the use of aminoglycoside antibiotics. All subjects showed moderate to profound bilateral sensorineural hearing impairment on audiograms. Careful medical examinations revealed no clinical features other than hearing impairment. DNA was extracted from the peripheral blood leukocytes of the 474 (212 + 262) patients with nonsyndromic hearing loss and 105 controls with normal hearing using a commercially available DNA extraction kit (Watson Biotechnologies Inc., Shanghai, China). Mutational analysis The coding exon (exon 2) and flanking intronic regions of GJB2 gene were amplified by PCR with the primers F (5'TTG-GTG-TTT-GCT-CAG-GAA-GA-3') and R (5'GGC-CTA-CAG-GGG-TTT-CAA-AT-3') in all 7133 nonsyndromic hearing loss cases. The GJB2 exon 1, its flanking donor splice site and the GJB2 basal promoter were amplified with the primers F (5'CTC-ATG-GGG-GCT-CAA-AGG-AAC-TAG-GAG-ATC-GG-3') and R (5'GGG-GCT-GGA-CCA-ACA-CAC-GTC-CTT-GGG-3') in all subjects with monoallelic mutation in the coding region of GJB2 , 262 unrelated nonsyndromic hearing loss patients without GJB2 mutation, and 105 normal controls. All the patients and controls were also tested for GJB6 309-kb deletion and the coding exon of GJB6 . The presence of the 309-kb deletion of GJB6 was analyzed by PCR [ 15 , 27 ]. A positive control (provided by Balin Wu, Department of Laboratory Medicine, Children's Hospital Boston and Harvard Medical School, Boston, MA) was used for detection of GJB6 gene deletions. The coding exon of GJB6 was amplified with the primers F (5' TTG-GCT-TCA-GTC-TGT-AAT-ATC-ACC-3') and R (5' TCA-TTT-ACA-AAC-TCT-TCA-GGC-TAC-AG-3'). 
All the PCR products were purified on Qia-quick spin columns (Qiagen, Valencia, CA) and sequenced using a BigDye Terminator Cycle Sequencing kit (version 3.1) and an ABI 3130 automated DNA sequencer (Applied Biosystems, Foster City, CA) with sequence-analysis software (Sequencing Analysis version 3.7) according to the manufacturer's protocol. Mitochondrial 12S rRNA and SLC26A4 were also sequenced in the 262 unrelated nonsyndromic hearing loss patients without GJB2 coding region mutation. DNA sequence analysis of mitochondrial 12S rRNA and SLC26A4 was performed by PCR amplification of the coding exons plus approximately 50-100 bp of the flanking intron regions, followed by BigDye sequencing and analysis using an ABI 3100 DNA sequencer (ABI, Foster City, CA, USA) and ABI 3100 Analysis Software v.3.7 NT according to the manufacturer's procedures.
Results Hearing phenotype Deafness in 10.8% (767/7133) of the 7133 nonsyndromic hearing loss patients was postlingual and in 89.2% (6366/7133) was prelingual. The percentage of postlingual hearing loss in the group of 212 nonsyndromic hearing loss patients with monoallelic mutation in the coding region of GJB2 was 6.6% (14/212), and that of prelingual hearing loss was 93.4% (198/212). The percentage of postlingual hearing loss in the group of 262 nonsyndromic hearing loss patients without GJB2 coding region mutation was 8% (21/262), and that of prelingual hearing loss was 92% (241/262). The average onset age of postlingual hearing loss in the 7133-patient cohort was 3.19 ± 1.56 years; in the 212-patient group with monoallelic mutation in the coding region of GJB2 and the 262-patient group without GJB2 coding region mutation it was 2.78 ± 1.06 years and 3.04 ± 2.39 years, respectively. All of the 212 unrelated patients with monoallelic GJB2 coding region mutation, as well as the 262 unrelated nonsyndromic hearing loss patients without GJB2 coding region mutation, showed bilateral moderate to profound sensorineural hearing loss. None of the patients in this study showed clinical signs in any other organs except hearing impairment. Genetic results By direct sequencing analysis of 7133 Chinese patients with hearing impairment, we found 212 unrelated patients with monoallelic GJB2 coding region mutation. All 212 patients carried frameshift or nonsense pathogenic mutations leading to insertion of a premature stop codon. The detailed genotypes of the 212 patients are shown in Table 1 . We detected four patients carrying the IVS1+1G>A mutation in the heterozygous state in addition to their already known c.235delC, c.35delG, and W3X mutations, respectively (two of these patients both carried the c.235delC mutation). One novel variant in GJB2 exon 1, -3175 C>T, was detected in a patient with the 235delC mutation. No mutations or variants in the GJB2 basal promoter region were found in this study. 
In three of the compound heterozygotes carrying IVS1+1G>A and a pathogenic mutation in exon 2 of GJB2 , the separate segregation of each allele was confirmed in either the parents or the patients' siblings (Table 2 ). We could not obtain pedigree blood samples for only one patient with the GJB2 IVS1+1G>A/35delG mutation. This patient was of the Uygur ethnic minority from the Xinjiang Uyghur Autonomous Region. In the patient whose genotype was IVS1+1G>A,c.11G>A(G4D)/c.9G>A(W3X), we confirmed the result by analysis of both of the proband's parents' alleles: the father carried both IVS1+1G>A and c.11G>A(G4D) in one allele and the mother carried c.9G>A(W3X) in one allele, while the opposite alleles of both parents were wild-type. After inclusion of the IVS1+1G>A mutation in our detection procedure, the percentage of individuals with bilateral sensorineural hearing loss with only one monoallelic frameshift or nonsense mutation in GJB2 decreased from 2.97% (212/7133) to 2.92% (208/7133). Among the 262 patients without GJB2 mutation, four carried the mitochondrial 12S rRNA A1555G mutation, and 19 carried SLC26A4 mutations and were diagnosed as having enlarged vestibular aqueduct by temporal CT scan. None of these patients was found to carry the GJB2 IVS1+1G>A mutation. One patient was shown to carry the GJB6 c.404C>A mutation (T135K); this patient had no mutation in mitochondrial 12S rRNA or SLC26A4 and was of the Uygur ethnic minority from the Xinjiang Uyghur Autonomous Region. In the control group, we detected two c.235delC heterozygotes and one c.299delAT heterozygote, representing 3%, which coincided with our previous results in a different control cohort [ 11 ]. No GJB2 IVS1+1G>A mutation was detected in the control group. A GJB6 variant, c.446 C>T (A149V), was detected in an individual of the Uygur ethnic minority. 
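The reclassification arithmetic reported above can be checked directly; all counts are taken from this cohort.

```python
total_patients = 7133
monoallelic_before = 212   # patients with only one GJB2 coding-region mutation
ivs1_carriers = 4          # second mutation (IVS1+1G>A) found outside exon 2

# Reclassifying the four IVS1+1G>A carriers as biallelic reduces the
# "unexplained monoallelic" fraction of the cohort.
monoallelic_after = monoallelic_before - ivs1_carriers
rate_before = round(100 * monoallelic_before / total_patients, 2)  # 2.97
rate_after = round(100 * monoallelic_after / total_patients, 2)    # 2.92
```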
We did not find the 309-kb deletion of GJB6 in any of the 212 patients with monoallelic GJB2 coding region mutation or in any of the 105 samples from normal hearing controls with no history of hearing loss.
Discussion The GJB2 gene is composed of two exons separated by an intron, and the coding region is entirely contained in exon 2. The basal promoter activity resides in the first 128 nucleotides upstream of the transcription start point (TSP) and has two GC boxes, at positions 281 and 293 from the TSP, which are important for transcription [28]. Most of the GJB2 sequence variations described to date are localized in the coding region, and only a few have been reported in noncoding regions of the gene [19,23,29-31]; mutational screening performed to date has therefore usually focused on the coding region. GJB2 is responsible for up to 21% of cases of deafness in the Chinese population [12]. The most common mutation is a frameshift due to deletion of a single cytosine at position 235 (c.235delC). The four most prevalent mutations (c.235delC, c.299_c.300delAT, c.176_c.191del16, and c.35delG) account for 88.0% of all mutant GJB2 alleles identified in China [11]. Sequence analysis of the GJB2 gene in subjects with autosomal recessive hearing impairment has revealed a puzzling problem: a large proportion of patients (6-15%) carry only one mutant allele [14-17]. Some of these families showed clear evidence of linkage to the DFNB1 locus, which contains two genes, GJB2 and GJB6 [3]. Further analysis demonstrated a 309-kb deletion near GJB2, truncating the GJB6 gene (encoding connexin 30), in heterozygous affected subjects [18,19]. We previously tested Chinese patients with only one monoallelic mutation in the coding region of GJB2 for the presence of this deletion, but it proved to be a very rare cause of deafness in the Chinese population. Similar results have been reported in populations in Turkey, Iran, Austria, Taiwan, China, Poland, and the Altai Republic [25,32-39]. Cases with one pathogenic mutation in the GJB2 gene may harbor another, as yet unidentified, pathogenic mutation in the promoter region or other noncoding regions of GJB2.
To evaluate the impact of the IVS1+1G>A splice-site mutation and the basal promoter region in the noncoding part of the GJB2 gene among Chinese patients, we initially carried out sequencing of GJB2 exon 1 in 851 deaf individuals from Central China; no mutation was found [11], suggesting a very low detection rate of GJB2 exon 1 mutations in the Chinese deaf population. We therefore began to collect and test all available nonsyndromic hearing loss patients with only one monoallelic pathogenic mutation in the coding part of GJB2. By sequencing exon 1 and the basal promoter region of the GJB2 gene in 212 Chinese patients with a GJB2 monoallelic mutation, we identified four patients carrying the IVS1+1G>A mutation. Testing for this mutation explained deafness in 1.89% of Chinese GJB2 monoallelic patients. This ratio is significantly lower than the value of 45% in Czech patients with one pathogenic mutation in GJB2 [40] and the 23.40% of Hungarian patients carrying a mutation in only one allele of the coding region of the GJB2 gene [41]. It is also lower than the value of 4.6% among Brazilian patients with one pathogenic GJB2 mutation [42]. The IVS1+1G>A mutation accounted for 1.85% (4/216) of mutant alleles in our patient cohort, whereas in the Kurdish deaf population this percentage is 9.4% (3/32) [26], significantly higher than in the Chinese population. In the Mongolian population, the frequency of deaf probands carrying two GJB2 pathogenic mutations was 4.5% [43], significantly lower than that (14.9%) in the Chinese deaf population, and the GJB2 mutation spectrum also differs from that in China. The most common GJB2 mutation in the Mongolian deaf population was IVS1+1G>A, with an allele frequency of 3.5% [43], whereas c.235delC was the most common mutation in the Chinese deaf population, with an allele frequency of 12.34% [11], significantly higher than the 1.5% observed in the Mongolian deaf population [43].
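The text does not state which test underlies the "significantly higher" comparison of allele frequencies. As an illustration only (not the authors' analysis), the Chinese (4/216) and Kurdish (3/32) mutant allele counts can be compared with a one-sided Fisher exact test, implemented here from the hypergeometric distribution:

```python
from math import comb

def fisher_one_sided(k_obs, n_sample, k_total, n_total):
    """P(X >= k_obs) for a hypergeometric draw of n_sample alleles
    out of n_total, of which k_total are mutant (one-sided Fisher
    exact test on the 2x2 allele-count table)."""
    denom = comb(n_total, n_sample)
    return sum(
        comb(k_total, k) * comb(n_total - k_total, n_sample - k) / denom
        for k in range(k_obs, min(k_total, n_sample) + 1)
    )

# Mutant IVS1+1G>A alleles: 4/216 (Chinese cohort) vs 3/32 (Kurdish [26]).
# Pooled: 248 alleles total, 7 mutant; the Kurdish sample is the "draw".
chinese_freq = 100 * 4 / 216   # ~1.85%
kurdish_freq = 100 * 3 / 32    # ~9.4%
p = fisher_one_sided(k_obs=3, n_sample=32, k_total=7, n_total=248)
print(round(chinese_freq, 2), round(kurdish_freq, 1), round(p, 3))
```

With these counts the one-sided p-value comes out just under 0.05; a chi-square test on the same 2x2 table would be an alternative way to formalize the comparison.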
The differences between these two neighboring Asian countries may lie in two aspects: a) the genetic backgrounds of the two populations differ; and b) in our study, the IVS1+1G>A mutation was screened only in hearing loss patients with a monoallelic mutation (mainly frameshift and nonsense mutations) in the coding region of GJB2. These observations indicate that the carrier rate of the GJB2 IVS1+1G>A mutation varies among populations. We also tested for the IVS1+1G>A mutation in 262 unrelated nonsyndromic hearing loss patients without a GJB2 ORF mutation and in 105 normal controls, but neither homozygous nor heterozygous IVS1+1G>A mutations were found. The IVS1+1G>A mutation may account for the genetic etiology only in patients with a GJB2 monoallelic pathogenic mutation in the Chinese deaf population, which suggests that the frequency of the IVS1+1G>A mutation is very low in the Chinese population. Matos et al. [44] reported a GJB2 mutation, -3438C>T, located in the basal promoter of the gene, in trans with V84M, in a patient with profound hearing impairment. They verified that the -3438C>T mutation can abolish the basal promoter activity of GJB2. Although we extended mutational screening to GJB2 exon 1, its flanking donor splice site, and the GJB2 basal promoter, we found no mutation other than one c.-3175C>T variant in exon 1 and four heterozygous IVS1+1G>A mutations. As the c.-3175C>T variant is in the noncoding region, it was taken to be nonpathogenic. There are two reasons why the percentage of monoallelic mutation in the GJB2 gene in our cohort was lower than our previously reported figure of 6% [11]. a) In this study, we counted only clearly pathogenic frameshift and nonsense mutations; if all missense mutations that were absent, or present at a significantly low carrier rate, in the normal hearing controls were also counted, the rate increased to 5.5%.
b) Additionally, about 13% of patients had moderate hearing loss, whereas all the patients in our previous study [11] showed severe to profound hearing impairment. Through genotype and phenotype analysis of 1093 unrelated, nonsyndromic Chinese individuals with hearing loss, GJB2 mutations were detected in 24.67% (130/527) of patients with bilateral profound hearing loss, 22.33% (44/197) with bilateral severe hearing loss, 14.33% (42/293) with bilateral moderate hearing loss, and 6.58% (5/76) with bilateral mild hearing loss (unpublished data). The differences between the severe to profound hearing loss group and the mild to moderate hearing loss group were statistically significant. In this patient group, the total percentage of GJB2 mutations in all 1093 cases was 20.22% (221/1093), similar to that in our previous study [11]. Additionally, the patients in the above two cohorts did not overlap. There are three possible explanations for the failure to detect a second mutant allele in the 208 cases in the present study. a) The second mutant allele has not yet been identified because it lies deep in introns that were not sequenced. b) A digenic pattern of inheritance may be responsible for these cases; the second mutation may lie in a connexin gene other than GJB6 or in another gene whose product interacts with connexin 26. Clearly, this hypothesis cannot be verified until the other mutant alleles have been found. c) Some of these heterozygous probands are simply carriers, and their hearing impairment may have other causes.
Conclusion Testing for the GJB2 IVS1+1G>A mutation explained deafness in 1.89% of Chinese GJB2 monoallelic patients. Although this percentage is not as high as those reported in Western and Mongolian populations, the mutation can still serve as a routine testing point in patients with a GJB2 monoallelic pathogenic mutation in China.
Background Mutations in the GJB2 gene are the most common cause of nonsyndromic recessive hearing loss in China. In about 6% of Chinese patients with severe to profound sensorineural hearing impairment, only monoallelic GJB2 mutations known to be either recessive or of unclear pathogenicity have been identified. This paper reports the prevalence of the GJB2 IVS1+1G>A mutation in a population of Chinese hearing loss patients with a monoallelic pathogenic mutation in the coding region of GJB2. Methods Two hundred and twelve patients with a monoallelic mutation (mainly frameshift and nonsense mutations) in the coding region of GJB2, screened from 7133 cases of nonsyndromic hearing loss in China, were examined for the GJB2 IVS1+1G>A mutation and for mutations in the promoter region of this gene. Two hundred and sixty-two nonsyndromic hearing loss patients without GJB2 mutations and 105 controls with normal hearing were also tested for the GJB2 IVS1+1G>A mutation by sequencing. Results Four patients with a monoallelic mutation in the coding region of GJB2 were found to carry the GJB2 IVS1+1G>A mutation on the opposite allele. One patient with the GJB2 c.235delC mutation carried one variant, -3175C>T, in exon 1 of GJB2. Neither the GJB2 IVS1+1G>A mutation nor any variant in exon 1 of GJB2 was found in the 262 nonsyndromic hearing loss patients without GJB2 mutations or in the 105 normal hearing controls. Conclusion Testing for the GJB2 IVS1+1G>A mutation explained deafness in 1.89% of Chinese GJB2 monoallelic patients, and it should be included in routine testing of patients with a GJB2 monoallelic pathogenic mutation.
Conflict of interest statement The authors declare that they have no competing interests. Authors' contributions YY, FY, GW, SH, RY and XZ carried out the molecular genetic studies and participated in sequence alignment. YY drafted the manuscript. DeHu and DoHa participated in the design of the study. PD conceived the study, participated in its design and coordination, and helped draft the manuscript. All authors have read and approved the final manuscript.
Acknowledgements This work was supported by Chinese National Nature Science Foundation Research Grant (30572015, 30728030, 31071109), Beijing Nature Science Foundation Research Grant (7062062) to Dr. Pu Dai, Chinese National Nature Science Foundation Research Grant (30801285) and Beijing Nova programme (2009B34) to Dr. Yongyi Yuan.
J Transl Med. 2010 Dec 2; 8:127
PMC3014892 (PMID: 21138581)
Introduction Over the last decade, cancer therapies that target specific molecular pathways or specific cell types have moved from the laboratory into clinical practice. Similarly, biomarkers that may indicate suitable patient populations for these therapies or act as surrogates for the potential development of a clinical response are increasingly used in the clinic. The clinical application of biomarkers to assess the effect of immune-based cancer therapies is important for several reasons. First, immune-based treatments, such as vaccines, are often designed to elicit a specific response, so that measurement of that response could be a marker of product (e.g., vaccine) potency. Second, as immune-based therapies are tested earlier in the therapeutic pathway (e.g., in the adjuvant setting), biomarkers of response become increasingly important as potential endpoints of clinical trials. Finally, clinically qualified biomarkers are needed so that new immunotherapies can be rapidly and efficiently tested and translated to clinical practice. As laboratory-based assays are transitioned to clinical assays, several issues arise. The assays must be robust. The clinical samples collected for analysis must be processed in a uniform way to ensure reproducibility of results. Results must be reported in a detailed and uniform way. Newly developed assays that allow broad analysis of multiple immune parameters must now be better utilized. The lessons learned from biomarker studies in fields such as HIV/AIDS and other infectious diseases must be better incorporated into cancer immunotherapy studies. To address these and other issues related to the development and application of biomarkers in cancer immunotherapy, the International Society for Biological Therapy of Cancer (iSBTc, recently renamed the Society for Immunotherapy of Cancer, SITC) hosted a one-day symposium at the National Institutes of Health on September 30, 2010.
The symposium, titled Immuno-Oncology Biomarkers 2010 and Beyond: Perspectives from the iSBTc/SITC Biomarker Task Force, was organized by Lisa H. Butterfield, PhD (University of Pittsburgh), Mary L. Disis, MD (University of Washington), Samir N. Khleif, MD (National Cancer Institute, CCR) and Francesco Marincola, MD (National Institutes of Health, CC, DTM). This program was a direct extension of the efforts of the iSBTc/SITC Biomarkers Taskforce [1,2], which recently published a collaborative report of its 2009 Workshop (iSBTc-FDA-NCI Workshop on Prognostic and Predictive Immunologic Biomarkers in Cancer) and the recommendations that resulted from the work of the Taskforce [3]. SITC President Bernard A. Fox, PhD (Earle A. Chiles Research Institute) initiated the symposium with a presentation on the critical hurdles in cancer immunotherapy that delay the testing in patients of scientific discoveries with strong evidence of antitumor effects in preclinical models. As an extension of the 2009 iSBTc-FDA-NCI Workshop on Biomarkers, SITC and collaborating organizations had identified seven critical hurdles to the effective translation of cancer immunotherapy: 1) the inadequacy of animal models as predictors of efficacy; 2) the prolonged time to obtain approval for clinical trials; 3) the complexity of cancer biology/immunology; 4) the inability to obtain approval to combine the most promising new agents in trials; 5) the lack of definitive biomarker(s) for assessment of clinical efficacy; 6) the paucity of translational research teams; and 7) the insufficient exchange of information critical to advancing the field. Fox discussed each of these problems and stressed the need to intensify collaboration to define potential solutions.
Accordingly, following the symposium (October 1, 2010), SITC hosted a Collaboration Summit with representatives from nine other domestic and international associations with similar interests in promoting research and translation of cancer immunotherapy (see Appendix). In an effort spearheaded by Fox on behalf of SITC, the collaborating associations are preparing a joint publication that further defines these critical hurdles to cancer immunotherapy and joint initiatives to overcome the identified barriers. Samir N. Khleif, MD (National Cancer Institute, Center for Cancer Research) spoke briefly on the priorities in biomarker development in immunotherapy. He started by identifying the gaps between the ideal setting/goals of immunotherapy and its current state, and the role that biomarkers may play in bridging those gaps. He characterized current immunotherapy/vaccine approaches as highly empirical in their design, partly a result of an incomplete understanding of the immune system's response to therapy and its consequent interaction with the tumor microenvironment, and of a lack of understanding of effective immune endpoint measurements. He described the complexity of immunotherapeutics compared with other types of cancer-targeted therapy: to generate a meaningful clinical response, immunotherapy agents must interact with the immune system, the tumor microenvironment, and the tumor itself. This further reflects the complexity of developing biomarkers for immunotherapy and the need for a wider array of biomarkers than the standard set needed for development of cancer-targeted therapy (diagnostic, predictive, metabolism and outcome biomarkers). Immunotherapy may also require selecting biomarkers (e.g., to identify patients expressing a specific antigen and the ability to express that antigen), as well as biologic response biomarkers that determine the ability to generate an immune response to the therapy, which is needed for tumor response.
He also addressed the complex variability of "effective" immune response biomarkers and the question of which biomarkers would predict susceptibility to the generation of an effective immune response. A major effort is required to integrate immune profile biomarkers into clinical trial design, with better strategies to correlate them with objective responses. Further, a biomarker development process should be defined. Khleif concluded his presentation by identifying the following critical areas for biomarker development: biospecimens; analytical performance/validation; standardization and harmonization; collaboration and data sharing; regulatory issues/science policy; and integration of biomarkers into clinical design/qualification [4]. Immunologic Monitoring: Standardization and Validation of Assays Lisa H. Butterfield, PhD (University of Pittsburgh) chaired a session on standardization and validation of assays for immunological monitoring and delivered the first presentation in the session. In this update from the 2009 iSBTc Workshop, Butterfield summarized work completed by the iSBTc/SITC Biomarkers Taskforce, which included the recent preparation of the society's position paper Recommendations from the iSBTc-SITC/FDA/NCI Workshop on Immunotherapy Biomarkers [3]. Roadblocks to developing immunotherapy biomarkers include the inherent variability of patients; variability in the collection and processing of their blood and tissues; variability in the selection and conduct of assays; and variability in the information on samples and assays reported in clinical trial and biomarker study manuscripts.
The Taskforce recommendations include suggestions for ways to minimize variability: standardized methods for blood and tissue processing and banking; standardized functional assays; thorough reporting of details and controls in publications; and banking not only of blood and serum but also of patient DNA, tumor cells and tumor RNA (to determine patient genotypes and tumor gene expression profiles), as well as sufficient blood and serum for testing novel assays under development and for hypothesis generation. Paul V. Lehmann, MD, PhD (Cellular Technology Limited, Shaker Heights, OH), discussed the challenges of T cell monitoring: determining what parameters to measure, how to measure them, and, most importantly, how to measure them precisely and reproducibly. He focused on the milestones that have led to the successful standardization of enzyme-linked immunosorbent spot (ELISPOT) assays. These milestones included: 1) the development of protocols for the freezing of peripheral blood mononuclear cells (PBMCs) without functional loss; 2) the development of a library of reference PBMCs for assay comparisons, qualification/validation, and harmonization across institutions; 3) the development of serum-free media for all steps of PBMC processing and testing; 4) the development of objective, automated analysis; 5) the development of ELISPOT assay qualification, validation, and high-throughput testing; and 6) the demonstration that a unified platform suffices for obtaining highly reproducible ELISPOT data across technicians and institutions. Representing the Association for Cancer Immunotherapy (CIMT), Cedrik M. Britten, MD (University Medical Center of the Johannes Gutenberg-University and BioNTech AG, Mainz, Germany) presented on harmonization of immunological monitoring across institutions. Britten reviewed the CIMT Immunoguiding Program (CIP), a proficiency panel program with 40 participating laboratories in 12 European countries.
The aims of this program are to promote: 1) quality assurance, by providing immediate feedback about performance relative to the group (or to a dynamic reference value); 2) assay harmonization, by using the collected data to systematically investigate the performance of subgroups and deduce harmonization guidelines; and 3) protocol optimization, by using the collected data to systematically identify critical process steps. Britten presented CIP recommendations for harmonization of ELISPOT, which included: refraining from using allogeneic antigen presenting cells (APCs), using triplicate wells for each antigen, introducing a resting time for the PBMCs before they are added to the ELISPOT plate, adding an optimal cell number per well (≥ 4 × 10^5 lymphocytes per well), using serum-free test conditions, and using a scientifically sound method for response determination. Large-scale harmonization initiatives may lead to dynamic reference values for ranking test performance, increased comparability of results generated across institutions, and improved assay performance within a group, thereby potentially accelerating clinical development of new cancer immunotherapies. Britten also discussed the Minimal Information About T cell Assays (MIATA) initiative, which is part of a larger effort of "Minimal Information" projects for different types of data sets. The assay harmonization efforts conducted over the past five years have led to the identification of several critical experimental process steps. As a consequence, MIATA was launched as a community-driven reporting framework for T cell experiments [5]. Published reports of T cell experiments, suggested Britten, should include sufficient information on all critical test variables and process steps, as agreed upon by a panel of participants through a web-based iterative process with broad input from the immunotherapy field.
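The "scientifically sound method for response determination" recommended by CIP is not spelled out here. One common style of empirical rule, shown below purely as an illustration (the thresholds are assumptions, not CIP-recommended values), compares replicate antigen wells against replicate medium-control wells:

```python
from statistics import mean

def is_elispot_response(antigen_spots, control_spots,
                        fold_threshold=2.0, min_difference=10):
    """Illustrative ELISPOT response call: the mean spot count of the
    antigen wells must exceed the mean of the medium-control wells by
    both a fold change and an absolute difference. Threshold values
    are hypothetical, for illustration only."""
    ag, ctrl = mean(antigen_spots), mean(control_spots)
    return ag >= fold_threshold * ctrl and ag - ctrl >= min_difference

# Triplicate wells per antigen, as recommended by CIP
print(is_elispot_response([85, 92, 78], [12, 9, 15]))  # clear response
print(is_elispot_response([14, 11, 16], [12, 9, 15]))  # no response
```

More rigorous statistical approaches (e.g., distribution-free resampling over replicate wells) exist and would fit the CIP call for a scientifically sound method better than fixed empirical thresholds.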
Session 1 finished with a panel discussion with the audience, led by Butterfield, Lehmann, Britten, Sylvia Janetzki, MD (Zellnet Consulting, Inc., Fort Lee, NJ, and the CIC), and Michael Kalos, PhD (University of Pennsylvania). Correlation of Immunity to Clinical Response and Potency Assays In a session focused on correlating immunity to clinical responses and potency assays, chaired by Mary L. Disis (University of Washington), Raj K. Puri, MD, PhD (Division of Cellular and Gene Therapies, Office of Cellular, Tissue and Gene Therapies, CBER, FDA) first discussed the FDA's considerations on potency and immune monitoring for cancer vaccines and cancer immunotherapy products. He discussed the importance of full product characterization, including development of potency assays according to FDA regulations, in successful product development. Puri discussed approaches for potency measurements, including: 1) direct measurement of biological activity with in vitro or in vivo bioassays; 2) indirect measurement (i.e., a surrogate assay) of biological activity using analytical, non-bioassays that are correlated to biological activity; and 3) the combination of multiple assays (biological or analytical assays whose combined results constitute an acceptable potency assay). Successful potency assays indicate biological activity(ies) specific and relevant to the product and measure the activity of all components deemed necessary for in vivo activity. Potency assays must provide a quantitative readout, indicate product stability, and meet predefined acceptance and/or rejection criteria. Results must be available in time for lot release. Importantly, fully developed potency assays are required prior to the initiation of Phase 3 clinical trials so that they may be validated during Phase 3 trials.
Puri summarized possible approaches to the successful development of potency assays, emphasizing the need to identify functional biomarkers (e.g., biomarkers that correlate with in vitro differentiation and/or detect functional cells in complex mixtures). These may include the development of genomic or proteomic techniques to identify functional biomarkers, assessment of unique biochemical markers and secreted proteins, and/or flow cytometric assessment of cell phenotype for purity, which may link to identity and/or potency. Immunological monitoring during the development and evaluation of cancer immunotherapies can support proof of concept, advance understanding of immunological mechanisms (including T cell responses and modulation of regulatory cells), and provide information on mechanisms of action. Indeed, an immune response may correlate with clinical benefit, harm, or lack of either; thus immune monitoring may play a significant role in both early and late phases of immunotherapy product development. The FDA has drafted guidance documents for industry and for therapeutic cancer vaccines [6,7]. Additional references on the regulatory process of the Office of Cellular, Tissue, and Gene Therapies (OCTGT) for manufacturers are available from the FDA [8]. Immunologic biomarkers as correlates of clinical response after cancer immunotherapy were presented by session chair Mary L. Disis, MD. Citing recent data from clinical trials and population-based studies that have correlated biomarkers with clinical outcomes, Disis identified unifying themes around what constitutes an effective anti-tumor response, immunity types, and the tumor microenvironment. For example, there is a strong correlation between gene expression in type I T cells (Th1 cells) and relapse in colorectal cancer [9] and between the density of intratumoral T cells and overall survival in ovarian cancer [10].
Moreover, the composition of tumor-infiltrating T cells is associated with clinical outcomes; higher CD8+/CD4+ T cell ratios and CD8+/Treg ratios are independent predictors of survival in ovarian cancer [11]. Effective anti-tumor immunity also correlates with measurable changes in the tumor microenvironment following cancer immunotherapy. Modulation of self-regulation within the tumor is associated with response, as exemplified by the association of low Treg cell density within ER+ breast cancer tumors with response [12]. Modulations of immune evasion within the tumor microenvironment are likewise linked to response, with high levels of PD-L1 expression correlating with lower density of CD8+ T cells and with survival in ovarian cancer [13]. Growth-factor-mediated changes within the tumor microenvironment are also predictive of outcomes; lower TGFβ-1 levels within the tumor independently predicted longer disease-free survival (DFS) among patients with breast cancer [14]. Functional persistence is also associated with an effective anti-tumor response, with a higher density of CD45RO+ memory T cells within the tumor independently predicting DFS among patients with colorectal cancer [15]. As a unifying theme surrounding immunological biomarkers of clinical response after cancer vaccine and T cell therapy, Disis emphasized that Type I immunity facilitates cross-priming and that autoimmunity is the ultimate endpoint of effective cross-priming. While current biomarker candidates generally focus only on the treatment-induced immune response, the impact of therapy on the tumor microenvironment may best predict maintenance of the induced immune response. Newer approaches that integrate measurement of effectors and environmental impact need to be fully assessed, and larger studies are needed to demonstrate stronger associations between biomarkers and clinical response after cancer immunotherapy.
David Stroncek, MD (National Institutes of Health, Clinical Center) presented on measuring the potency of dendritic cell preparations using transcriptional analysis. Stroncek noted the importance of identifying biomarkers for new cellular therapies that can be used to assess: 1) consistency, i.e., technical validation, including method validation (assays) and process validation (manufacturing); and 2) biological variability, including inter-individual variability associated with genetic, epigenetic and clinical conditions, and intra-individual variability associated with changes in an individual over time or changes in health status. Potency biomarkers must discriminate between a biologically active and an inactive product with minimal assay variability and accurately reflect manufacturing and individual variability. Stroncek et al. are engaged in identifying biomarkers to assess mature dendritic cells (DCs). Standard phenotypic markers are useful for assessment of DC identity and purity, but not for functional analysis. Stroncek reported on RNA microarray strategies for assessing patterns in DC gene expression that could be correlated with assay variability, manufacturing variability, and inter- or intra-donor variability. He provided examples of different levels of expression of several immune response genes (e.g., CCL1, AIM2, and CD80) associated with these classes of variability. Stroncek's group is refining this strategy to systematically characterize cellular therapy potency biomarkers that reflect product consistency as well as individual and manufacturing variability. Dendritic cells are particularly challenging because of their environmental responsiveness and, thus, their phenotypic and functional changes during manufacture. Stroncek et al. are using the concepts of this broad approach to design validation studies during clinical trials. Sipuleucel-T immune parameters and their correlation with overall survival were presented by Mark W.
Frohlich, MD (Dendreon Corporation, Seattle, WA), based on recently reported results from the randomized Phase 3 IMPACT Trial (Immunotherapy Prostate AdenoCarcinoma Treatment) [16]. Immunological monitoring included assessment of product potency measures (i.e., CD54 upregulation as a marker of APC activation) and measures of cellular and humoral response. After the initial treatment with Sipuleucel-T, APC activation increased, indicated by CD54 upregulation, as did secretion of Type 1 cytokines. Proliferation and ELISPOT assays demonstrated specific T cell responses to the immunizing antigen after the initial dose. Sipuleucel-T was also shown to generate a persistent antigen-specific humoral response, characterized by antibody class switching from IgM to IgG (for anti-PA2024). In a combined analysis of Phase 3 Sipuleucel-T data, CD54+ cell counts, the number of total nucleated cells, and CD54 upregulation correlated significantly with overall survival, even after adjustment for baseline prognostic factors (PSA and LDH levels). The IMPACT study revealed a correlation between overall survival and measures of an antigen-specific antibody response, T cell proliferation, and ELISPOT. The APC activation and cytokine profile associated with Sipuleucel-T is suggestive of an immunological prime-boost mechanism. The correlation between overall survival and the monitored immunological parameters suggests these measures may be useful biomarkers for assessing the clinical activity of this new cancer immunotherapy. Session 2 finished with a panel discussion led by Disis, Puri, Stroncek, Frohlich, Leif Håkansson, MD, PhD (Biotherapy Development Association) and Nicholas Restifo, MD (NCI Surgery Branch).
Novel Methodologies for Assessing the Immune Landscape: Clinical Utility of Novel Technologies The iSBTc/SITC Biomarkers Symposium included a session, chaired by Francesco Marincola (NIH) and Peter P. Lee (Stanford University), designed to address emerging methodologies that are proving useful in immune assessment for clinical immunotherapeutic approaches to cancer treatment. Thomas R. O'Brien, MD (National Cancer Institute, Division of Cancer Epidemiology and Genetics) presented on genetic variants in IL28B (IFN-λ) as major predictors of response to IFN-α therapy for chronic hepatitis C virus (HCV) infection. Chronic HCV infection is the leading cause of liver cancer in the United States today. Standard treatment of chronic HCV infection involves pegylated IFN-alfa in combination with ribavirin, a regimen that generates a sustained virological response in about half of infected patients but can have significant adverse effects. Use of appropriate markers and technologies to identify patients less likely to benefit from standard HCV treatment would be beneficial, as would more effective treatment approaches for these patients. O'Brien reported on genome-wide association studies (GWAS) that have helped to link genetic variants in IL28B (which encodes IFN-λ3) with the response to standard therapy. Analysis of the global distribution of two IL28B alleles that differ by only a single nucleotide suggests that the higher frequency of the unfavorable allele in populations of African descent partially explains racial differences in response to standard treatment, pointing to a potential clinical role for IFN-λ in chronic HCV infection. While IL28B genotype may be helpful in identifying patients who are not good candidates for therapy, personalized clinical decisions must consider other factors (e.g., viral load and hepatic fibrosis score) associated with a sustained virological response. Samuel C.
Silverstein, MD (Columbia University) presented data and mathematical models that indicate that a critical concentration of cytolytically active, tumor antigen-specific CD8+ T cells is required to control growth of cognate antigen-expressing tumor cells. Silverstein described a clonogenic assay in which varying numbers of CD8+ T cells from an OT-1 transgenic mouse whose T cell receptor specifically recognizes SIINFEKL peptide were mixed with B16 mouse melanoma cells (previously pulsed with SIINFEKL peptide) and co-incubated in a collagen/fibrin gel for 24, 48 and 72 hours. The gel was dissolved, the surviving cells plated, and the resulting colonies were counted to determine the number of surviving melanoma cells. In the absence of specific CD8+ T cells, the melanoma cells demonstrate log-linear growth. With increasing numbers of co-incubated CD8+ T cells, the melanoma cell growth rate is reduced, and at a critical CD8+ T cell concentration, the cytolytic cells kill the tumor cells at the same rate as tumor cell growth. Silverstein reported on a mathematical model for determining killing efficiency in which the constant k was equal to the volume of antigen-expressing tumor cells cleared per cytolytically active, tumor antigen-specific CD8+ T cell per minute. He presented killing efficiencies for in vitro (collagen-fibrin gels) and in vivo models (spleen cells of mice infused with LCMV-pulsed target cells) and demonstrated that k decreases 0.7 log10 for every log10 increase in CD8+ T cell concentration and was dependent on the percent of cytolytically active, antigen-specific CD8+ T cells present in the CD8+ T cell milieu. 
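The growth-versus-killing balance described above can be illustrated with a toy exponential model. This is a minimal sketch, not Silverstein's actual model: the growth rate `g`, clearance constant `k`, and all parameter values below are invented for illustration.

```python
import math

def surviving_tumor_cells(n0, growth_rate, k, t_cell_conc, hours):
    """Toy model: tumor cells grow exponentially at `growth_rate` (per hour)
    and are cleared in proportion to the CD8+ T cell concentration.
    `k` is the clearance per T cell per hour (hypothetical units)."""
    net_rate = growth_rate - k * t_cell_conc
    return n0 * math.exp(net_rate * hours)

def critical_concentration(growth_rate, k):
    """T cell concentration at which killing exactly balances growth."""
    return growth_rate / k

# Illustrative (invented) parameters:
g, k = 0.03, 1e-6                        # per-hour growth; per-T-cell clearance
c_crit = critical_concentration(g, k)    # 30,000 T cells per unit volume here
print(surviving_tumor_cells(1000, g, k, c_crit, 72))  # ~1000: no net growth
```

Below `c_crit` the tumor population grows despite killing; above it, the population shrinks, which mirrors the reported observation that control requires exceeding a critical CD8+ T cell concentration.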
Jérôme Galon, PhD (INSERM, Integrative Cancer Immunology Laboratory, Cordeliers Research Center) presented on immune biomarkers, drawing from work that demonstrated that the immune contexture (nature, functional orientation, density and location of immune cells in colorectal cancer) had a prognostic value that was superior to that of the classic UICC-TNM classification system. He reviewed data that indicated that the presence of memory T cells within the tumor correlates with the absence of early-metastatic invasion and improved clinical outcome in colorectal carcinoma. He also discussed the prognostic value of tumor invasion vs. immune reaction, demonstrating an inverse relationship between intratumoral density of CD8+ T cells and the T stage of the colorectal carcinoma tumor at the time of surgery. Moreover, data he summarized indicated that most patients with a strong and coordinated cytotoxic response presented with early-stage colorectal carcinoma, whereas patients with a weak cytotoxic response progressed to late-stage disease. Additionally, the density of CD8+ T cells at the center of the tumor also correlated inversely with tumor T stage and relapse. Peter P. Lee, MD (Stanford University) presented information on the assessment of immune changes in tumor-draining lymph nodes (TDLNs) as novel biomarkers using an integrated image analysis approach. Using 5-color immunohistochemical staining, automated high-resolution (whole section) imaging, and customized image analysis software, Lee's group has been able to create composite images that map each cell type within sections of TDLNs. The number, proportion, and spatial characteristics (i.e., spatial relationships between immune and tumor cells) were compared to five-year clinical outcome data. Lee reported changes in immune cells in TDLNs, both in number and spatial relationship, and that some of these changes appear to predict clinical outcome. 
He noted that quantitative, spatial analysis tools for histology have been developed for high throughput analysis; thus, image analysis of immune cells in TDLNs may serve as a novel biomarker for cancer. Initial analysis of TDLNs from patients with breast cancer suggests that this approach may also have broader utility in other cancers. Session 3 finished with a panel discussion led by Marincola, Lee, O'Brien, Silverstein, and Galon. Recommendations on Incorporation of Biomarkers into the Clinical Arena The final session geared toward providing insight into the incorporation of biomarkers into clinical applications was chaired by John M. Kirkwood, MD (University of Pittsburgh). First, Diane Longo, PhD (Nodality, Inc., Foster City, CA) presented on single cell network profiling (SCNP) technology and applications in immunological monitoring. This technology, based on multiparameter flow cytometry, provides measurement of both extracellular surface markers and intracellular signaling within single cells. This approach can be used to distinguish basal, unevoked subsets of cells from evoked cells after clinically-relevant stimulation, making it useful for immunological monitoring. SCNP technology may help in disease characterization by mapping deregulated pathways. In pre-clinical drug profiling efforts, SCNP may be useful in characterizing drug potency, target selectivity, off-target activity, and resistance. Additionally, SCNP may assist in patient stratification and individual patient drug profiling. Thus, interrogation of cell signaling with SCNP allows a direct means to classify disease activity and response to treatment. The relationships of signaling events to each other can be used to infer a structure to the immune system, providing useful immunological information during development and clinical testing of immunotherapies. 
Daniel Normolle, PhD (University of Pittsburgh Cancer Institute) presented on biostatistical considerations for biologics and biomarkers in oncology, summarizing the limitations of the 3 + 3 design of early phase clinical studies and outlining alternative designs that include immunotherapy biomarkers. Among the limitations of the 3 + 3 trial design, often used in early clinical trials of biological therapies of cancers, is that this study design is intended for treatments in which toxicities increase with dose. A large proportion of participants are treated with sub-therapeutic doses. This study design can result in a slow dose escalation even when no dose limiting toxicities are observed, and there is no quantitative mechanism to employ prior understanding of toxicities in the design. While the 3 + 3 design can eliminate harmful doses from further testing, it is underpowered for selecting among the remaining doses. Thus, while this design can eliminate extremely toxic doses, it does not choose between doses that are not extremely toxic and is less suited for evaluation of biological therapies that have low toxicities or toxicities that do not increase with dosing. In the context of non-cytotoxic biological therapies, monitoring toxicity is distinct from escalating dose based on toxicity. In the 3 + 3 design, if toxicity is low with a given dose, the dose is automatically moved to the next highest dose, which may not be the best therapeutic dose. Moreover, if an added component reduces toxicity, escalating dose on toxicity may again fail to choose the most useful dose. Importantly, cohorts of 3 and 6 patients are often too small to provide meaningful statistical information to guide dosing decisions. Normolle outlined an alternative, adaptive design to escalating dose based on toxicities, which incorporates the assessment of biomarkers. 
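The escalation behavior criticized above can be made concrete. The sketch below encodes the textbook 3 + 3 decision rules (escalate on 0/3 dose-limiting toxicities, expand on 1/3, stop on 2 or more); it is an illustration of the standard design as commonly described, not a reconstruction of any specific trial or of Normolle's alternative:

```python
def three_plus_three_step(dlts, n_treated):
    """Next action under the classic 3+3 rules, given the number of
    dose-limiting toxicities (DLTs) observed among `n_treated` patients
    at the current dose level."""
    if n_treated == 3:
        if dlts == 0:
            return "escalate"   # low toxicity -> automatically move up a dose
        if dlts == 1:
            return "expand"     # enroll 3 more patients at the same dose
        return "stop"           # >=2/3 DLTs: the previous dose is taken as MTD
    if n_treated == 6:
        return "escalate" if dlts <= 1 else "stop"
    raise ValueError("3+3 evaluates cohorts of 3 or 6 patients")

# With no observed toxicity the design always escalates, regardless of
# whether a lower dose might already be biologically effective.
print(three_plus_three_step(0, 3))  # escalate
```

Note that every branch is driven purely by toxicity counts; no biomarker of biological activity ever enters the decision, which is precisely the shortcoming for non-cytotoxic therapies.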
The alternative early trial design should be constructed to provide information to prove the principle and identify sources of variability in biomarker assessment. It should estimate the biologically effective doses and eliminate ineffective doses as well as provide information on the relationships between biomarkers at biologically effective doses. An adaptive trial design of immunotherapies should establish immunological activity at the highest dose and determine if lower doses are as effective as the highest dose, while avoiding ineffective doses. Toxicity must be monitored and a global stopping rule for toxicity should be in place. In randomized trials, participants should be allocated equally to the dosing arms of the study. The studies can be designed as simple randomized trials, two- or three-staged randomized trials or as trials of combination therapy to reduce toxicity. It is critical that the trial be statistically powered to achieve the primary objective of the study. Holden T. Maecker, PhD (Stanford University) discussed prospects for new clinical flow cytometry assays. While clinical tests for cellular immunity are largely lacking, flow cytometry represents a powerful technology for dissecting cellular immune responses. In assessing immune responses it is useful to determine the number of functional and non-functional T cells specific to a particular antigen. Qualitative information on T cells to a specific antigen is also invaluable. Such qualitative information may include the breadth of epitopes recognized, the types of cytokines produced, degranulation or lytic capacity, and phenotypic markers on the T cells (e.g., memory/effector markers, markers of exhaustion [PD-1], perforin, granzymes). Flow cytometry can provide much of this information because it can be used to measure multiple markers on individual cells, detect rare cell populations, and measure both cellular phenotypes and functions. 
Intracellular cytokine staining (ICS) has been simplified and standardized for flow cytometry using plates with lyophilized antigen. This approach has been useful in dissecting the cytokine profile of various T cell subsets in response to HIV and cytomegalovirus. Phospho-Flow assays are useful for the assessment of intracellular signaling as they can measure phosphorylation events in very short-term stimulated whole blood, PBMC, and other cells. These assays can measure multiple cell-surface and intracellular markers in combination, using multiparameter flow cytometry, and detect signaling through T cell receptors, surface Ig, cytokines and other molecules. Phospho-Flow assays may be used to detect signaling defects in aging or immune-mediated diseases. Flow cytometry can provide useful information on early and late cellular immune responses and may have clinical utility in the assessment of cellular changes in response to various diseases and treatments. Simplification and standardization of methodology will be necessary for clinically usable tests [ 17 ]. In the final presentation, Howard L. Kaufman, MD (Rush University) discussed predictive biomarkers for tumor immunotherapy and whether the community is ready for clinical implementation. Kaufman outlined requirements for an ideal biomarker--that it correlate with disease progression or treatment response, be easily collected and accurately measured, that it be validated, and that it be cost-effective. Biomarkers may be useful for monitoring adverse events, identifying potential targets for drug discovery, and informing decisions about clinical trials, including selection of patients, endpoints and dosing. 
In immunotherapy studies, biomarkers have included soluble factors (e.g., serum proteins, circulating DNA, circulating tumor cells), tumor factors (e.g., receptor expression, cellular infiltrates), patient factors (indicators of humoral and cellular immune responses, immune system polymorphisms) and mathematical predictions. Cancer immunotherapy trials have included CD4+ and CD8+ T cell responses, Treg responses, and antibody titer as predictors for clinical response. The utility of these biomarkers has been limited by the small size of most of these trials, limited clinical response, and by the fact that biomarker analysis is often retrospective and unplanned for in the trial design. A number of biomarkers have been evaluated in IL-2 immunotherapy in renal cell carcinoma, including pre-treatment leukocyte and neutrophil levels, Ki-67 expression, CAIX levels, VEGF levels, clonal T cell expansion, and levels of CD4+CD25hi Treg cells. Kaufman et al. have employed a computational model that includes density and distribution of the IL-2 receptor in conjunction with delivered IL-2 dose to predict the clinical response to IL-2 immunotherapy for renal cell carcinoma. These computational biomarkers and other potential soluble and cellular biomarkers warrant incorporation into prospective clinical trials of cancer immunotherapies and further validation in larger trials. The session finished with a panel discussion led by Kirkwood, Longo, Normolle, Maecker and Kaufman. In summary, the Symposium speakers presented promising new data on emerging immune biomarkers in cancer. 
Several themes recurred through many of the presentations: first, standardization and harmonization efforts have identified critical parameters in patient sample handling and assay conduct and reporting; second, we are observing clinical and subclinical autoimmunity in treated patients as well as extensive responses to self tumor antigens, which may indicate a critical role for in vivo cross-presentation; third, there were examples of large-scale trials in which biomarkers were examined not only in blood but also in tumor and lymph nodes and were highly significantly correlated with clinical outcome; and fourth, the labs, taskforces, and societies represented were all participating in overlapping collaborations, indicating the success of working together. Intensive interaction between academia, industry and government--as represented in this iSBTc/SITC symposium--is necessary to promote the development of predictive biomarkers for improved cancer outcomes through immunotherapy.
The International Society for Biological Therapy of Cancer (iSBTc, recently renamed the Society for Immunotherapy of Cancer, SITC) hosted a one-day symposium at the National Institutes of Health on September 30, 2010 to address development and application of biomarkers in cancer immunotherapy. The symposium, titled Immuno-Oncology Biomarkers 2010 and Beyond: Perspectives from the iSBTc/SITC Biomarker Task Force , gathered approximately 230 investigators equally from academia, industry and governmental/regulatory agencies from around the globe for panel discussions and presentations on the following topics: 1) immunologic monitoring: standardization and validation of assays; 2) correlation of immunity to biologic activity, clinical response and potency assays; 3) novel methodologies for assessing the immune landscape: clinical utility of novel technologies; and 4) recommendations on incorporation of biomarkers into the clinical arena. The presentations are summarized in this report; additional program information and slides are available online at the iSBTc/SITC website.
Competing interests MLD discloses the following relationships: Glaxo, Grant Funding, Principal Investigator; Hemispherex, Grant Funding, Principal Investigator; and VentiRx, Consulting Fee, Consultant. LHB, SNK, JB and FM declare that they have no competing interests. Authors' contributions LB, MD, SK and FM: planned, organized, and chaired the Symposium; JB: drafted the manuscript; LB: critically reviewed and edited the manuscript and prepared the bibliography; All authors read and approved the final manuscript. Appendix Organizations represented at the 2010 SITC Collaboration Summit included Biotherapy Development Association (BDA), Canadian Cancer Immunotherapy Consortium (CCIC), Association for Cancer Immunotherapy (CIMT), Cancer Immunotherapy Consortium, a program of the Cancer Research Institute (CRI-CIC), Chinese Society of Clinical Oncology (CSCO), European Society for Cancer Immunology and Immunotherapy (ESCII), Japanese Society of Clinical Immunology (JSCI), Nordic Center for Development of Antitumour Vaccine Concept (NCV-Network), and the Italian Network for Tumor Biotherapy (NIBIT).
Acknowledgements The authors and the Society for Immunotherapy of Cancer wish to acknowledge the following collaborating organizations that helped make this initiative a success and ensure a broad perspective on immuno-oncology biomarkers: Association for Immunotherapy of Cancer (CIMT); Biotherapy Development Association (BDA); Cancer Immunotherapy Consortium (CIC) of the Cancer Research Institute (CRI); National Institutes of Health, Clinical Center; Nordic Center for Development of Anti-tumour Vaccines (NCV-Network). We wish to acknowledge the Symposium speakers and those who have made their presentation slides available online. The presentations are summarized in this report; additional program information and slides are available online at the iSBTc/SITC website [ 18 ].
J Transl Med. 2010 Dec 7; 8:130
Introduction Great interest has arisen in stem cell research because of its enormous therapeutic potential, with important applications in tissue engineering, regenerative medicine, cell therapy, and gene therapy [ 1 , 2 ]. Cell therapy is based on transplantation of live cells into an organism in order to repair a tissue or restore lost or defective functions. Cells mainly used for such advanced therapies are stem cells, because of their ability to differentiate into the specific cells required for repairing damaged or defective tissues or cells [ 3 ]. Regenerative medicine is in turn a multidisciplinary area aimed at maintenance, improvement, or restoration of cell, tissue, or organ function using methods mainly related to cell therapy, gene therapy, and tissue engineering. There is, however, much to be investigated about the specific characteristics of stem cells. The mechanisms by which they differentiate and repair must be understood, and more reliable efficacy and safety tests are required for the new drugs based on stem cells. General aspects of stem cells The main properties that characterize stem cells include their indefinite capacity to renew themselves and leave their initial undifferentiated state to become cells of several lineages. This is possible because they divide symmetrically and/or asymmetrically, i.e., each stem cell results in two daughter cells, one of which preserves its potential for differentiation and self-renewal, while the other cell directs itself toward a given cell lineage, or they both retain their initial characteristics. Stem cells are able to renew themselves and produce mature cells with specific characteristics and functions by differentiating in response to certain physiological stimuli. Different types of stem cells are distinguished based on their potential and source. 
These include the so-called totipotent embryonic cells, which appear in the early stages of embryo development, before blastocyst formation, capable of forming a complete organism, as well as all intra and extra embryonic tissues. There are also pluripotent embryonic cells, which are able to differentiate into any type of cell, but not into the cells forming embryonic structures such as placenta and umbilical cord. Multipotent adult cells (such as hematopoietic cells, which may differentiate into platelets, red blood cells, or white blood cells) are partially specialized cells but are able to form a specific number of cell types. Unipotent cells only differentiate into a single cell lineage, are found in the different body tissues, and their function is to act as cell reservoirs in the different tissues. Germ stem cells are pluripotent embryonic stem cells derived from gonadal buds of the embryo which, after a normal embryonic development, will give rise to oocytes and spermatozoa [ 4 , 5 ]. In the fetal stage there are also stem cells with differentiation and self-renewal abilities. These stem cells occur in fetal tissues and organs such as blood, liver, and lung and have similar characteristics to their counterparts in adult tissues, although they show a greater capacity to expand and differentiate [ 6 ]. Their origin could be in embryonic cells or in progenitors unrelated to embryonic stem cells. Adult stem cells are undifferentiated cells occurring in tissues and organs of adult individuals which are able to convert into differentiated cells of the tissue where they are. They thus act as natural reservoirs for replacement cells which are available throughout life when any tissue damage occurs. These cells occur in most tissues, including bone marrow, trabecular bone, periosteum, synovium, muscle, adipose tissue, breast gland, gastrointestinal tract, central nervous system, lung, peripheral blood, dermis, hair follicle, corneal limbus, etc. [ 7 ]. 
In most cases, stem cells from adult tissues are able to differentiate into cell lineages characteristic of the niche where they are located, such as stem cells of the central nervous system, which generate neurons, oligodendrocytes, and astrocytes. Some unipotent stem cells, such as those in the basal layer of interfollicular epidermis (producing keratinocytes) or some adult hepatocytes, may even have a repopulating function in the long term [ 8 ]. Adult stem cells have some advantages in terms of clinical applications over embryonic and induced pluripotent stem cells because their use poses no ethical conflicts nor involves immune rejection problems in the event of autologous implantation, although induced pluripotent stem cells are at least as capable as, if not more capable than, adult stem cells. Mesenchymal stem cells Although adult stem cells are primarily unipotent cells, under certain conditions they show a plasticity that causes them to differentiate into other cell types within the same tissue. Such capacity results from the so-called transdifferentiation in the presence of adequate factors--as occurs in mesenchymal stem cells, which are able to differentiate into cells of an ectodermal (neurons and skin) and endodermal (hepatocytes, lung and intestinal cells) origin--or from the cell fusion process, such as in vitro fusion of mesenchymal stem cells with neural progenitors or in vivo fusion with hepatocytes in the liver, Purkinje neurons in the brain, and cardiac muscle cells in the heart [ 9 ]. This is why one of the cell types most widely used to date in cell therapy are mesenchymal stem cells (MSCs), which are of a mesodermal origin and have been isolated from bone marrow, umbilical cord blood, muscle, bone, cartilage, and adipose tissue [ 10 ]. 
From the experimental viewpoint, the differential characteristics of MSCs include their ability to adhere to plastic when they are cultured in vitro ; the presence of surface markers typical of mesenchymal cells (proteins such as CD105, CD73, and CD90) and the absence of markers characteristic of hematopoietic cells, monocytes, macrophages, or B cells; and their capacity to differentiate in vitro under adequate conditions into at least osteoblasts, adipocytes, and chondroblasts [ 11 , 12 ]. Recent studies have shown that MSCs support hematopoiesis and immune response regulation [ 13 ]. They also represent an optimum tool in cell therapy because of their easy in vitro isolation and expansion and their high capacity to accumulate in sites of tissue damage, inflammation, and neoplasia. MSCs are therefore useful in regenerative therapy, in graft-versus-host disease and in Crohn's disease, or in cancer therapy [ 14 - 17 ]. The future development of an optimum methodology for genetic manipulation of MSCs may further increase their relevant role in cell and gene therapy [ 18 ]. Adipose-derived mesenchymal stem cells Bone marrow has been for years the main source of MSCs, but the bone marrow harvesting procedure is uncomfortable for the patient, the amount of marrow collected is scarce, and the proportion of MSCs it contains as compared to the total population of nucleated cells is very low (0.001%-0.01%) [ 19 ]. By contrast, subcutaneous adipose tissue is usually abundant in the body and is a waste product from the therapeutic and cosmetic liposuctions increasingly performed in Western countries. These are simple, safe, and well-tolerated procedures, with a complication rate of approximately 0.1%, where an amount of fat ranging from a few hundred milliliters to several liters (up to 3 liters, according to the recommendation of the World Health Organization) is usually drawn and subsequently discarded. 
Despite the suction forces exerted during aspiration, it is estimated that 98%-100% of tissue cells are viable after extraction. The liposuction method is therefore the most widely accepted for MSC collection [ 20 , 21 ]. Adipose-derived stem cells (ASCs) were first identified in 2001 by Zuk et al. [ 22 ]. In addition to having the differentiation potential and self-renewal ability characteristic of MSCs, these cells secrete many cytokines and growth factors with anti-inflammatory, antiapoptotic, and immunomodulatory properties such as vascular endothelial growth factor (VEGF), hepatocyte growth factor (HGF), and insulin-like growth factor-1 (IGF-1), involved in angiogenesis, healing, and tissue repair processes [ 23 ]. This ability to secrete proangiogenic cytokines makes ASCs optimum candidates for cell therapy of ischemic diseases. In this regard, in a lower limb ischemia model in rats, intravenous or intramuscular administration of ASCs was reported to significantly improve blood flow, probably due to the direct effect of ASC differentiation into endothelial cells, and to the indirect effect of secretion of growth factors that promote neovascularization [ 24 , 25 ]. The immunomodulatory properties of ASCs and their lack of expression of MHC class II antigens also make them adequate for allogeneic transplantation, minimizing the risk of rejection. ASCs regulate T cell function by promoting induction of suppressor T cells and inhibiting production of cytotoxic T cells, NK cells, and proinflammatory cytokines such as tumor necrosis factor-α (TNF-α), interferon-γ (IFN-γ), and interleukin-12 (IL-12). These effects, complemented by secretion of soluble factors such as IL-10, transforming growth factor-β (TGF-β) and prostaglandin E2, account for the immunosuppressive capacity of these cells, which was demonstrated in a clinical trial where graft-versus-host disease (GVHD) was treated by intravenous injection of ASCs [ 26 - 28 ]. 
This immunosuppressive role of ASCs and their adjuvant effect in healing are also reflected in the encouraging results being achieved in various clinical trials investigating ASC transplantation for treating fistula in patients with Crohn's disease [ 29 ] and radiotherapy-induced chronic ulcers [ 30 ]. Many other studies being conducted in animal models show the potential of ASCs to regenerate cranial bone, periodontal tissue, and joint cartilage, and in the functional repair of myocardial infarction and stroke [ 31 , 32 ].

Other types of stem cells

Hematopoietic stem cells, together with mesenchymal stem cells, the so-called "side population", and multipotent adult progenitor cells (MAPCs), are the stem cells forming bone marrow [ 33 ]. Their role is the maintenance and turnover of blood cells and the immune system. The high rate of regeneration of the liver, as compared to other tissues such as brain tissue, is due to the proliferation of two types of liver cells: hepatocytes and oval cells (stem cells). In response to acute liver injuries (hepatectomy or hepatotoxin exposure), hepatocytes regenerate damaged tissue, while oval cells are activated in pathological conditions where hepatocytes are not able to divide (acute alcohol poisoning, phenobarbital exposure, etc.), proliferating and converting into functional hepatocytes [ 34 ]. In skeletal muscle, the stem cells, called satellite cells, are in a latent state and are activated following muscle injury to proliferate and differentiate into muscle tissue. Muscle-derived stem cells have a greater ability for muscle regeneration [ 35 ]. In cardiac tissue, cardiac progenitor cells are multipotent and may differentiate both in vitro and in vivo into cardiomyocytes, smooth muscle cells, and vascular endothelial cells [ 36 , 37 ]. Neuronal stem cells able to replace damaged neurons have been reported in the nervous system of birds, reptiles, mammals, and humans.
They are located in the dentate gyrus of the hippocampus and the subventricular zone of the lateral ventricles [ 38 , 39 ]. Stem cells have also recently been found in the peripheral nervous system (in the carotid body) [ 40 ]. Astrocytes, which are glial cells, have been proposed as multipotent stem cells in the human brain [ 41 ]. The high renewal capacity of the skin is due to the presence in the epidermis of stem cells acting as a cell reservoir. These include epidermal stem cells, mainly located in the bulge of the hair follicle, which are capable of self-renewal for long time periods and of differentiation into specialized cells, and transient amplifying cells, distributed throughout the basal lamina, which show in vivo a very high division rate but have a lower differentiation capacity [ 42 ].

Induced pluripotent stem cells

Induced pluripotent stem cells (iPSCs) derived from somatic cells are revolutionizing the field of stem cells. Obtained by reprogramming somatic cells of a patient through the introduction of certain transcription factors [ 43 - 48 ], they have a potential value for the discovery of new drugs and the establishment of cell therapy protocols because they show pluripotentiality to differentiate into cells of all three germ layers (endoderm, mesoderm, and ectoderm). The iPSC technology offers the possibility of developing patient-specific cell therapy protocols [ 49 ] because the use of genetically identical cells may prevent immune rejection. In addition, unlike embryonic stem cells, iPSCs do not raise a bioethical debate and are therefore a "consensus" alternative that does not require the use of human oocytes or embryos [ 50 ]. Moreover, iPSCs are not subject to special regulations [ 51 ] and have shown a high molecular and functional similarity to embryonic cells [ 52 , 53 ].
Highly encouraging results have been achieved with iPSCs from skin fibroblasts, differentiated to insulin-secreting pancreatic islets [ 54 ]; in amyotrophic lateral sclerosis (Lou Gehrig disease) [ 55 ]; and in many other conditions such as adenosine deaminase deficiency combined with severe immunodeficiency, Shwachman-Bodian-Diamond syndrome, type III Gaucher disease, Duchenne and Becker muscular dystrophy, Parkinson's and Huntington's disease, diabetes mellitus, or Down syndrome [ 56 ]. Good results have also been reported in spinal muscular atrophy [ 57 ] and in toxicological and pharmacological screening tests for substances toxic to the embryo or teratogenic [ 58 ]. A very recent application has been reported by Moretti et al. [ 59 ] in the long QT syndrome, a hereditary disease associated with prolongation of the QT interval and risk of ventricular arrhythmia. The iPSCs retain the genotype of the type 1 disease and generate functional myocytes lacking the KCNQ1 gene mutation; the derived cells show normalization of the ventricular, atrial, and nodal phenotypes and positively express various normal cell markers.

Stem cell therapy: A new concept of medical application in Pharmacology

For practical purposes, human embryonic stem cells are used in 13% of cell therapy procedures, while fetal stem cells are used in 2%, umbilical cord stem cells in 10%, and adult stem cells in 75% of treatments. To date, the most relevant therapeutic indications of cell therapy have been cardiovascular and ischemic diseases, diabetes, hematopoietic diseases, liver diseases and, more recently, orthopedics [ 60 ]. For example, more than 25,000 hematopoietic stem cell transplantations (HSCTs) are performed every year for the treatment of lymphoma, leukemia, immunodeficiency illnesses, congenital metabolic defects, hemoglobinopathies, and myelodysplastic and myeloproliferative syndromes [ 61 ].
Depending on the characteristics of the different therapeutic protocols and on the requirements of each condition, each type of stem cell has its advantages and disadvantages. Thus, embryonic stem cells have the advantages of being pluripotent, easy to isolate, and highly productive in culture, in addition to showing a high capacity to integrate into fetal tissue during development. By contrast, their disadvantages include immune rejection, the possibility that they differentiate into inadequate cell types or induce tumors, and contamination risks. Germ stem cells are also pluripotent, but the source from which they are harvested is scarce, and they may develop into embryonic teratoma cells in vivo. Adult stem cells are multipotent, have a considerable differentiation potential, are less likely to induce immune rejection reactions, and may be stimulated by drugs. Their disadvantages include that they are scarce and difficult to isolate, grow slowly, differentiate poorly in culture, and are difficult to handle and produce in adequate amounts for transplantation. In addition, they behave differently depending on the source tissue, show telomere shortening, and may carry the genetic abnormalities inherited or acquired by the donor. These disadvantages of adult stem cells are less marked in some of the above-mentioned subtypes, such as mesenchymal stem cells obtained from bone marrow or adipose tissue, or iPSCs. In these cases, harvesting and production are easy and culture yields are higher. Their growth is slow but meets experimental requirements, and their differentiation and implantation are highly adequate [ 62 , 63 ]. Overall, at least three types of therapeutic strategies are considered when using stem cells. The first is stimulation of endogenous stem cells using growth factors, cytokines, and second messengers, which are able to induce self-repair of damaged tissues or organs.
The second alternative is direct administration of stem cells so that they differentiate at the damaged or non-functional tissue sites. The third possibility is transplantation of cells, tissues, or organs taken from cultures of stem cell-derived differentiated cells. The US Food and Drug Administration defines somatic cell therapy as the administration of autologous, allogeneic, or xenogeneic non-germ cells--excluding blood products for transfusion--which have been manipulated or processed and propagated, expanded, selected ex vivo, or drug-treated. Cell therapy applications are related to the treatment of organ-specific diseases such as diabetes or liver diseases. Cell therapy for diabetes is based on islet transplantation into the portal vein of the liver and results in improved glucose homeostasis, but graft function is gradually lost within a few years after transplantation. Liver diseases (congenital, acute, or chronic) may be treated by hepatocyte transplantation, a technique under development and with significant disadvantages derived from difficulties in hepatocyte culture and maintenance. The future here lies in the implantation of hepatic stem cells, or of hepatic cells obtained by differentiation of a different type of stem cell, such as mesenchymal stem cells. Other applications, still in their first steps, include the treatment of hereditary monogenic diseases such as hemophilia using hepatic sinusoidal endothelial cells [ 64 ] or murine fibroblast-derived iPSCs differentiated into endothelial cells or their precursors [ 65 ]. Hemophilia is an optimum candidate because it is a monogenic disease and because low to moderate expression levels of the deficient coagulation factor suffice to achieve a moderate disease phenotype; great progress is being made in both gene therapy and cell therapy using viral and non-viral vectors. The Liras et al.
group has reported encouraging preliminary results using non-viral vectors and mesenchymal stem cells derived from adult human adipose tissue [ 66 - 68 ]. Very recently, Fagoonee et al. [ 69 ] first showed that adult germ line cell-derived pluripotent stem cells (GPSCs) may differentiate into hepatocytes in vitro, which offers great potential in cell therapy for a very wide variety of liver diseases.

Histocompatible stem cell therapy

Since one of the most important applications of cell therapy is replacement of the structure and function of damaged or diseased tissues and organs, avoidance or reduction of rejection due to a natural immune response of the host to the transplant is a highly relevant consideration. Recent progress in nuclear transfer from human somatic cells, as well as the iPSC technology, has allowed for the availability of lineages of all three germ layers genetically identical to those of the donor patient, which permits safe transplantation of organ- and tissue-specific adult stem cells with no immune rejection [ 70 ]. On the other hand, adipose-derived mesenchymal stem cells (ASCs) are able to produce adipokines, including many interleukins [ 71 ]. ASCs also have immunosuppressive capacity, regulating inflammatory processes and the T-cell immune response [ 72 - 74 ]. The lack of HLA-DR expression and the immunosuppressive properties of ASCs make these cells highly valuable in allogeneic transplantation to prevent tissue rejection. They do not induce alloreactivity in vitro with incompatible lymphocytes and suppress the lymphocyte response to antigens. These findings support the idea that ASCs share immunosuppressive properties with bone marrow-derived MSCs and may therefore represent a new alternative for conditions related to the immune system [ 75 - 77 ].

Suitability of infrastructure and technical staff

Any procedure related to cell therapy requires strict control of manipulation and facilities.
In addition, it should not be forgotten that cell therapy products are considered drugs, and the same or a similar type of regulation should therefore be followed for them. Products must be carefully detailed and described, stating whether autologous, allogeneic, or xenogeneic cells are administered. The US Food and Drug Administration [ 78 ] also classifies as xenogeneic those human cells that have had ex vivo contact with living non-human cells, tissues, or organs. It should also be detailed whether cells have been manipulated together with non-cell materials such as synthetic or natural biomaterials, or with other types of agents such as growth factors or serum. As regards the production process, a detailed description must be given of all procedures related to product quality in the Standard Operating Procedures (SOPs), as for conventional medical products. The purity, safety, functionality, and identity criteria used for conventional drugs must be met. Because of the characteristics of these products, their storage period before sale or distribution should necessarily be shorter, as they obviously cannot be subject to prior sterilization (hence the use of cryopreservation as the most adequate storage method). Therefore, the production process must occur in a highly aseptic environment with comprehensive controls of both raw materials and handlers. Needless to say, the production process should be highly reproducible and validated both on a small scale for a single patient and on a large scale. For an autologous therapy procedure, cell harvesting from the patient will be aimed at collecting healthy cells whenever possible; in some cases, if no mosaicism exists and the disease is inherited, all cells will carry the relevant mutation, and the procedure will not be feasible. In hemophilia the situation may be favorable, because mosaicism is found in 30% of the cases [ 79 ].
Cell therapy products should adhere to the Current Good Manufacturing Practices, including quality control and quality assurance programs, which establish minimum quality requirements for management, staff, equipment, documentation, production, quality control, contracting-out, claims, product recall, and self-inspection. Production and distribution should be controlled by the relevant local or national authorities based on the International Conference on Harmonization of Pharmaceuticals for Human Use, which standardizes the potential interpretations and applications of the corresponding recommendations [ 80 ]. It is of paramount importance to prevent potential contamination, both microbiological and by endotoxins, due to defects in environmental conditions, handlers, culture containers, or raw materials, or cross-contamination with other products prepared at the same production plant. Care should be taken with methods for container sterilization and control of raw materials and auxiliary reagents, use of antibiotics, use of High Efficiency Particulate Air (HEPA) filters to prevent airborne cross-contamination, separate handling of materials from different patients, etc. In compliance with official standard books such as the European Pharmacopoeia (Eur.Ph.) [ 81 ] or the United States Pharmacopeia (USP) [ 82 ], each batch of a biological product should pass a very strict and specific test depending on the characteristics of the cell therapy product, such as colorimetry, oxygen consumption, or PCR. Facilities where products are handled, packaged, and stored should be designed and organized according to the guideline Good Manufacturing Practice for Pharmaceutical Manufacturers (GMP) [ 83 ]. The most important rooms of these facilities are the so-called clean rooms, which are classified into four grades (A-D) depending on air purity, based on the number of particles of two sizes (≥ 0.5 μm, ≥ 5 μm).
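The grade assignment described above amounts to a simple lookup against per-size particle limits. The sketch below is a hypothetical helper, not part of any guideline: the numeric values are the commonly cited "at rest" maximum particle counts per cubic metre from the EU GMP Annex 1 clean-room classification, and should be verified against the current edition of the guideline before any real use.

```python
# Hypothetical helper: classify a clean-room measurement into grades A-D.
# Limits are the EU GMP Annex 1 "at rest" maximum particle counts per m3
# for the two monitored sizes (>= 0.5 um, >= 5.0 um) -- illustrative only.
AT_REST_LIMITS = {
    "A": (3_520, 20),
    "B": (3_520, 29),
    "C": (352_000, 2_900),
    "D": (3_520_000, 29_000),
}

def classify_grade(count_05um, count_5um):
    """Return the cleanest grade whose limits both counts satisfy, or None."""
    for grade, (limit_05, limit_5) in AT_REST_LIMITS.items():
        if count_05um <= limit_05 and count_5um <= limit_5:
            return grade
    return None  # dirtier than grade D at rest
```

For example, a room measuring 3,000 particles ≥ 0.5 μm and 25 particles ≥ 5 μm per cubic metre would qualify as grade B, because the ≥ 5 μm count exceeds the grade A limit but not the grade B limit.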
Other parameters such as temperature, humidity, and pressure should also be taken into account and monitored because of their potential impact on particle generation and microorganism proliferation. As regards the number of technical staff, it should be the minimum required, and staff should be specially trained in the basic hygiene measures required for manipulation in a clean room. Material and staff flows should be separated and unidirectional to minimize cross-contamination, and control and documentation of all activities are necessary. Technical staff should have adequate qualification for both the conduct and the surveillance of all activities. Good Manufacturing Practice for Pharmaceutical Manufacturers is a general legal requirement for all biological medicinal products before their marketing or distribution. As in tissue donation, the use of somatic cells from a donor requires the method to be as minimally invasive as possible and to always be performed after obtaining signed informed consent. In this regard, risk-benefit assessment is even more necessary in this field than in other areas because of the sometimes high underlying uncertainty when stem cells are used [ 84 ].

Biomaterials for Cellular Therapy

In advanced therapies, particularly in cell therapy and tissue engineering, the biomaterial supporting the biological product has a role as important as, or even more important than, the product itself. Such biomaterials serve as the matrix for nesting of implanted cells and tissues because they mimic the functions of the tissue extracellular matrix. Biomaterials for cell therapy should be biocompatible to prevent immune rejection or necrosis. They should also be biodegradable and assimilable without causing an inflammatory response, and should have certain structural and mechanical properties.
Their primary role is to facilitate the location and distribution of somatic cells into specific body sites--in much the same way as excipients in classical pharmacology--and to maintain the three-dimensional architecture that allows for formation and differentiation of new tissue. Materials may be metals, ceramic materials, natural materials, and synthetic polymers, or combinations thereof. Synthetic polymers are biocompatible materials (although less so than natural materials) whose three-dimensional structure may easily and reproducibly be manufactured and shaped. Their degradation rate may be controlled, they are free from pathogens, and bioactive molecules may be incorporated into them. Their disadvantage is that they may induce fibrous encapsulation. Natural polymers such as collagen, alginate, or keratin extracts are also biocompatible and, like synthetic polymers, may incorporate active biomolecules; in addition, they mimic the natural structure and composition of the extracellular matrix. They have, however, the disadvantages that their degradation rate is not so easy to control, they have less structural stability, are sensitive to temperature, and may be contaminated by pathogens. In any case, the use of one or the other type of biomaterial is always related to the administration route in cell therapy protocols: implantation or injection. Thus, in the injection-based procedure, which is simpler and requires no surgery but can only be used for certain areas, biomaterials are usually in a hydrogel state, forming a hydrophilic polymer network, as occurs with PEO (polyethylene oxide), PVA (polyvinyl alcohol), PAA (polyacrylic acid), agarose, alginate, collagen, and hyaluronic acid. Research on biomaterials for cell therapy is aimed not only at finding or synthesizing new materials, but also at designing methods that increase their efficacy [ 85 ].
For example, control of the porous structure of these materials is very important for increasing their efficacy in tissue regeneration (through solvent casting/particulate leaching, freeze-drying, fiber bonding, electrospinning, melt molding, membrane lamination, or hydrocarbon templating). An attempt may also be made to increase biocompatibility through chemical (oxidation or hydrolysis) or physical modification. To increase cell adhesion and protein adsorption, water-soluble polymers may be added to the biomaterial surface. Bioactive molecules such as enzymes, proteins, peptides, or antibodies may also be coupled, as is standard and routine practice, to the biomaterial surface to provide it with functionality. Other substances, such as cytokines or growth factors that promote migration, proliferation, or the overall function of the cells used in therapy, may also be coupled. Another highly relevant line of research aims at minimizing immune rejection when the cells to be used are not autologous. Immunoisolation by cell microencapsulation (coating of biologically active products by a polymer matrix surrounded by a semipermeable membrane), which allows for bidirectional substance diffusion, is extremely important and is giving optimal results [ 86 - 89 ]. Many types of biomaterials are being developed for bone tissue regeneration based on either demineralized bone matrix or bladder submucosa matrix combined with poly(lactic-co-glycolic acid) (PLGA), which accelerates regeneration and promotes cell accommodation in in vivo bone formation [ 90 , 91 ]. For bypass procedures in large-diameter vessels, synthetic polymers such as expanded polytetrafluoroethylene (ePTFE) or polyethylene terephthalate (PET) fiber have been applied [ 92 ]. For peripheral nerve repair, the use of axonal guides made of several materials such as silicone, collagen, and PLGA [ 93 ], and recently of Schwann cells to accelerate axonal regeneration, has been reported [ 94 ].
Advances in the identification of the optimal characteristics of the matrix and an increased understanding of the interactions between cells and biomaterials will shape the development of future cell therapy protocols.

Production costs, biobanks and biosecurity in cell therapy

Production costs in cell therapy are high (currently, a treatment may cost more than 40,000 dollars), mainly because drug products based on cell therapy are prepared on a low and almost individual scale, but allogeneic procedures [ 95 ] and the availability of cryopreserved cell banks (biobanks) will lead cell therapy to occupy a place in the market of future pharmacology. Costs are accounted for by different items, all of them necessary, including multiple surgical procedures, maintenance of strict aseptic conditions, specific training of technical staff and maintenance of overall technical and staff support, specialized facilities, the need for producing small and highly unstable batches and, of course, the design and development of the different market strategies. The question arises as to whether these costs will be compatible with at least partial funding by governments, medical insurance companies, and public and private health institutions, and with current and future demographic movements ("demographic" patients) [ 96 ]. Until widespread use of allogeneic protocols becomes established, thus overcoming the problems derived from immune rejection, and although it is not certain whether allogeneic cell transplantation will ever be free from clinical complications, biobanks represent the hope that cell therapy will become a reality in the future [ 97 ]. Concerning production costs, even if biobanks exist, the production of cellular therapies often requires the use of cytokines, growth factors, and specialized reagents, which are very expensive. Stem cell banks [ 98 ] store lines of embryonic and adult human stem cells for purposes related to biomedical research.
Regardless of their public (nonprofit, anonymous donation) or private (donation limited to a client's environment) nature, stem cell banks may store cell lines from umbilical cord and placental tissue, rich in hematopoietic stem cells, or cell lines derived from various somatic tissues, either differentiated or not. There are banks of cryopreserved umbilical cord blood throughout Europe and North America. These were set up primarily for hematopoietic stem cell transplantation, but they are available for other clinical uses. Two of the most relevant international banks are the US National Stem Cell Bank (NSCB) [ 99 ] and the United Kingdom Stem Cell Bank (UKSCB) [ 100 ]. The NSCB was set up at the WiCell Research Institute in September 2005 and is devoted to the acquisition, characterization, and distribution of 21 embryonic stem cell lines and their subclones for use in research programs funded by the National Institutes of Health (NIH), and to providing the research community with adequate technical support. The UKSCB was created in September 2002 as an independent initiative of the Medical Research Council (MRC) and the Biotechnology and Biological Sciences Research Council (BBSRC), and serves as a storage facility for cell lines from both adult and embryonic stem cells which are available for use in basic research and in the development of therapeutic applications. Adult stem cells, which are safer to use, must be kept in culture from the time they are harvested until they are used. This may involve risks of contamination or pseudodifferentiation leading to a loss of the biological specificity of each target cell population in its physiological interaction with all other tissues.
This makes it essential, for biosafety purposes, to assess and monitor any ex vivo differentiation procedure, first in in vitro cultures and then in animal models, to verify the properties of the stem cell and its genetic material and to prevent risks, which may range from tumor formation to simple uncertainty about its differentiation [ 101 ]. If the biological material is human embryonic stem cells (hES), there is no standard method for characterization, but some of their specific characteristics may be assessed, including the nucleus-cytoplasm ratio, the number of nucleoli and the morphological characteristics of the colony, growth rate, percent clonogenicity, in vitro embryoid body formation, and teratoma formation after subcutaneous implantation in immunodeficient mice. Clinical use of this type of cells always requires control of their in vitro differentiation into multipotent or fully differentiated cells with tumorigenic potential. The cell characterization process in molecular and cellular terms is time-consuming and takes some years, especially as regards self-renewal pathways and development potential, which are very different in humans and murine models. Control of cell transformation is particularly important for the biosecurity of cell therapy products. Hematopoietic stem cells are extremely resistant to transformation due to two types of control: replicative senescence (phase M1) and cell crisis (phase M2). Cell senescence is usually induced by moderate telomere shortening or by oncogene expression leading to morphological changes such as cell lengthening or a change in the expression of specific senescence markers. The cell crisis phase occurs when some cell types evade this control until telomeres become critically short, chromosomes become unstable, and apoptosis is activated.
Spontaneous transformations have been reported in human (hMSCs) and murine (mMSCs) mesenchymal stem cells [ 102 ], suggesting that extreme caution is required when these cells are used in clinical treatments. However, it should also be noted that cell transformation occurs after a long time period (4 months), much longer than the culture periods of therapeutic cells (2-14 passages; 1-8 weeks), which are the minimum, and almost always sufficient, time to obtain an adequate number of cells for a cell therapy treatment, and during which the senescence phenomenon is less likely.

Biotechnological industry

Stem cell research is in its early stages of development, and the market related to cell therapy is therefore highly immature, but the results achieved to date raise great expectations. In order to analyze the current status and perspectives of this particular market, a distinction should be made between embryonic and adult stem cells, because the number of companies in these two fields is very different (approximately 30-40 working with adult versus 8-10 working with embryonic stem cells). Such difference is mainly due to the ethical and legal issues associated with each cell type or to the disparity of criteria between the different countries regarding the industrial and even intellectual property of the different technologies derived from stem cell research. Overall, the potential number of patients who could benefit from cell therapy in the US would be approximately 70 million patients with cardiovascular disease, 50 million with autoimmune diseases, 18 million with diabetes, 10 million with cancer, 4.5 million with Alzheimer's disease, 1 million with Parkinson's disease, 1.1 million with burns and wounds, and 0.15 million with medullary lesions (data taken from Advanced Cell Technology [ 103 ]).
Today, many pharmaceutical companies, including the big ones, are reluctant to enter this market because of the great investment required and because very hard competition is expected in the pharmaceutical market. To date, the most profitable strategy has been the signing of agreements between big pharmaceutical companies and small biotechnological companies whose activity is 100% devoted to cell therapy and regenerative medicine. Special mention should be made of induced pluripotent stem cells (iPSCs), which have raised great expectations in the pharmaceutical industry because the products to be derived from them, as noted above, will be applicable in a very wide range of development of new drugs and new procedures for the treatment of a great number of human diseases. At least 5-10 years will elapse until these products, not yet therapeutic and still under study, are in therapeutic use and yield an economic return to biotechnological companies. Today, this interesting potential of therapeutic products derived from iPSCs still faces great technical and scientific challenges, and a very long time will be required until they fulfill their promise. Overall, business models for marketing must be well devised and optimized, and also very well tested and based on accumulated experience with the various types of both adult and embryonic or induced stem cells [ 104 ].

Research perspectives of stem cells

The general objectives in the area of stem cell research in the next few years are related to the identification of therapeutic targets and potential therapeutic tests. Within these general objectives, other specific objectives will be related to studies of cell differentiation and cellular physiological mechanisms that will enhance the understanding, prevention, and treatment of some congenital or acquired defects.
Other objectives would be to establish the culture conditions of pluripotent stem cells using reliable cytotoxicity tests, and to establish the optimum type of cell or tissue to be transplanted depending on the disease to be treated (bone marrow for leukemia and chemotherapy; nerve cells for treating conditions such as Parkinson's and Alzheimer's diseases; cardiac muscle cells for heart diseases; or pancreatic islets for the treatment of diabetes). The current reality is that, although extensive research is ongoing and encouraging partial results are being achieved, there is still much to be known about the mechanisms of human development and all the differentiation processes involved in the whole process from fertilization to the full development of an organism. In this, which appears so simple, lies the "mystery" surrounding the differentiation of the different stem cells and the many factors that condition it. A second pending question is the efficacy and safety testing of stem cell-based drugs or procedures, to be performed in both animal and human models in the corresponding phase I-III clinical trials. The final objective of stem cell research is to "cure" diseases. Theoretically, stem cell therapy is one of the ideal means to cure almost all known human diseases, as it would allow for replacing defective or dead cells with normal cells derived from normal or genetically modified human stem cell lines [ 105 ]. If, as expected, such practices become possible in the future, stem cell research will shift the paradigm of medical practice. Some scientists and healthcare professionals think, for example, that Parkinson's disease, spinal cord injury, and type 1 diabetes [ 106 ] may be the first candidates for stem cell therapy. In fact, the US Food and Drug Administration has already approved the first clinical trial of products derived from human embryonic stem cells in acute spinal cord injuries [ 107 , 108 ].
Human stem cells, mainly autologous bone marrow cells, autologous and allogeneic mesenchymal cells, and some allogeneic neural cells, are currently being assessed in various clinical studies. As regards transplantation of bone marrow and mesenchymal cells, much data demonstrating their safety are already available, while the reported efficacy results are variable. The most likely and convincing explanation for this is that many mechanisms of action of these cells are not known in detail, which makes results unpredictable. Despite this, there is considerable optimism based on the immune suppression induced by mesenchymal stem cells and on their anti-inflammatory properties, which may be beneficial in many conditions such as graft-versus-host disease, solid organ transplantation, and pulmonary fibrosis. Variable results have been reported after the use of mesenchymal stem cells in heart diseases, stroke, and other neurodegenerative disorders, but no significant effects were seen in most cases. By contrast, encouraging results were found in the correction of multiple sclerosis, at least in the short term. Neural stem cells may be highly effective in inoperable glioma, and embryonic stem cells in the regeneration of pancreatic beta cells in diabetes [ 109 ]. The change in policy regarding research with embryonic stem cells by the Obama administration, which heralds a change of environment leading to increased cooperation in the study and evaluation of stem cell therapies, opens up new and better expectations in this field. The initiative by the California Institute for Regenerative Medicine [ 110 ] has resulted in worldwide collaboration on these new stem cell-based drugs [ 111 ]. Thus, active participation of governments, research academies and institutes, pharmaceutical and biotechnological companies, and private investment may shape a powerful consortium that accelerates progress in this field to the benefit of health.
Legal and regulatory issues of cell therapy Cell therapy is one of the advanced therapy products (ATPs), together with gene therapy and tissue engineering. A regulatory framework is required for ATPs to ensure patient accessibility to products and governmental assistance for their regulation and control. Certainty, scientific reality and objectivity, and flexibility to keep pace with scientific and technological evolution are the characteristics that define an effective regulation. Aspects to be regulated mainly include control of development, manufacturing, and quality using release and stability tests; non-clinical aspects such as the need for studies on biodistribution, cell viability and proliferation, differentiation levels and rates, and duration of in vivo function; and clinical aspects such as special dose characteristics, risk stratification, and specific pharmacovigilance and traceability issues. European Medicines Agency: Regulation in the European Union European countries may be classified into three groups based on their different positions regarding research with embryonic stem cells of human origin: i) countries with a restrictive political model (Iceland, Lithuania, Denmark, Slovenia, Germany, Ireland, Austria, Italy, Norway, and Poland); ii) countries with a liberal political model (Sweden, Belgium, United Kingdom, and Spain); and iii) countries with an intermediate model (Latvia, Estonia, Finland, France, Greece, Hungary, Switzerland, the Netherlands, Bulgaria, Cyprus, Portugal, Turkey, Ukraine, Georgia, Moldavia, Romania, and Slovakia). The Seventh Framework Program for Research of the European Union, coordinated by the European Medicines Agency, was approved in July 2006 [ 112 ]. This Seventh Framework Program provides funding for research projects with embryonic stem cells in countries where this type of research is legally accepted, while projects involving the destruction of human embryos will not be financed with European funds.
Guidelines on therapeutic products based on human cells are also established [ 113 ]. This regulation replaces the points in the prior 1998 regulation (CPMP/BWP/41450/98) referring to the manufacture and quality control of therapy with drugs based on human somatic cells, adapting them to the applicable law and to the heterogeneity of products, including combination products. Guidance is provided on the criteria and tests for all starting materials, manufacturing process design and validation, characterization of cell-based medicinal products, quality control aspects of the development program, traceability and vigilance, and comparability. It also provides specific guidance on matrices and on stabilizing and structural devices or products as combination components. The directive recognizes that conventional non-clinical pharmacology and toxicology studies may differ for cell-based drugs, but remain strictly necessary for predicting response in humans. It also establishes the guidelines for clinical trials as regards pharmacodynamic and pharmacokinetic studies, defining the clinically effective safe doses. The guideline describes the special consideration to be given to pharmacovigilance issues and the risk management plan for these products. The guideline therefore has a multidisciplinary nature and addresses development, manufacture, and quality control, as well as preclinical and clinical development of medicinal products based on somatic cells (Directive 2001/83/EC) and tissue engineering products (Regulation 1394/2007/EC2). It includes autologous or allogeneic (but not xenogeneic) protocols based on cells either isolated or combined with non-cell components, or genetically modified. However, the document does not address non-viable cells or fragments derived from human cells.
Legislation on cell therapy in Europe is based on three Directives: Directive 2003/63/EC (amending Directive 2001/83/EC), which defines cell therapy products as clinical products and includes their specific requirements; Directive 2001/20/EC, which emphasizes that clinical trials are mandatory for such products and describes the special requirements for approval of such trials; and Directive 2004/23/EC, which establishes standards of quality and safety for the donation, procurement, testing, processing, preservation, storage, and distribution of human tissues and cells. The marketing authorization application has been prepared by the European Medicines Agency so that cell therapy products must meet the same administrative and scientific requirements as any other drug [ 114 ]. US Food and Drug Administration (FDA): Regulation in the United States of America In the United States of America, restrictions are limited to research with federal funds. No limitations exist for research with human embryonic stem cells provided the funds come from private investors or specific states. In countries such as Australia, China, India, Israel, Japan, Singapore, and South Korea, therapeutic cloning is permitted. The FDA has developed a regulatory framework that controls both cell- and tissue-based products, based on three general areas: i) prevention of the use of contaminated tissues or cells (e.g. with AIDS or hepatitis); ii) prevention of inadequate handling or processing that may damage or contaminate those tissues or cells; and iii) clinical safety of all tissues or cells that may be processed, used for functions other than their normal functions, combined with components other than tissues, or used for metabolic purposes. The FDA regulation derives from the 1997 basic document " Proposed approach to regulation of cellular and tissue-based products " [ 115 ]. The FDA has recently issued updates to previous regulations referring to human cells, tissues, and all derived products [ 116 ].
This regulation provides an adequate regulatory structure for the wide range of stem cell-based products that may be developed to replace or repair damaged tissue, a structure needed by both basic and clinical researchers and by those working in biotechnological and pharmaceutical companies, who require greater understanding and information to answer many questions before submitting a stem cell-based product for clinical use. It should be remembered that, unlike conventional medicinal products, many stem cell-derived products are developed at universities and basic research institutions, where preclinical studies are also conducted, and that researchers there may not be familiar with the applicable regulations in this field. The FDA also provides specific recommendations on how scientists should address the safety and efficacy issues related to this type of therapy [ 117 ]. Any product based on stem cells or tissues undergoes significant processing, and it should therefore be fully verified that the cells retain their normal physiological function, whether or not combined with other non-tissue components, because they will generally be used for metabolic purposes [ 116 , 118 ]. This is why many such products, if not all, must also comply with the Public Health Services Act, Section 351, governing the granting of licenses for biological products, which requires FDA submission and application for investigational new drug protocols before conducting clinical trials in humans. The key points of the current FDA regulation for cell therapy products [ 117 ] include: i) demonstration of preclinical safety and efficacy; ii) no risk for donors of transmission of infectious or genetic diseases; iii) no risk for recipients of contamination or other adverse effects of cells or sample processing; iv) specific and detailed determination of the type of cells forming the product and of their exact purity and potency; and v) in vivo safety and efficacy of the product.
There is still much to be learned about the procedures to establish the safety and efficacy of cell therapy products. The greater the understanding of the biology of stem cell self-renewal and differentiation, the more precise the evaluation and prediction of potential risks. Development of techniques for cell identification within a mixed cell culture population and for follow-up of transplanted cells will also be essential to ascertain potential in vivo invasive processes and to ensure safety. Since new stem cell-based therapies develop very fast, the regulatory framework must be adapted and evolve to keep pace with such progress, although it may be expected to change more slowly. Meanwhile, the current regulations must provide the framework for ensuring the safety and efficacy of the next generations of stem cell-based therapeutic products. Bioethical aspects of cell therapy Ethics is not in itself a discipline within human knowledge, but a "dialogue" in which each person, from his/her stance, gives his/her opinion and listens to the other person's opinion. Most cell therapy protocols have not been controversial. The exception is therapy with human embryonic stem cells, which has raised moral and ethical issues [ 119 , 120 ]. Such considerations refer to donor consent, problems associated with oocyte collection, and the issue of destruction of human embryos [ 121 ]. Guidelines--ranging from total prohibition to controlled permissiveness--defining what may be permitted in research with pluripotent stem cells have been issued in countries all over the world [ 122 ]. All such guidelines reflect the different views about when life starts during human embryonic development, as well as regulation of measures to protect oocyte donors and to reduce the probability of human embryo destruction [ 123 ]. There is general international agreement that the results of stem cell research should not be applied in humans without prior ethical scrutiny.
For this purpose, 42 European countries have had national ethics committees since 2006, and a President's Council on Bioethics with an advisory role in bioethical matters was created in the US in 2001. The European Commission currently has the Group on Ethics in Science and New Technologies, an advisory, independent, and plural multidisciplinary body [ 124 ], and in other countries, such as the United Kingdom, legislation on practice and bioethics has been clearly established for several years [ 125 ]. The Ethics and Health Team at the World Health Organization [ 126 ] acts as a permanent secretariat for the Global Summit of National Bioethics Commissions and cooperates with the European Conference of National Ethics Committees (COMETH) [ 127 ]. UNESCO, in turn, created the International Bioethics Committee in 1992 [ 128 ]. In the United States, the National Institutes of Health provides detailed and updated information on various aspects related to stem cells [ 129 ] in order to educate and update on the different viewpoints on bioethical issues as science and technology progress in the field of cell therapy. The National Academy of Sciences issued in 2005 its first set of ethical standards for stem cell research [ 130 ], which were updated in 2007, 2008, and 2010 to adapt the guidelines to rapid scientific and political advances, by the Human Embryonic Stem Cell Research Advisory Committee created in 2006 with the support of the Ellison Medical Foundation, The Greenwall Foundation, and the Howard Hughes Medical Institute. These updates and amendments have revised the guidelines of the different national academies and take into account the new role of the National Institutes of Health with regard to research with human embryonic stem cells.
The Presidential Commission for the Study of Bioethical Issues advises President Obama on any bioethical issues that may arise from advances in biomedicine and in related areas of science and technology [ 131 ]. This commission works to identify and promote policies and practices ensuring ethically responsible actions in scientific research, health care, and technological innovation. The Kennedy Institute of Ethics at Georgetown University Library and Information Services [ 132 ] allows searching of books, newspapers, journal articles, and other materials on bioethical issues. In addition, the International Society for Stem Cell Research [ 133 ] and, among others, the Bioethics Advisory Committee (BAC) Singapore [ 134 ] have set up ethical, legal, and social regulations derived from research in biomedical sciences and act as an advisory public service on stem cells. In conclusion, scientists are aware of the need for ethical evaluation of their research. The Declaration on Science and the Use of Scientific Knowledge of the 1999 World Conference on Science held in Budapest, entitled Science for the Twenty-First Century: a New Commitment, attests to this awareness [ 135 ]. This declaration states that scientific research and the use of scientific knowledge should respect human rights and the dignity of human beings, in accordance with the Universal Declaration of Human Rights and the Universal Declaration on the Human Genome and Human Rights. The special responsibility of scientists for preventing uses of science that are ethically incorrect or have a negative impact on society is also established. Commitments are established [ 136 ] to teach the next generations of scientists that ethics and responsibility are part of their daily training and work, and to warn about any potential dilemmas that may arise in the future with the inexorable progress of science.
There are two general basic issues related to bioethics that should be considered carefully and separately: first, scientific and therapeutic relevance, and second, the cost of cryopreservation over time. In terms of relevance, it should be considered that the cells must be useful for the treatment of a specific disease, but the exact time of their use is not known, and they therefore have to be cryopreserved. From a bioethical viewpoint, this is more questionable when dealing with embryos, whose cryopreservation must be authorized by the parents and which will be kept either for the parents' own use or for donation. These embryos may ultimately be put to scientific use in research with embryonic stem cells, in which case the bioethical conflict may be further aggravated. The second aspect is the cost of cryopreservation. In some cases, such as preservation of umbilical cord blood, private biobanks are mainly used today, which may lead to significant discrimination between people who cannot afford payment for such banks and those who can. Although ethical issues are less questionable in the case of adult stem cells than in that of embryonic stem cells, the Council of Europe's Steering Committee on Bioethics [ 137 ] has prepared an additional protocol to the Convention on Human Rights and Biomedicine [ 138 ], which represents a general ethical and legal framework for signatory countries. This document details the different conditions, such as the prerequisite that a research project with either adult or embryonic stem cells be approved by an independent committee competent in the corresponding field, assessing the relevance of the research purpose and its multidisciplinary aspects from the bioethical viewpoint.
Signature of an informed consent by the donor, the research or hospital center, and the principal investigator of the project, explaining in detail the potential risks and benefits and informing on the rights and safeguards, is also established as an indispensable condition.
Conclusions In recent decades, great interest has arisen in research in the field of stem cells, which may have important applications in tissue engineering, regenerative medicine, and cell and gene therapy. There is, however, much to be investigated about the specific efficacy and safety characteristics of the new drugs based on this type of cell. Cell therapy is based on the transplantation of live cells into an organism in order to repair a tissue or restore lost or defective functions. Recent studies have shown that mesenchymal stem cells (MSCs) support hematopoiesis and immune response regulation, and they represent an optimum tool in cell therapy because of their easy in vitro isolation and expansion and their high capacity to accumulate in sites of tissue damage, inflammation, and neoplasia. In addition, adipose-derived stem cells (ASCs) secrete many cytokines and growth factors with anti-inflammatory, antiapoptotic, and immunomodulatory properties. This makes these stem cells optimum candidates for cell therapy. Induced pluripotent stem cells (iPSCs) derived from somatic cells are revolutionizing the field of stem cells. They have potential value for the discovery of new drugs and the establishment of cell therapy protocols because they show pluripotentiality to differentiate into cells of all three germ layers. The iPSC technology offers the possibility of developing patient-specific cell therapy protocols because the use of genetically identical cells may prevent immune rejection; moreover, unlike embryonic stem cells, iPSCs do not raise a bioethical debate and are therefore a "consensus" alternative that does not require the use of human oocytes or embryos. Cell therapy applications are related to the treatment of organ-specific diseases such as diabetes or liver diseases. Another relevant application of cell therapy is the development of cancer vaccines based on dendritic cells or cytotoxic T cells, in order to induce natural immunity.
Other applications, still in their first steps, include the treatment of hereditary monogenic diseases such as hemophilia. Until the widespread use of allogeneic protocols becomes established, thus overcoming the problems derived from immune rejection, biobanks represent the hope for cell therapy to become a reality in the future; control of cell transformation is also particularly important for the biosecurity of cell therapy products. Stem cell research is in its early stages of development, and the market related to cell therapy is therefore highly immature, but the results achieved to date raise great expectations. Today, many pharmaceutical companies, including the big ones, are reluctant to enter this market because of the great investment required and because very hard competition is expected in the pharmaceutical market. The general objectives in this area in the next few years are related to the identification of therapeutic targets and potential therapeutic tests. Within these general objectives, other specific objectives will be related to studies of cell differentiation and cellular physiological mechanisms that will enhance understanding, prevention, and treatment of some congenital or acquired defects. Other objectives would be to establish the culture conditions of pluripotent stem cells using reliable cytotoxicity tests and the optimum type of cell or tissue to be transplanted depending on the disease to be treated. Up to now, most cell therapy protocols have not been controversial. The exception is therapy with human embryonic stem cells, which has raised moral and ethical issues. Such considerations refer to donor consent, problems associated with oocyte collection, and the issue of destruction of human embryos. Guidelines--ranging from total prohibition to controlled permissiveness--defining what may be permitted in research with pluripotent stem cells have been issued in countries all over the world.
Bioethical aspects related to the scientific and therapeutic relevance and the cost of cryopreservation over time will need to be considered, especially with respect to embryos that may ultimately be used as a source of embryonic stem cells, in which case the bioethical conflict may be further aggravated. Also, a regulatory framework will be required to ensure patient accessibility to products and governmental assistance for their regulation and control.
There is much to be investigated about the specific characteristics of stem cells and about the efficacy and safety of the new drugs based on this type of cell, both embryonic and adult stem cells, for several therapeutic indications (cardiovascular and ischemic diseases, diabetes, hematopoietic diseases, liver diseases). Recent progress in nuclear transfer from human somatic cells, together with iPSC technology, has made available cell lineages of all three germ layers genetically identical to those of the donor patient, which permits safe transplantation of organ- and tissue-specific adult stem cells with no immune rejection. The main objective is the characterization and expansion of stem cells so as to maximize both the efficacy (proper selection of a stem cell and maximum effect) and the safety of stem cell-derived drugs. Other considerations to take into account in cell therapy will be the suitability of infrastructure and technical staff, biomaterials, production costs, biobanks, biosecurity, and the biotechnological industry. The general objectives in the area of stem cell research in the next few years are related to the identification of therapeutic targets and potential therapeutic tests, studies of cell differentiation and physiological mechanisms, culture conditions of pluripotent stem cells, and efficacy and safety tests for stem cell-based drugs or procedures to be performed in both animal and human models in the corresponding clinical trials. A regulatory framework will be required to ensure patient accessibility to products and governmental assistance for their regulation and control. Bioethical aspects related to the scientific and therapeutic relevance and the cost of cryopreservation over time will need to be considered, especially with respect to embryos that may ultimately be used in research as a source of embryonic stem cells, in which case the bioethical conflict may be further aggravated.
Competing interests The author declares that he has no competing interests. The author is Principal Investigator of a preclinical project (not a clinical trial) on gene and cell therapy for the treatment of haemophilia. Authors' contributions AL conceived and designed the manuscript, made intellectual contributions, performed the acquisition, analysis, and interpretation of literature data, and drafted the manuscript and the final revised version.
J Transl Med. 2010 Dec 10; 8:131
PMC3014894
21122156
Background A number of techniques exist for the treatment of distal radius fractures, including closed reduction and cast immobilization, percutaneous pin fixation, external fixation, open reduction and internal fixation with a dorsal or volar plating system, or a combination of small plate systems [ 1 ]. In particular, many clinical reports have demonstrated that internal fixation of unstable distal radial fractures with a volar locking plate system provides excellent outcomes [ 2 - 6 ]. These excellent results are associated with the prevention of radial shortening, malunion, and articular incongruity based on the stable fixation of a volar locking plate system. A number of volar plate systems have been designed, and biomechanical studies have reported the stability and ultimate strength of the plates in testing to failure under axial compression [ 7 - 10 ]. The Acu-Loc ® Targeted Distal Radius system has recently become available as a volar locking plate characterized by 2 or 3 distal locking screws that target the radial styloid to provide fixation of radial styloid fragments [ 11 ]. However, it is unknown whether the radial styloid screws increase the stability of volar plating system fixation along the entire distal radius. We hypothesized that a significant difference in the biomechanical stability of unstable distal radial fractures exists between volar locking plate fixation with and without the radial styloid screws. The purpose of this study was to evaluate whether the distinctive screws targeting the radial styloid were effective in the stable fixation of distal radial fractures using a cadaver unstable intra-articular fracture model.
Methods Specimen and Preparation Six matched pairs of fresh-frozen human cadaver wrists, complete from the proximal forearms to the metacarpal bones, were procured for this study. The average age at the time of death was 76.8 years (range, 59 - 83). One radius from each matched pair was randomly assigned to each of the 2 volar plate fixation groups. Specimens were thawed at room temperature on the day of testing. Skin and soft tissues were removed, and the wrist capsule, interosseous membrane, triangular disc, and the capsule of the distal radioulnar joint were left intact. A standardized 3-part intra-articular and severely comminuted fracture was simulated as reported previously, with some modification [ 7 , 9 ]. Briefly, a 1-cm transverse gap was made at a point 2 cm proximal to the articular surface of the lunate fossa. A second sagittal split osteotomy was performed between the scaphoid and lunate fossa under protection of the wrist and distal radioulnar joints, creating an unstable intra-articular fracture with both radial- and ulnar-side fracture fragments. In addition, polymethyl methacrylate was mounted on the metacarpal bones of each specimen to simulate axial loading of the distal radius across the intact wrist at full extension (Figure 1 ). Specimens were then fixed with the Acu-Loc ® volar plate system (Acumed, Hillsboro, OR). Two locking screws were used to fix the ulnar fragment and 2 more to fix the radial fragment, while 2 locking screws and one cortical screw were used to fix the proximal fragment. In addition, the radial fragment was fixed with (+) or without (-) 2 locking screws targeting the radial styloid (Figure 2 ). Biomechanical Testing The proximal radius was placed in a materials testing machine (Autograph, Shimadzu, Kyoto, Japan), and a load frame was mounted to the flat surface of the polymethyl methacrylate on the metacarpal bones of each specimen at full extension of the wrist (Figure 3 ).
Each specimen was loaded at a constant rate of 20 mm/min to failure. Load data were recorded by a computer and plotted graphically. Ultimate strength was defined as the peak load followed by a sharp decrease in the load-time curve [ 7 ]. Gap closing data were recorded using a digital video camera (Digital Movie Camera DMX-HD, Sanyo Ltd, Osaka, Japan). After testing, the distal radial bones, fixation plates, and screws were examined for signs of failure. Statistical Analysis Data from the 2 groups, fixation with (+) or without (-) the locking screws targeting the radial styloid, were compared. Student's t test and the Mann-Whitney U test were used to determine the significance of observed differences. A p value of less than .05 was considered statistically significant.
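The criterion for ultimate strength ("the peak load followed by a sharp decrease in the load-time curve") can be sketched as a simple peak-detection routine. This is only an illustration: the sample values, the function name, and the 20% drop threshold are assumptions, not part of the authors' testing protocol.

```python
# Sketch: locate ultimate strength (peak load before a sharp drop)
# in a sampled load-time curve. Data and drop threshold are hypothetical.

def ultimate_strength(loads, drop_fraction=0.2):
    """Return (index, load) of the first peak followed by a sharp decrease.

    A "sharp decrease" is assumed here to mean that the load falls by more
    than drop_fraction of the running peak value in a later sample.
    """
    peak_idx = 0
    for i, load in enumerate(loads):
        if load > loads[peak_idx]:
            # still climbing: track the running peak
            peak_idx = i
        elif loads[peak_idx] - load > drop_fraction * loads[peak_idx]:
            # load has dropped sharply after the peak: failure detected
            return peak_idx, loads[peak_idx]
    return peak_idx, loads[peak_idx]

# Hypothetical load samples (N) during constant-rate displacement
curve = [0, 150, 320, 500, 640, 700, 690, 680, 400, 250]
idx, strength = ultimate_strength(curve)
```

For this hypothetical curve the routine reports the 700 N sample as the ultimate strength, since the later drop to 400 N exceeds 20% of the peak.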
Results The average ultimate strength at failure of the volar plate fixation with radial styloid screws (913.5 ± 157.1 N) was significantly higher than that without radial styloid screws (682.2 ± 118.6 N) (Figure 4 ). The average change in gap between the radial or ulnar fragment and the proximal fragment (Figure 2 ) decreased with loading time. The gap distance in cases of fixation without radial styloid screws (-) tended to be lower from about 10 sec after the start of loading compared with that in cases with the screws (+), although there was no final difference between the 2 groups (Figure 5A ). On the other hand, the average gap distance between the ulnar fragment and the proximal fragment in cases of fixation without radial styloid screws (-) was lower after 10 sec under loading compared with that with the screws (+), although the differences were not statistically significant (Figure 5B ). Figure 6 shows examples of plates and screws after the experiments. In cases of volar plate fixation without radial styloid screws, both ulnar fragment screws were broken while the radial fragment screws remained intact. In contrast, both ulnar screws remained intact in cases of volar plate fixation with 2 radial styloid screws. After loading to failure, the number of bent or broken screws among the 4 distal screws inserted into the radial and ulnar fragments was examined. In volar plate fixation without radial styloid screws (-), 2 of 4 screws were bent or broken in two specimens, 3 of 4 screws in three specimens, and all 4 screws in one specimen. In volar plate fixation with radial styloid screws (+), no screws were bent or broken in three specimens, 1 of 4 screws in two specimens, and 2 of 4 screws in one specimen. Failure loading did not result in any bent or broken volar plates or proximal screws.
The number of specimens with bent or broken screws in the group fixed with radial styloid screws was significantly lower than that in the group fixed without radial styloid screws (Figure 7 ; Mann-Whitney U test, p = 0.0065). With regard to the 2 radial fragment screws, there were no bent or broken screws in the fixation with radial styloid screws (+) group, whereas 1 or 2 screws were bent or broken in four of six (66.7%) specimens without radial styloid screw fixation (-) (Figure 8A ). For the 2 ulnar fragment screws, all specimens (100%) revealed both screws to be bent or broken in the fixation without radial styloid screws (-) group, whereas three of six specimens (50%) in the fixation with radial styloid screws (+) group revealed both screws to be intact, and the number of specimens with bent or broken screws tended to be lower (Figure 8B ).
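As an aside, the two-sample comparison of ultimate strength can be reproduced from the summary statistics reported above (913.5 ± 157.1 N vs. 682.2 ± 118.6 N, with n = 6 wrists per group from the Methods). The sketch below assumes a pooled-variance Student's t test with SciPy; the article does not state which variance assumption was used.

```python
from scipy import stats

# Ultimate strength at failure (mean, SD, n) as reported in the Results;
# n = 6 matched wrists per group, as stated in the Methods.
mean_plus, sd_plus, n_plus = 913.5, 157.1, 6        # with radial styloid screws (+)
mean_minus, sd_minus, n_minus = 682.2, 118.6, 6     # without radial styloid screws (-)

# Student's t test computed directly from summary statistics
t_stat, p_value = stats.ttest_ind_from_stats(
    mean_plus, sd_plus, n_plus,
    mean_minus, sd_minus, n_minus,
    equal_var=True,
)
```

With these values the difference is significant at the .05 level (t ≈ 2.9, p ≈ 0.017), consistent with the significance reported for Figure 4.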
Discussion New developments in volar plate and locking screw design have improved the results of surgical treatment of distal radial fractures [ 2 - 6 ], and several biomechanical studies have shown that a volar plate and locking screw system is effective in stabilizing fractures against axial force [ 7 - 10 ]. Recently, the Acu-Loc ® Targeted Distal Radius system was designed for a best fit at the watershed line, with 2 rows of distal locking screws and 1 or 2 screws targeting the radial styloid, which theoretically provides greater stability for radial styloid fragments [ 11 ]. We undertook biomechanical testing to determine the efficacy of the distinctive screws targeting the radial styloid in the stable fixation of the entire distal radial fracture using a fresh-frozen human cadaver fracture model. In the present study, we showed that the radial styloid screws were effective in increasing the ultimate strength at failure of the volar plate fixation (Figure 4 ) and that their use led to a decrease in the number of bent or broken screws after failure loading (Figures 7 and 8 ). In cases of fixation without radial styloid screws (-), the ulnar fragment was prone to greater displacement than was the fragment with radial styloid screws (+) under axial loading. Interestingly, no difference in the gap amount at the radial fragments was found between fixation with (+) and without (-) the radial styloid screws. Furthermore, in all six specimens without radial styloid screws, both ulnar fragment screws were broken. By contrast, only one of six specimens with radial styloid screws revealed both ulnar fragment screws to be broken (Figure 8B ). Based on these results, the ulnar fragment appeared to be more intensively stressed than the radial fragment under axial loading of the distal radius at full wrist extension.
A previous study showed that force transmission with the wrist in a neutral position consisted of 50% across the scaphoid fossa, 35% across the lunate fossa, and the remaining 15% across the triangular fibrocartilage complex [12]. We speculated that the pressure distribution under axial loading with the wrist in full extension differed, although we did not measure pressure in the wrist. Furthermore, we demonstrated that radial styloid screws could significantly increase ulnar fragment stability in cases of volar plate fixation for intra-articular distal radial fracture. Thus, additional fixation using the radial styloid screws was effective in preserving the stability of unstable, intra-articular distal radial fractures. We recommend that radial styloid screws be used in volar plate fixation for distal radial fracture regardless of the presence or absence of a radial styloid fracture, although the additional styloid screw fixation is not critical. Recent trends in distal radial fracture fixation have emphasized anatomic reduction and rigid fixation allowing early mobilization and return to functional activities. Most previously reported studies directly loaded the isolated radius using a cadaver fracture model;[7,8,10] however, a more clinically relevant loading pattern was that used by Taylor et al[9], in which loading was directed across the wrist joint. In this study, we modified their fracture model so that the wrist was positioned in full extension, and axial compression was loaded through a flat palmar surface of polymethyl methacrylate on the metacarpal bones. This model better simulates clinical conditions, such as a fall on an outstretched hand or a push-up after internal fixation of an intra-articular, unstable distal radial fracture. There are several limitations of this study.
First, it was difficult to determine the failure mode for distal radial fracture given the small sample size of this study, although the data showed the tendency of the failure pattern and provide valuable information for planning fixation of unstable intra-articular fractures. Second, we could not examine the rigidity of the plate system because specimens included several joint spaces and soft tissue connections between joint spaces. Third, the distal radius was loaded across the wrist in an extended position only, not in a flexed or neutral position. Fourth, we did not examine a cyclic loading model using a physiological load.
Conclusion We showed that the distinctive screws targeting the radial styloid were effective in the stable fixation of distal radial fractures with the volar plate and locking screw system (Acu-Loc® volar plate system) in an unstable, intra-articular cadaver fracture model. Acu-Loc® volar plate systems were provided by Acumed, Hillsboro, OR.
Background Locking screws targeting the radial styloid theoretically provide greater stability for radial styloid fragments. However, it is unknown whether radial styloid locking screws increase the stability of volar plating system fixation along the entire distal radius. In this study, we evaluated the stability of volar plating system fixation with or without radial styloid screws in a biomechanical study using a cadaver fracture model. Methods Six matched pairs of fresh-frozen human cadaver wrists, complete from the proximal forearm to the metacarpal bones, were prepared to simulate standardized 3-part intra-articular, severely comminuted fractures. Specimens were fixed using the volar plating system with or without 2 radial styloid screws. Each specimen was loaded at a constant rate of 20 mm/min to failure. Load data were recorded, and ultimate strength and the change in gap between distal and proximal fragments were measured. Data for ultimate strength and screw failure after failure loading were compared between the 2 groups. Results The average ultimate strength at failure of the volar plate fixation with radial styloid screws (913.5 ± 157.1 N) was significantly higher than that without them (682.2 ± 118.6 N). After failure loading, the average change in gap between the ulnar and proximal fragments was greater than that between the radial and proximal fragments. The number of bent or broken screws in the ulnar fragment was higher than that in the radial fragment. The number of specimens with bent or broken screws was fewer in cases with radial styloid screws than in the fixation without radial styloid screws. Conclusion The ulnar fragment is more intensively stressed than the radial fragment under axial loading of the distal radius at full wrist extension. Radial styloid screws were effective in the stable volar plate fixation of distal radial fractures.
Competing interests The authors declare that they have no competing interests. No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article. All the authors have no conflicts of interest. Authors' contributions KI, YO and TW carried out the preparation of specimens, establishment of the fracture model and data analysis. The biomechanical experiment and data analysis were carried out by TK and MA. TY and MA participated in the study design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
CC BY
no
2022-01-12 15:21:37
J Orthop Surg Res. 2010 Dec 2; 5:90
oa_package/aa/43/PMC3014894.tar.gz
PMC3014895
21167023
Background Microorganisms are often used as cell factories to produce a wide range of metabolites and proteins. Metabolic engineering is a suitable method to increase the production levels of these desired compounds. Feasibility studies with lactic acid bacteria have been performed in which strains were constructed with increased production of metabolites such as D-alanine, sorbitol, riboflavin, and folate [1-4]. In Lactococcus lactis, overproduction of alanine dehydrogenase in a lactate dehydrogenase (LDH) deficient strain resulted in rerouting of the glycolytic flux towards alanine [3]. In another case, overexpression of the complete riboflavin gene cluster in L. lactis resulted in a high riboflavin producing L. lactis strain [2]. A third example is the combined overexpression of the folate gene cluster and the p-aminobenzoate (pABA) gene cluster in L. lactis, which resulted in a high folate producing strain [1]. The latter strain was able to produce 100-fold more folate (total folate levels) when compared to control strains. Folate biosynthesis proceeds via the conversion of GTP in seven consecutive steps towards the biologically active cofactor tetrahydrofolate (THF). The biosynthesis of THF includes two condensation reactions. The first is the condensation of pABA with 2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine to produce dihydropteroate. Subsequently, glutamate is attached to dihydropteroate to form dihydrofolate [5]. Without pABA, no THF can be produced, and THF is needed as the donor and acceptor of one-carbon groups (i.e., methyl, formyl, methenyl and methylene) in the biosynthesis of purines and pyrimidines, formyl-methionyl tRNA (fMet) and some amino acids [6,7]. The model organism Escherichia coli is commonly used for recombinant overexpression of proteins [8]. This micro-organism has a long history of application in the production of a vast range of proteins such as insulin, human growth hormone or interferon [9-11].
A problem with overexpression of recombinant or homologous proteins on high-copy plasmids is that the desired phenotype may be rapidly lost when propagated for prolonged periods of time [12]. One cause of this instability is metabolic burden [13,14]. In E. coli, for example, the overproduction of a truncated elongation factor EF-Tu leads to a reduced growth rate of the strain [15]. It is evident that this EF-Tu overproducing strain is handicapped because of the production of a non-functional protein. In this case the production of functional proteins is reduced, since the functional and non-functional proteins compete for the same resources of the translation machinery. Lactobacilli are commonly used to ferment food products such as meat, vegetables and dairy products [16]. Lactobacillus plantarum is a well-characterized lactic acid bacterium, and strain WCFS1 was the first in the genus Lactobacillus for which the entire genome sequence became publicly available [17]. Previously, a high folate-producing L. plantarum WCFS1 strain was constructed that produced more than 100-fold increased folate pools when compared to the control strain. Remarkably, this strain exhibited a 20-25% reduction in growth rate [18]. It remains unclear whether high production of specific secondary metabolites such as folate can provoke a large cellular response. This paper describes the impact of metabolic engineering of folate production on the overall performance of the cell. Functional genomics tools, including transcriptomics and metabolomics, were used to elucidate global effects of folate overproduction. Leads from this analysis were used to help explain the growth rate reduction upon the overexpression of the folate gene cluster.
Methods Bacterial strains, media and culture conditions Lactobacillus plantarum WCFS1 and derivatives thereof (see Table 7 for the complete list of strains and plasmids used) were cultivated at 37°C on Chemically Defined Medium (CDM), as described before [39]. Unless stated otherwise, CDM is complete. In a number of specific batch culture experiments, pABA was omitted or added at a concentration of 10 mg/L. Precultivations of L. plantarum harboring pNZ7021 and pNZ7026 were performed in non-pH-regulated batch cultures using 56 mM glucose as fermentable substrate. L. plantarum harboring pNZ7021 and pNZ7026 were also cultivated in a pH-regulated batch fermentor and in chemostat culture on CDM supplemented with 25 mM glucose. A concentration of 80 mg/L chloramphenicol (CM) was used in batch and continuous culture. For the construction of genetically modified strains, MRS broth and agar were used (Difco, Surrey, U.K.). For selection on MRS plates, 10 mg/L CM was added to the agar. Lactococcus lactis was grown at 30°C on CDM supplemented with 56 mM glucose as described previously [40,41]. Transformed L. lactis strains were cultivated and selected on M17 broth [42] and agar using 10 mg/L CM. Construction of genetically engineered strains Genomic DNA of L. plantarum WCFS1 was isolated using established procedures [43]. PCR was performed using PFX (Invitrogen, Breda, The Netherlands), applying PCR cycles of 94°C for 30 sec denaturation, 43°C for 30 sec primer annealing, and 68°C for elongation (1 min per kb). DNA ligation was performed using T4 DNA ligase (Invitrogen) by overnight incubation at 16°C. DNA fragments were mixed at a 5:1 insert:vector weight ratio. Two nisin-inducible vectors were constructed, based on pNZ8148 [44]. In one vector the folate gene cluster of L. plantarum was cloned under the control of the nisin promoter in the sense orientation and, in the other, in the antisense orientation.
The folate gene cluster was amplified in the sense orientation by PCR using LpfBnco-F and LpfPkpn-R as forward and reverse primers, respectively. Both primers were modified to introduce a restriction site for cloning of the DNA fragments (modified bases underlined in Table 7). The insertion plasmid pNZ8148 and the amplified DNA were digested with KpnI and NcoI. Both fragments were mixed and used for T4 DNA ligation. The DNA mix was transferred to L. lactis NZ9000 for transformation by electroporation, using established procedures [45]. The electroporated L. lactis suspension was plated and incubated for 40 h at 30°C. Chloramphenicol (CM) resistant colonies were checked for the presence of the proper plasmids by PCR with pNis-F and LpfB-R as forward and reverse primer, respectively. Positive colonies were grown, and plasmid DNA was extracted and isolated using Jetstar columns (Genomed GmbH, Bad Oeynhausen, Germany). The restriction profile of the plasmid was determined; the plasmid with the proper restriction profile was named pNZ7030. The antisense vector was made by amplification of the folate gene cluster using LpfBatg-F and LpfPkpn-R as the forward and reverse primers, respectively. The amplified linear DNA fragment was digested with KpnI, and pNZ8148 was digested with KpnI and PmlI. The digested PCR product and digested plasmid were mixed and used for T4 DNA ligation. The DNA mix was transferred to L. lactis NZ9000 for transformation as described above and plated on M17 plates with CM. After 40 h of growth, CM-resistant colonies were checked for the presence of the correct plasmid by PCR; pNis-F and LpfP-xbatest were used as forward and reverse primer, respectively. Positive colonies were grown, and plasmid DNA was extracted and isolated using Jetstar columns. The restriction profile of the plasmid was determined, and the plasmid with the proper restriction profile was named pNZ7031.
The plasmids pNZ8148, pNZ7030 and pNZ7031 were used for transformation of L. plantarum NZ7100 [46] by electroporation using established procedures [47], and transformants were plated on MRS with CM. CM-resistant colonies were checked for the proper plasmid by PCR, using the primers described above. Colonies with the proper plasmid were grown on CDM with 80 mg/L CM and stored at -80°C in glycerol stocks until further use. Continuous culture Chemostat cultivation was performed in a 1-L reactor (Applikon Dependable Instruments, Schiedam, The Netherlands) containing 0.5 L CDM. Temperature was controlled at 37°C. L. plantarum harboring pNZ7021 and pNZ7026 were inoculated in the reactor; the cultures were first allowed to grow exponentially until the maximal turbidity at 600 nm was reached. Next, the dilution rate of both cultures was set at 0.25 h⁻¹. Steady state was assumed after 5 volume changes. A stable pH of 5.5 was maintained by titration with 5 M NaOH; the pH was monitored by an ADI 1020 fermentation control unit (Applikon Dependable Instruments, Schiedam, The Netherlands). Anaerobic conditions were obtained by flushing the headspace of the reactor with nitrogen gas. Folate, pABA and pterin analyses Folate was quantified using the microbiological assay, including enzymatic deconjugation of polyglutamate tails [48,49]. Pterin pools were determined (after oxidation to the aromatic forms) by HPLC in the intracellular and extracellular fractions of L. plantarum WCFS1 cultures using the procedures described by Klaus [50]. The 6-hydroxymethylpterin standard for HPLC was purchased from Schircks (Jona, Switzerland). Transcriptome analysis Cultures of L. plantarum WCFS1 strains were quenched using the cold methanol method [51]. Total RNA was isolated and extracted as described before [52]. The RNA concentration was determined with the ND-1000 spectrophotometer (NanoDrop Technologies Inc., USA).
The quality of the isolated RNA was checked using the 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA); a 23S/16S rRNA ratio of ≥1.6 was taken as satisfactory. For cDNA synthesis, 5 μg RNA was used. Indirect labeling was performed with the CyScribe first-strand cDNA labeling kit (Amersham, United Kingdom) according to the manufacturer's protocol. The cDNA samples were labeled with cyanine 3 and cyanine 5. After labeling, the cDNA concentration and the labeling efficiency were determined using the ND-1000 spectrophotometer. Each microarray was hybridized with 0.5 μg labeled Cy3 and Cy5 cDNA. A total of 12 custom-designed microarrays (Agilent Technologies) were used for the comparison between L. plantarum harboring pNZ7021 and pNZ7026 in continuous culture. Both strains were also cultivated in pH-regulated batch culture on CDM with and without pABA; for this experiment 21 microarrays were used. Microarrays were hybridized and washed according to the manufacturer's protocol. Slides were scanned with a ScanArray Express scanner (Perkin-Elmer), using a 10-μm resolution. Images were analyzed with the ImaGene 4.2 software (BioDiscovery, Inc.). Raw data are deposited in GEO under accession numbers GSM226923 to GSM226943 for the batch experiment microarrays and GSM239110 to GSM239121 for the continuous culture experiment, respectively. The fraction of folate mRNAs as part of the total mRNA pool was determined as follows. First, the signals from the control spots on the custom-designed Agilent DNA microarrays, which are needed for validation purposes, were removed from the raw data set, ensuring that only the 8012 L. plantarum probes, representing 2792 genes (91.5% of the genome), were measured. For each probe, the intensity of the foreground signal and background signal was measured separately for the Cy3 and Cy5 channels. The pure probe signal was determined by subtracting the background from the foreground signal.
Total signal was determined by summing the raw probe signals of all 8012 probes; the folate signal was determined by adding up the raw probe signals of the 18 folate probes. Microarray hybridization schemes were made for the continuous culture experiment and the batch experiment performed in the presence and absence of pABA. The continuous culture scheme consisted of a loop design with 12 microarrays with the following samples hybridized on one array and labeled with Cy3 and Cy5, respectively: C1 and F1, F1 and C3, C3 and F2, F2 and C2, C2 and F3, and F3 and C1, C1 and C2, F2 and F1, C4 and F4, C2 and C4, F4 and F1, and F4 and C3. Here, C1, C2, C3, and C4 represent fourfold biological replicates from L. plantarum harboring pNZ7021. F1, F2, F3, and F4 represent fourfold biological replicates of L. plantarum harboring pNZ7026. The experimental scheme for the batch experiment performed with and without pABA consisted of a loop design with 21 microarrays with the following samples hybridized on one array and labeled with Cy3 and Cy5, respectively: C1+P and F1+P, F1+P and C2+P, C2+P and F3+P, F3+P and C3+P, C3+P and F2+P, F2+P and C1+P, C1+P and C2+P, F2+P and F3+P, C1-P and F1-P, F1-P and C2-P, C2-P and F3-P, F3-P and C3-P, C3-P and F2-P, F2-P and C1-P, C1-P and C2-P, F2-P and F3-P, C3-P and F1+P, F2+P and C1-P, C2+P and F3-P, F1-P and C3+P, and F2+P and F1-P. Here, C1+P, C2+P, C3+P, F1+P, F2+P, and F3+P represent threefold biological replicates of L. plantarum harboring pNZ7021 and pNZ7026, respectively, when grown in batch in the presence of pABA. C1-P, C2-P, C3-P, F1-P, F2-P, and F3-P represent L. plantarum harboring pNZ7021 and pNZ7026, respectively, when grown in batch in the absence of pABA. Microarray data were analyzed as described previously [52].
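The probe-level computation described above (background subtraction per probe, then the summed folate-probe signal expressed as a fraction of the summed total) can be sketched as follows. Probe names and intensities are purely illustrative, not data from the study.

```python
# Hedged sketch of the probe-signal computation described in the text:
# subtract the background from the foreground signal per probe, then
# express the summed folate-probe signal as a fraction of the total.
# Probe names and intensity values below are illustrative only.

def pure_signal(foreground, background):
    # "Pure probe signal" as described: foreground minus background.
    return foreground - background

def folate_fraction(probes):
    """probes: dict mapping probe id -> (foreground, background, is_folate)."""
    total = folate = 0.0
    for fg, bg, is_folate in probes.values():
        s = pure_signal(fg, bg)
        total += s
        if is_folate:
            folate += s
    return folate / total

# Illustrative mini data set (the real arrays had 8012 probes, 18 of them
# targeting the folate cluster):
probes = {
    "folB_1": (9000.0, 200.0, True),
    "folP_1": (7000.0, 150.0, True),
    "ldh_1":  (3000.0, 100.0, False),
    "rpoB_1": (1200.0, 180.0, False),
}
frac = folate_fraction(probes)
```

In practice this computation would be run separately for the Cy3 and Cy5 channels, as each was measured independently.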
The statistical significance of differences was calculated from variation in biological replicates, using the eBayes function in Limma (cross-probe variance estimation) and Holm's determination of significance. Only genes with a log2 ratio below -1 or above +1 and a Holm-adjusted value less than 0.1 were considered significant. The microarray platform and microarray data are available at the Gene Expression Omnibus http://www.ncbi.nlm.nih.gov/geo under the accession numbers given above. Metabolome analysis The complete metabolome of L. plantarum WCFS1 harboring pNZ7021 and pNZ7026 from continuous cultivation, in three independent replicates, was quenched using the sodium chloride method as described previously [53]. After dissolving in water, the intracellular metabolites were profiled in an untargeted manner on a reversed-phase HPLC-MS system with a high-resolution accurate-mass detector (QTOF Ultima MS) as described before [54]. A Synergi Hydro-RP column, 250 × 2.0 mm and 4 μm pore size (Phenomenex, USA), and a gradient of 0 to 35% acetonitrile in water (acidified with 0.1% formic acid) over 45 min were used to separate the metabolites. Full-scan accurate mass data (m/z 80-1500) were collected in both positive and negative electrospray ionization mode, using leucine enkephalin as a lock mass. Subsequently, mass signals exceeding three times the local noise were extracted, and the mass profiles of both strains were compared using MetAlign™ software [54-56]. This program is designed for determining significant differences in the relative abundance of mass signals originating from metabolites. Based on their accurate masses and MS/MS fragmentation patterns, metabolites were annotated using the PubChem metabolite database http://www.ncbi.nlm.nih.gov. Determining the relative copy number of the pNZ-derived plasmids The relative copy number was determined by quantitative PCR (qPCR).
One primer set was designed for the CM resistance gene on the plasmids pNZ7021 and pNZ7026; the other primer set was designed for the tryptophan gene, trpE, on the chromosome of L. plantarum WCFS1. The primers for the CM gene on the plasmid had the following sequences: CTTAGTGACAAGGGTGATAAACTCAAA and CAATAACCTAACTCTCCGTCGCTAT for the forward and reverse primers, respectively. The primer sequences for the tryptophan gene, trpE, on the chromosome of L. plantarum WCFS1 were as follows: GCTGGCGCGCCTAAGA (forward primer) and GCGGCACCTGCTCATAATG (reverse primer). The primers for the chromosome were used as a marker for the chromosomal copy number to which all plasmid copy numbers were compared; this determines the relative copy number. Total DNA was isolated from 5 ml of cell pellet of stationary-phase L. plantarum cultures using established procedures [43]. For qPCR, 0.2 μg of total DNA was used. The amplification efficiency was determined for genomic DNA of L. plantarum WCFS1, pNZ7021 plasmid DNA and pNZ7026 plasmid DNA; amplification factors ranging from 1.9 to 2.0 were considered to be reliable. Sybr Green (ABI, Cheshire, UK) was used as fluorescent dye for determining the level of amplification. The threshold cycle (Ct) was determined using the ABI Prism 7500 Fast Real-Time PCR system and software. The Ct value was used to calculate the relative copy number (N relative) of the plasmid in relation to the chromosome with the formula N relative = 2^(Ct,chromosome - Ct,plasmid), where Ct,plasmid is the Ct value for the plasmid and Ct,chromosome is the Ct value for the chromosome. All relative copy number determinations were performed in triplicate. RT-qPCR Cells of L. plantarum WCFS1 cultures were quenched using the cold methanol method as described above. RNA was extracted, quantified, and checked for quality as described above.
Primers were used to convert specific mRNA molecules into cDNA using a first-strand cDNA synthesis kit (Amersham, United Kingdom). In L. plantarum harboring pNZ8148 and pNZ7030 the following primers were used for cDNA synthesis: groES-re(2), pfk-re1, RQPCRfolBS, and RQPCRFPS. In L. plantarum harboring pNZ7031 the following primers were used for cDNA synthesis: groES-re(2), pfk-re1, RQPCRfolBAS, and RQPCRfPAS. The sequences of the primers can be found in Table 7. All cDNA samples were diluted 100-fold to allow accurate quantification by qPCR. Sybr Green (ABI, Cheshire, UK) was used as fluorescent dye for determining the level of amplification. For qPCR on groES, pfk, folBS, folBAS, folPS, and folPAS the following primer sets were used: groES-fo(2) and groES-re(2), pfk-fo1 and pfk-re2, FQPCRfolBS and RQPCRfolBS, FQPCRfolBAS and RQPCRfolBAS, FQPCRFPS and RQPCRFPS, and FQPCRfPAS and RQPCRfPAS, respectively. The threshold cycle (Ct) was determined using the ABI Prism 7500 Fast Real-Time PCR system and software. The Ct value was used to calculate the relative gene expression (N relative) using the formula N relative = 2^((Ct,RF - Ct,RN) - (Ct,EF - Ct,EN)). In this formula, Ct,RF and Ct,RN represent the Ct values in the reference strain for the folate gene and normalizing gene, respectively; Ct,EF and Ct,EN are the Ct values in the tested strain for the folate and normalizing gene, respectively. SDS-PAGE and protein quantification Protein was isolated as described previously [57]. To determine the level of protein overexpression, SDS-PAGE was performed as described previously [44]. The level of protein overexpression was quantified using ImageJ http://rsb.info.nih.gov/ij/. ImageJ includes a package for the conversion of protein bands into peaks; each peak can be quantified by determining its area.
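The two Ct-based calculations described in the qPCR sections above can be written out as short functions. This is a hedged sketch assuming, as is standard for qPCR, an amplification factor of 2 per cycle (so a lower Ct reflects higher template abundance); all Ct values in the examples are illustrative, not measured data.

```python
# Hedged sketch of the two Ct-based qPCR calculations used in the Methods.
# Assumes an amplification factor of exactly 2 per cycle; all Ct values
# below are illustrative.

def relative_copy_number(ct_plasmid, ct_chromosome):
    # Plasmid copies per chromosomal copy: a lower plasmid Ct means more
    # template, hence the chromosome-minus-plasmid exponent.
    return 2.0 ** (ct_chromosome - ct_plasmid)

def relative_expression(ct_rf, ct_rn, ct_ef, ct_en):
    # Delta-delta-Ct: the folate gene is first normalized against the
    # normalizing gene within each strain, then the tested strain (E) is
    # compared against the reference strain (R).
    return 2.0 ** ((ct_rf - ct_rn) - (ct_ef - ct_en))

# Example: the plasmid crosses threshold 3 cycles before the chromosome,
# corresponding to roughly 8 plasmid copies per chromosome.
copies = relative_copy_number(ct_plasmid=17.0, ct_chromosome=20.0)

# Example: the folate transcript crosses threshold 5 cycles earlier in the
# tested strain while the normalizing gene is unchanged, corresponding to
# a 2^5 = 32-fold overexpression.
fold = relative_expression(ct_rf=25.0, ct_rn=18.0, ct_ef=20.0, ct_en=18.0)
```

Because the measured amplification factors were 1.9-2.0 rather than exactly 2, these formulas slightly over- or underestimate the true ratios; using the measured efficiency as the base of the exponent would correct for this.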
Results Metabolite formation upon folate overproduction First, the impact of folate overproduction on metabolite formation and the transcript profile was determined. Second, specific analyses were performed to determine mechanisms that cause the observed growth rate reduction upon folate overproduction. Previously it was shown that homologous overexpression of the folate gene cluster of L. plantarum results in high folate pools: 55-fold more folate was produced in L. plantarum cultures harboring plasmid pNZ7026 (which carries all genes of the folate biosynthesis pathway) when compared to the control strain carrying plasmid pNZ7021 (empty expression vector) [18]. Using differential metabolomics, it was determined whether specific metabolites were more or less abundant in L. plantarum harboring pNZ7026 in comparison to L. plantarum carrying the control plasmid pNZ7021. Both strains were cultivated in a pH-controlled chemostat culture in the presence of pABA. At steady state, cells were harvested, quenched and extracted for metabolome analysis by LC-MS/MS. In total, 18 metabolites with differential abundance were detected (Table 1). Of this group, 15 metabolites were significantly more abundant in L. plantarum harboring pNZ7026 and 3 metabolites were significantly less abundant. Five of the 15 metabolites that were more abundant in L. plantarum harboring pNZ7026 could be linked directly to folate biosynthesis. The metabolite assigned as 10-formyl folate (Figure 1a) showed the largest difference in relative abundance; this molecule was 117-fold more abundant in L. plantarum harboring pNZ7026 than in the control strain (pNZ7021). We also detected a 33- and 2.1-fold increase in abundance of a 10-formyl folate isomer and 10-formyl tetrahydrofolate (Figure 1b), respectively. One metabolite, 2-amino-1,4-dihydro-4-oxo-6-pteridinecarboxylic methyl ester, is a known breakdown product of folate.
When folate is exposed to light, it decomposes into the latter compound and 2-amino-4-hydroxypteridine [19]. The other 11 metabolites cannot be linked directly to the folate biosynthesis pathway, and their involvement remains to be investigated. Only 3 metabolites were present in a significantly lower abundance (less than 2-fold) in L. plantarum harboring pNZ7026; these components were putatively annotated as thymidine, 3-dehydroshikimate and 1-amino guanosine. In conclusion, the overexpression of the folate gene cluster leads to a massive accumulation of 10-formyl folate and other folate-related metabolites. However, the global impact of folate overproduction on metabolite accumulation is relatively low, with only 18 metabolites showing a significantly different relative abundance. In addition, folate and pterin (intermediates in the folate pathway) pools were analyzed by a microbiological assay and HPLC in the intra- and extracellular fractions, respectively (Table 2). High intracellular pterin pools were detected only in L. plantarum harboring pNZ7026 in the absence of pABA. The principal pterin was identified as 6-hydroxymethylpterin from its chromatographic properties and was detected in the extra- and intracellular fractions. In the folate biosynthesis pathway, 6-hydroxymethylpterin (in its dihydro form) is activated by pyrophosphorylation and then condensed with pABA to form dihydropteroate, which is then glutamylated to yield folate. This demonstrates that L. plantarum WCFS1 cannot convert 6-hydroxymethylpterin into folate in a medium lacking pABA. In addition, Table 2 shows that, independent of the presence of pABA in CDM, the growth rate of L. plantarum harboring pNZ7026 was 25% lower than that of the control strain. In summary, the high folate or high pterin levels alone cannot explain the growth rate reduction of the folate overproducing strain.
Transcriptional profiling of folate overproducing cells DNA microarrays were used to analyze differential gene expression in response to high intracellular folate pools. For this study, we selected two different cultivation conditions (continuous and batch culture) to make a distinction between gene expression profiles specific for high folate pools and secondary effects of the overexpression of the folate gene cluster, e.g. differences in growth rate (as observed in batch cultures; Table 2). It is assumed that any similarity in gene expression between both cultivation conditions is due to the production of folate or the high folate pools. All genes that were significantly up- or down-regulated are presented in Table 3. The only genes that were differentially expressed in both batch and continuous culture were the 6 genes of the folate biosynthesis cluster (shown in bold and italics in Table 3). Because these genes were constitutively overexpressed on a high-copy plasmid, this response was expected. This analysis shows that high folate pools or the elevated synthesis of folate does not lead to a global transcriptional response. Instead, it was found that 8 and 11 other genes responded specifically to secondary effects of the overexpression of the folate cluster in continuous and batch culture, respectively (Table 3). In continuous culture, the 8 differentially expressed genes are involved in cation uptake or belong to a cell surface cluster that is predicted to be involved in the uptake of complex carbohydrates [20]. The biological relevance of the down-regulation of these genes is unclear. In the batch experiment, a total of 11 genes were significantly regulated upon the overexpression of the folate gene cluster. One gene cluster, involved in pyrimidine biosynthesis, appears to respond specifically to the growth rate reduction noted in Table 2.
Remarkably, this gene cluster was also down-regulated when the folate gene cluster was overexpressed in the absence of pABA (data not shown). The pyrimidine biosynthesis gene cluster is composed of 9 genes, from lp_2697 (pyrE) to lp_2704 (pyrR1), including a gene upstream of the pyrimidine gene cluster, lp_2696, and a pyrimidine transporter, pyrP (lp_2371). Two additional genes, ansB and rhe1, were up-regulated upon the overexpression of the folate gene cluster in batch culture. AnsB (E.C. 3.5.1.1) is involved in the conversion of L-asparagine into L-aspartate. Rhe1 is involved in the unwinding of RNA helices. The biological relevance of the differential expression of these genes under these conditions remains unclear. However, from these experiments it can be concluded that the reduced growth rate (as observed in batch culture in the presence and absence of pABA; Table 2) does not trigger a large transcriptional response; instead, only a few genes could potentially be linked to the growth rate reduction. Moreover, none of the genes of L. plantarum appears to respond specifically to high folate pools or the increased biosynthesis of folate. Mechanism of growth rate reduction Functional genomics tools such as transcriptomics and metabolomics showed that folate overproduction in L. plantarum has a low impact on the global transcription profile and metabolite formation. The growth rate of L. plantarum harboring pNZ7026 was reduced by 25% when compared to L. plantarum harboring pNZ7021 in the presence or absence of pABA (Table 2). This shows that a high folate pool itself cannot explain the growth rate reduction.
To gain insight into potential mechanisms for the growth rate reduction, we explored several possible causes: i) metabolic costs for mRNA synthesis and plasmid synthesis; ii) increased pools of mRNA and/or protein of the transcription/translation machinery; and iii) depletion of GTP by its drainage towards folate production. The experimental approaches to investigate the involvement of these mechanisms are described below. Effect of elevated mRNA synthesis and plasmid replication on the growth rate It was determined whether the growth rate reduction could be explained by increased metabolic costs for mRNA synthesis or plasmid replication. When comparing the signals of all transcripts (9606 gene-related probes representing the 3688 genes) on the microarrays with the signals of the folate biosynthesis transcripts (a total of 18 probes on the microarray), it was found that the latter are the highest expressed genes on the entire microarray, even higher than glycolytic and ribosomal protein transcripts. In L. plantarum WCFS1 harboring pNZ7021 and pNZ7026, the folate mRNAs are on average 0.16% and 8.3% of the total mRNA pool, respectively. Next, it was investigated whether the cost of mRNA synthesis could explain the reduced growth rate of L. plantarum harboring pNZ7026. Simultaneously, the difference in plasmid size between pNZ7021 and pNZ7026, 3.3 and 7.7 kb respectively, was also marked as a potential cause, reflecting the plasmid replication cost and assuming a similar copy number for both plasmids. To test this explanation, the growth performance, mRNA synthesis and plasmid copy numbers were determined for L. plantarum harboring pNZ8148 (empty vector), pNZ7030 (folate gene cluster in sense orientation) and pNZ7031 (folate gene cluster in antisense orientation).
Gene expression from plasmids pNZ7021 and pNZ7026 is constitutive, in contrast to pNZ8148, pNZ7030 and pNZ7031, in which gene expression is regulated by the addition of nisin. Using the strains with the latter plasmids we were able to distinguish between the effect of mRNA synthesis alone (L. plantarum harboring pNZ7031) and the combined effects of mRNA and protein synthesis (L. plantarum harboring pNZ7030). In silico analysis using MEME and MAST predicted no putative functional ribosome binding sites on the folate gene cluster in the antisense orientation (pNZ7031), indicating that no antisense proteins are likely to be made from this construct. Growth rates and folate pools were determined in the strains carrying the different plasmids (Table 4). The growth rate of L. plantarum harboring pNZ7030 was reduced regardless of whether gene expression was induced with nisin. The growth rates of L. plantarum containing pNZ8148 (control plasmid) and pNZ7031 (antisense-oriented plasmid) were unaffected. Interestingly, overexpression of the folate gene cluster in the antisense orientation resulted in a 6-fold increase in folate pools when compared to the control strain. By RT-qPCR it was confirmed that the L. plantarum strains harboring pNZ7030 and pNZ7031 produced the anticipated mRNAs (Table 5). The relative expression level in L. plantarum harboring pNZ8148 was arbitrarily set at 1 and the gene expression values in the two other strains were related to this strain. Overexpression of the folate genes in the sense and antisense orientations resulted in a vast increase in the expected mRNAs, but only in L. plantarum harboring pNZ7030 was a reduced growth rate observed, suggesting that mRNA production itself is not responsible for the growth impairment. The relative plasmid copy number of L. plantarum harboring pNZ8148, pNZ7030 and pNZ7031 before and after nisin induction is shown in Table 6.
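Relative expression values with the control strain set at 1, as in Table 5, are conventionally computed with the 2^-ΔΔCt method; whether the study used exactly this normalization scheme is not stated here, so treat the following as a generic sketch with hypothetical Ct values:

```python
# Relative expression by the 2^-ΔΔCt method: each target Ct is first
# normalized to a reference gene (ΔCt), then to the control strain
# (ΔΔCt). The control strain therefore comes out at exactly 1.0.
# All Ct values are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                 # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # same for the control strain
    return 2.0 ** -(d_ct - d_ct_ctrl)         # fold change vs control

# Control strain (e.g. carrying the empty vector) is 1.0 by construction:
ctrl = relative_expression(24.0, 18.0, 24.0, 18.0)

# An overexpressing strain with a much earlier Ct for the target gene:
over = relative_expression(16.0, 18.0, 24.0, 18.0)

print(ctrl, over)   # 1.0 and 2^8 = 256-fold
```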
This analysis shows that the relative plasmid copy number varies between the different constructs. The strain with the highest plasmid copy number is L. plantarum harboring pNZ7030, suggesting that increased plasmid synthesis could explain the growth rate reduction. However, a 5-fold increase in relative copy number for L. plantarum harboring pNZ7031, in both the induced and uninduced conditions, did not result in a growth rate reduction, showing that relative copy numbers may vary between strains and are not necessarily linked to growth rate effects. In conclusion, the observed growth rate reduction in the folate overproducer cannot be attributed to the increased metabolic costs of mRNA synthesis or plasmid replication.

Analysis of mRNA and protein pools upon overexpression of the folate gene cluster

Another explanation for the growth rate reduction of the folate-overproducing strain might be competition between growth-related and gratuitous transcripts/proteins for the transcription/translation machinery. It was described above that in L. plantarum WCFS1 harboring pNZ7026 the transcripts derived from the folate genes constitute 8.3% of the total mRNA pool. Since the growth rate of L. plantarum harboring pNZ7030 was also reduced, the same analysis was performed on the mRNA pools of this strain. The folate-specific mRNA pool in this strain constitutes an impressive 33% of the total mRNA pool. Consequently, overexpression of the folate gene cluster results in an enormous accumulation of folate-specific mRNAs. In addition, the relative abundance of the folate biosynthesis enzymes was determined by SDS-PAGE for L. plantarum WCFS1 harboring pNZ7021, pNZ7026, pNZ8148, pNZ7030, and pNZ7031 (the latter three with and without induction with nisin) (Figure 2). The protein band patterns on the SDS-PAGE gel were quantified using ImageJ.
The total peak area (representing the total protein content) and the peak area of the folate biosynthesis proteins were determined. Clear folate protein peaks that matched the expected protein sizes could be distinguished for L. plantarum harboring pNZ7030 (5 of the 6 proteins were detected; 1 protein is too small for detection on gel). For L. plantarum harboring pNZ7026, the two largest proteins were identified (Figure 2). The folate protein content in L. plantarum harboring pNZ7021, pNZ8148 and pNZ7031 was set at 0%. In L. plantarum containing pNZ7026 and pNZ7030 (after nisin induction) the folate proteins constitute 4% and 10% of the total protein pool, respectively. The relatively high production of folate-related transcripts and proteins, relative to the transcripts and proteins needed for growth, indicates that the metabolic burden of folate overproduction is an important factor.

The drain on GTP pools by folate production

Apart from being a precursor in folate biosynthesis, GTP is also consumed during the synthesis of DNA and RNA. The drain on the GTP pool due to excessive folate production was calculated for L. plantarum WCFS1 harboring pNZ7026. Based on the biomass composition of L. plantarum WCFS1 [21], it was determined that 0.10 mmol/g dry weight (DW) GTP is stored in DNA and RNA. In L. plantarum harboring pNZ7026 approximately 0.04 mmol/g DW GTP is stored in folate. Assuming a free GTP pool of approximately 0.5 mM [22] and an internal bacterial cell volume of 3.6 μl/mg protein [23], the free GTP pool is calculated to be on the order of 10^-6 mol/g DW and is therefore negligible. Based on these numbers it was estimated that 29% of the GTP in L. plantarum harboring pNZ7026 is directed into folate (or pterins). For L. plantarum harboring pNZ7021 this is less than 0.03%. Surprisingly, the large drain on GTP did not provoke a transcriptional response with respect to the expression of purine biosynthesis genes in L. plantarum harboring pNZ7026. These calculations show that folate overproduction may impose a large drain on the pool of important molecules such as GTP, without affecting the expression of genes related to purine biosynthesis.
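The 29% estimate follows directly from the pools quoted above; the arithmetic can be reproduced as follows (input numbers are those given in the text, converted to mol/g DW):

```python
# Fraction of synthesized GTP directed into folate, using the pools
# quoted in the text: 0.10 mmol/g DW GTP stored in DNA + RNA, and
# ~0.04 mmol/g DW stored in folate in the overproducer. The free GTP
# pool is estimated from 0.5 mM and 3.6 ul/mg protein; it comes out
# orders of magnitude smaller, so it is neglected.

gtp_nucleic_acids = 0.10e-3   # mol per g dry weight, in DNA + RNA
gtp_in_folate = 0.04e-3       # mol per g dry weight, in folate (pNZ7026)

# Free pool: 0.5 mM * 3.6 ul/mg protein * 1000 mg/g ~ 1.8e-6 mol/g protein
# (order-of-magnitude comparison only; per g protein vs per g DW)
free_gtp = 0.5e-3 * 3.6e-6 * 1000.0
assert free_gtp < 0.05 * gtp_nucleic_acids    # negligible, as stated

fraction = gtp_in_folate / (gtp_in_folate + gtp_nucleic_acids)
print(f"{fraction:.0%} of GTP directed into folate")   # ~29%
```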
Discussion

Overexpression of the folate gene cluster in L. plantarum leads to a high level of folate production, but it is also accompanied by a reduction in growth rate. This reduction, however, did not provoke a clear transcriptional or metabolic response. This is in contrast to Saccharomyces cerevisiae and Escherichia coli, where gene expression profiles were found to be profoundly different at varying growth rates [24, 25]. It appears that the folate-overproducing L. plantarum strain is unable to respond to the growth rate reduction. Our experiments demonstrated that the folate-specific mRNAs constitute 8.3% and 33% of the total mRNA pool in cells using the constitutive (pNZ7026) and nisin-inducible (pNZ7030) plasmids, respectively. These mRNA levels were even higher than those of the glycolytic and ribosomal protein transcripts. Based on the observed inability of the cell to respond to the imposed growth rate reduction, we hypothesize that the reduced growth rate in the overproducer is caused by the high proportion of gratuitous transcripts, which dilute all growth-related mRNAs (such as those for ribosomal protein synthesis). This is not trivial, since the growth rate itself is largely dictated by the level of protein synthesis and RNA production [26]. Additionally, it has been reported that at high growth rates mRNAs become ever more crowded with ribosomes, such that the average spacing of ribosomes on the mRNA shifts from 120 to 60 nucleotides [27]. When a huge number of ribosomes start to occupy gratuitous mRNAs (such as the folate mRNAs), translation of growth-related mRNAs (such as those for the ribosomal proteins themselves) will be reduced. In many cases, growth rate reductions upon the overexpression of gratuitous proteins have been referred to as a metabolic burden, and have been associated with the production of specific proteins that lead to a reduction in growth rate [15, 28].
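One way to see the dilution argument: if ribosomes load mRNAs roughly in proportion to mRNA abundance, the translation capacity left for growth-related proteins scales with the growth-related share of the mRNA pool. The sketch below is an illustrative toy model under that assumption, not an analysis taken from the study:

```python
# Toy dilution model: assume ribosomes distribute over mRNAs in
# proportion to mRNA abundance, so the synthesis rate of growth-
# related proteins scales with the growth-related share of the pool.
# Illustrative assumption only -- not a model from the study.

def growth_capacity(gratuitous_mrna_fraction):
    """Fraction of translation capacity left for growth-related mRNAs."""
    return 1.0 - gratuitous_mrna_fraction

# Folate-specific mRNA shares reported in the text:
for label, share in [("pNZ7026 (constitutive)", 0.083),
                     ("pNZ7030 (nisin-induced)", 0.33)]:
    print(f"{label}: {growth_capacity(share):.0%} capacity remains")
```

Note that for pNZ7026 this simple proportionality under-predicts the observed 25% growth rate reduction, consistent with the text's point that ribosome crowding can amplify the effect of mRNA dilution.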
However, since in bacteria the processes of transcription and translation are tightly coupled, it might very well be that dilution of growth-related mRNAs is crucial for explaining the growth rate reduction upon overexpression. Still, the need for rare tRNAs cannot be excluded as one of the factors explaining the growth rate reduction. It was found that three codons (AGG (tRNA-Arg), UGC (tRNA-Cys), and AUA (tRNA-Ile)) were 5-fold less abundant in the genome of L. plantarum WCFS1 than in the sequence of the folate gene cluster (unpublished data). In E. coli it was observed that the overexpression of tryptophanase (EC 4.1.99.1) resulted in a growth rate reduction mainly because it led to a shortage of specific tRNA molecules [29]. The reduced growth rate of L. plantarum harboring pNZ7026 suggests some form of stress, but besides the down-regulation of the pyrimidine gene cluster (in the batch cultures) no generic stress response was provoked. Applying stress to a microorganism often leads to slower growth. In E. coli, for example, the transcriptional response was determined in a strain carrying a plasmid for overproduction of chloramphenicol acetyltransferase in comparison with a wild-type strain carrying no recombinant plasmids [14]. From this experiment it was evident that overproduction of chloramphenicol acetyltransferase provoked stress to the cell, as indicated by the large number of stress-response and growth-related genes that were differentially expressed. The response of L. plantarum to folate overproduction is clearly different from the response of E. coli to overproduction of chloramphenicol acetyltransferase. One possible explanation is that we used a control strain carrying an empty plasmid, and therefore both the control strain and the overproducer experience the presence of chloramphenicol. The metabolomics data in our study indicate that only a few metabolites were significantly affected in their relative abundance in L. plantarum harboring pNZ7026. One metabolite, 10-formyl folate, was 117-fold more abundant in L. plantarum harboring pNZ7026. This was unexpected, since it is assumed that the reduced derivative, 10-formyl tetrahydrofolate, is the form produced by the organism. In L. lactis, for example, 10-formyl tetrahydrofolate was detected as the most dominant type of folate [30]. Since tetrahydrofolate derivatives are known to be unstable [31-33], this component may have been converted to the oxidized form (folate) in the bacterial cells or during metabolite extraction or LC-MS analysis. The compound 10-formyl folate is supposed to be biologically inactive [34]; however, we have demonstrated that 10-formyl folate can be used by the indicator strain in the microbiological folate assay. Remarkably, overexpression of the folate gene cluster in the antisense orientation results in a 6-fold increase in folate production when compared to the control strain. Possibly, the antisense mRNA stabilizes the sense mRNA: such partially double-stranded RNA is expected to be protected from degradation by RNA nucleases, which may explain the increased folate production and, consequently, the elevated folate pools. Such a mechanism of antisense overexpression could be exploited as a novel procedure for the overproduction of proteins or metabolites. Based on our results, we calculated that approximately 29% of the synthesized GTP is directed into folate, indicating that the growth rate reduction is, at least partly, linked to a shortage in the supply of GTP. This might have implications for protein synthesis, since GTP hydrolysis for protein synthesis alone accounts for more than 32% of the total energy turnover of a lactic acid bacterium [35, 36].
Transcriptome analysis showed no differential expression of the purine biosynthesis genes, suggesting either that there is no shortage in GTP supply, or that a GTP shortage does not provoke a transcriptional response of the purine genes. In Bacillus subtilis, a positive correlation was found between free GTP pools and the growth rate [37]. In L. lactis, the GMP-synthetase inhibitor decoyinine reduced the free GTP pool 2-fold and reduced the growth rate of the organism [22]. When comparing the metabolome of the control strain with that of the folate overproducer, no reduction in the relative abundance of GMP, GDP, or GTP was detected. The only metabolite that could be linked to a GTP shortage is 1-amino guanosine. However, it remains unclear whether this component can be phosphorylated, since few nucleoside kinases are known in lactic acid bacteria [36, 38].
Conclusion

High-copy plasmids are often used for the overproduction of commercially interesting proteins or metabolites. In Lactobacillus plantarum WCFS1, homologous overexpression of the entire gene cluster encoding folate biosynthesis results in high folate production. An important obstacle to robust folate production is the reduced growth rate of the overproducing strain. In the folate-overproducing L. plantarum strain we did not observe large changes in transcript or metabolite formation. Apparently, L. plantarum does not adequately respond to the adverse (metabolic) effects of excessively high levels of folate biosynthesis. A possible explanation for the observed growth rate reduction is competition between highly abundant non-growth-related mRNAs (of the folate biosynthesis pathway) and growth-related (housekeeping) mRNAs at the level of the transcription/translation machinery. This explanation is generally applicable to all microbial cell factories employing high-copy overexpression vectors.
Background

Using a functional genomics approach we addressed the impact of folate overproduction on metabolite formation and gene expression in Lactobacillus plantarum WCFS1. We focused specifically on the mechanism that reduces growth rates in folate-overproducing cells.

Results

Metabolite formation and gene expression were determined in a folate-overproducing and a wild-type strain. Differential metabolomics analysis of intracellular metabolite pools indicated that the pool sizes of 18 metabolites differed significantly between these strains. The gene expression profile was determined for both strains in pH-regulated chemostat culture and in batch culture. Apart from the expected overexpression of the 6 genes of the folate gene cluster, no other genes were found to be differentially expressed in either continuous or batch cultures. The discrepancy between the low transcriptome and metabolome response and the 25% growth rate reduction of the folate-overproducing strain was further investigated. Folate production per se could be ruled out as a contributing factor, since in the absence of folate production the growth rate of the overproducer was also reduced by 25%. The higher metabolic costs for DNA and RNA biosynthesis in the folate-overproducing strain were also ruled out. However, it was demonstrated that folate-specific mRNAs and proteins constitute 8% and 4% of the total mRNA and protein pools, respectively.

Conclusion

Folate overproduction leads to very little change in metabolite levels or in the overall transcript profile, while at the same time the growth rate is reduced drastically. This shows that Lactobacillus plantarum WCFS1 is unable to respond to this growth rate reduction, most likely because the growth-related transcripts and proteins are diluted by the enormous amount of gratuitous folate-related transcripts and proteins.
List of abbreviations used

The abbreviations used are: CDM: Chemically Defined Medium; Ct: Critical threshold; GTP: Guanosine triphosphate; HPLC: High Performance Liquid Chromatography; LC-MS/MS: Liquid Chromatography-Mass Spectrometry/Mass Spectrometry; Limma: Linear models for microarray data; MAST: Motif Alignment and Search Tool; MEME: Multiple EM for Motif Elicitation; mRNA: messenger RNA; pABA: para-aminobenzoic acid; PCR: polymerase chain reaction; SDS-PAGE: Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis; RT-qPCR: Reverse Transcriptase quantitative polymerase chain reaction.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AW constructed the overexpression strain, performed the microarray experiments, qPCR, folate analysis and SDS-PAGE, and drafted the manuscript. AEM and MF carried out some of the chemostat cultures for obtaining data for metabolomics and transcriptomics. DM developed the microarrays and helped analyze the data. RCHdeV performed the differential metabolomics work and analyzed the data. SMJK and ADH performed the pterin analysis. WMdeV and EJS supervised the study and reviewed the results. All authors have read and approved the final manuscript.
Acknowledgements

We thank Dr. Michiel Wels for the MEME and MAST searches for prediction of the ribosome binding sites on the sense and antisense mRNA of the folate gene cluster, Roger Bongers for discussing much of the RNA work, and Prof. Bas Teusink for his help in determining the flux of GTP through the folate biosynthesis pathway. We thank Dr. Matthe Wagenmaker for discussing much of the protein burden work. Work in the laboratory of ADH was supported by U.S. National Science Foundation award MCB-0839926. This work was part of the Kluyver Centre for Genomics of Industrial Fermentation, which is financially supported by the Netherlands Genomics Initiative.
Microb Cell Fact. 2010 Dec 17; 9:100
PMC3014896
21129218
Background

Cytochromes b can be defined as electron transfer proteins having heme b group(s) noncovalently bound to the protein. b-Type cytochromes possess a wide range of properties and functions in a large number of different redox processes. Among them, cytochromes b 5 are ubiquitously found in animals, plants, fungi and some bacteria. Microsomal and mitochondrial (outer membrane; OM) variants are known and are present in a membrane-bound form. On the other hand, bacterial cytochromes b 5 and those from erythrocytes and some animal tissues are water-soluble (functioning, for example, in the reduction of methemoglobin in erythrocytes and in the biosynthesis of N-glycolylneuraminic acid [1]). The membrane-bound (microsomal) form of cytochrome b 5 is required for numerous biosynthetic and biotransformation reactions, which include cytochrome P450-dependent reactions [2], desaturation of fatty acids [3], plasmalogen biosynthesis [4], and cholesterol biosynthesis [5, 6]. The role of cytochrome b 5 in microsomal P450-dependent monooxygenase reactions has been studied most extensively [2]. In addition, a number of fusion enzymes exist in nature containing cytochrome b 5 as a domain component. These include mitochondrial flavocytochrome b 2 (L-lactate dehydrogenase) [7], sulfite oxidase [8], the Δ5- and Δ6-fatty acid desaturases [9], and yeast inositolphosphorylceramide oxidase [10]. Plant and fungal nitrate reductases are also cytochrome b 5 -containing fusion enzymes [11]. For human cytochrome b 5 , only a few naturally occurring mutations recognized as a genetic disorder have been reported. One such example was found by Kurian et al. [12], who reported that the naturally occurring human cytochrome b 5 T60A mutant displayed an impaired hydroxylamine reduction capacity. They further observed that the protein expressed in a rabbit reticulocyte lysate system showed an enhanced susceptibility to proteolytic degradation.
The expression level in transfected HeLa cells was also significantly lowered. Another genetically confirmed example was reported previously: Steggles et al. identified a homozygous splice site mutation in the CYB5A gene, resulting in premature truncation of the protein and leading to a very high methemoglobin concentration in the red blood cells of the patient, consistent with methemoglobinemia type IV [13]. The patient exhibited female genitalia at birth but was determined to be a male pseudohermaphrodite, probably due to the low level of androgen synthesis caused by the lack of cytochrome b 5 activity, which has been shown to participate in 17α-hydroxylation in adrenal steroidogenesis [14]. Whereas more than 300 patients have been reported with hereditary methemoglobinemia types I or II, only a few cases of type IV have been reported. Thus, one may attribute the rarity of naturally occurring cytochrome b 5 mutations to the lethality of most type IV mutations. However, in a very recent study employing transgenic mice, Finn et al. found that cytochrome b 5 -null mice were viable and fertile, and produced grossly normal pups at the expected Mendelian ratios [15]. Further, the cytochrome b 5 -null mice exhibited a number of intriguing phenotypes, including altered drug metabolism, methemoglobinemia, and disrupted steroid hormone biosynthesis. In addition, the cytochrome b 5 -null mice displayed skin defects and retardation of neonatal development. These observations suggested that cytochrome b 5 might play a role in controlling the saturated/unsaturated homeostasis of fatty acids in higher animals, including humans. The membrane-bound form of cytochrome b 5 is associated with the endoplasmic reticulum. It has a molecular mass of 16,700 Da and contains about 134 amino acids in animals (Figure 1A).
It is composed of three domains: a hydrophilic heme-containing catalytic domain of about 99 amino acids; a membrane-binding hydrophobic domain of about 30 amino acids at the carboxy terminus of the molecule; and a membrane-targeting region represented by the 10-amino-acid sequence located at the carboxy terminus of the membrane-binding domain. Three-dimensional structures of a number of cytochromes b 5 are known [16], but only for the heme-containing hydrophilic catalytic domain [17]. Two His residues (His44 and His68) provide the fifth and sixth heme ligands (Figure 1A, B), and the two propionate groups of the heme b lie at the opening of the heme-binding pocket, which is formed by highly conserved hydrophobic amino acid residues (Figure 1A). The roles of individual amino acids have been investigated in the past by detailed site-directed mutagenesis, employing various structural, spectroscopic and electrochemical techniques, including X-ray crystallography [18-20], NMR [21-23], UV-visible absorption spectroscopy, and redox potential measurements [24]. Redox potentials of various forms of cytochrome b 5 span a range of ~400 mV. It is well documented that several factors can regulate and induce changes in the reduction potential of cytochrome b 5 spanning almost the entire range observed. The electrostatic contribution of surface charges might play an important role in adjusting the selectivity of protein-protein interactions. On the other hand, the difference in the redox potentials of two reactant proteins provides the driving force for electron transfer reactions. Thus, clarification of the regulatory mechanism of the redox potentials is essential for the understanding of biological electron transfer reactions. Biological redox potential measurements are usually conducted either by an equilibrating electrochemical method or by dynamic cyclic voltammetry.
A feature common to all past voltammetric experiments involving cytochrome b 5 and electrodes pre-treated with various thiol-containing aliphatic acids or related groups is the large difference between the half-wave potential (E 1/2 ) and the midpoint potential determined by the equilibrating method [25]. In the case of rat OM cytochrome b 5 , the midpoint potential determined by the equilibrating method was as low as -102 mV, whereas the half-wave potential was found to be +8 mV [25]. Similar large positive shifts were reported for bovine liver microsomal cytochrome b 5 (~+31 mV) [26] and chicken liver microsomal cytochrome b 5 (~+40 mV) [27]. The large positive shift (+110 mV) observed for rat OM cytochrome b 5 was attributed to the binding of multivalent cations, such as poly-L-lysine, which were used to shield the negatively charged protein surface and the negatively charged electrode surface in order to facilitate electron transfer [25]. The difference in the potentials was ascribed, initially, to the binding of multivalent cations to specific charged residues on the surface of cytochrome b 5 , such as Glu and Asp (Figure 1C) [25], leading to a modulation of the heme redox potential different from that measured by the equilibrating method. Later, however, a carboxylate of an exposed heme propionate group and conserved acidic residues (Glu44, Glu48, Glu56, and Asp60) (Figure 1C) (corresponding to Glu49, Glu53, Glu61, and Asp65, respectively, of human cytochrome b 5 ) were proposed to be responsible for the specific binding of multivalent cations [28]. The formation of such a complex results in a neutralization of the charge on the heme propionate and a lowering of the dielectric of the exposed heme microenvironment by excluding water from the complex interface.
These two factors act synergistically to destabilize the positive charge of the ferric heme with respect to the neutral ferrous heme, leading to a positive shift of the redox potential upon binding of poly-L-lysine [28, 29]. This postulate was partly verified by esterification of the heme propionate groups, which rendered the half-wave potential independent of the concentration of multivalent cations [28, 29]. In the present study, we focused on three conserved hydrophobic amino acid residues (Leu51, Ala59, and Gly67) lining the heme-binding pocket (Figure 1A, B). These residues had not been investigated previously despite their high conservation among the various members of the cytochrome b 5 protein family (Figure 1A). Gly67 is located beside the heme axial His residue (His68) and near the entrance of the heme-pocket crevice (Figure 1B). Leu51 and Ala59, on the other hand, are located at the bottom of the heme pocket (Figure 1B); the former is on the side of His44, the other heme axial ligand, and the latter is on the side of His68. These two residues might be essential for the stabilization of the heme prosthetic group in the hydrophobic heme pocket. Therefore, we selected replacement amino acid residues that would not be too hazardous for the maintenance of the heme cavity. Accordingly, we chose Thr, Ile, Val, Ala, and Ser residues for the replacement of the Leu51, Ala59, and Gly67 residues. We produced and purified site-directed mutants for these three sites, with particular interest in changes in the local structure and hydrophobicity of the heme pocket, which may affect the redox properties of cytochrome b 5 . We measured the spectroscopic and electrochemical properties of these mutants (redox potentials were analyzed by an equilibrating method and by cyclic voltammetry) to clarify the structural and electrochemical importance of the conserved residues.
Methods

Construction of the expression plasmids for wild-type and site-directed mutants of HLMW b 5

The gene coding for the soluble domain (amino acid residues Met1 to Leu99; LMW b 5 ) of human cytochrome b 5 in the pIN3/ b 5 /2E1/OR plasmid [30, 31] was subcloned into the pCW ori vector as previously described [32]. Then, the BamHI-HindIII fragment of the pC/LMW b 5 plasmid encoding the entire LMW b 5 (amino acid residues Met1 to Leu99) was inserted into the BamHI-HindIII site of pBluescript II KS(+) to form the plasmid pBS/LMW b 5 for easier handling during the site-directed mutagenesis. The nucleotide sequence of the pBS/LMW b 5 plasmid was confirmed with a DNA sequencer (PRISM 3100 Genetic Analyzer, ABI). Site-directed mutagenesis was conducted using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA, USA) according to the manufacturer's manual. The following mutagenic primers were used (substituted codons are set off by spaces): for L51I, L51I-R (5'-CCAGCTTGTTCCCT GAT AACTTCTTCCCCACC-3') and L51I-F (5'-GGTGGGGAAGAAGTT ATC AGGGAACAAGCTGG-3'); for L51T, L51T-R (5'-CCAGCTTGTTCCCT TGT AACTTCTTCCCCACC-3') and L51T-F (5'-GGTGGGGAAGAAGTT ACA AGGGAACAAGCTGG-3'); for A59V, A59V-R (5'-CCTCAAAGTTCTCAGT AAC GTCACCTCCAGCTTG-3') and A59V-F (5'-CAAGCTGGAGGTGAC GTT ACTGAGAACTTTGAGG-3'); for A59S, A59S-R (5'-CCTCAAAGTTCTCAGT AGA GTCACCTCCAGCTTG-3') and A59S-F (5'-CAAGCTGGAGGTGAC TCT ACTGAGAACTTTGAGG-3'); for G67A, G67A-R (5'-GGCATCTGTAGAGTG CGC GACATCCTCAAAGTTC-3') and G67A-F (5'-GAACTTTGAGGATGTC GCG CACTCTACAGATGCC-3'); and for G67S, G67S-R (5'-GGCATCTGTAGAGTG CGA GACATCCTCAAAGTTC-3') and G67S-F (5'-GAACTTTGAGGATGTC TCG CACTCTACAGATGCC-3'). After the site-directed mutagenesis, transformation, and plasmid preparation, each mutated plasmid (pBS/L51I, pBS/L51T, pBS/A59V, pBS/A59S, pBS/G67A, pBS/G67S) was treated with NdeI and HindIII.
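In QuikChange-style mutagenesis the forward and reverse primers of each pair are exact reverse complements of one another. A small helper can verify a pair; below it checks the L51I pair from the list above (with the codon-delimiting spaces removed):

```python
# QuikChange-style site-directed mutagenesis uses two fully
# complementary primers: the reverse primer is the reverse
# complement of the forward one. Helper to verify a pair.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    return seq.translate(COMP)[::-1]

# L51I primer pair from the Methods, spaces removed:
l51i_f = "GGTGGGGAAGAAGTTATCAGGGAACAAGCTGG"
l51i_r = "CCAGCTTGTTCCCTGATAACTTCTTCCCCACC"

assert revcomp(l51i_f) == l51i_r
print("L51I primers are exact reverse complements")
```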
Each NdeI-HindIII fragment of the pBS/LMW b 5 plasmid and of the mutated plasmids was inserted into the NdeI-HindIII site of the pET-28b(+) vector (Novagen, Merck, Darmstadt, Germany) to construct pET/HLMW b 5 , pET/L51I, pET/L51T, pET/A59V, pET/A59S, pET/G67A, and pET/G67S, respectively, to achieve efficient expression and easier purification of the recombinant protein. The pET-28b(+) vector contains a 6x His-tag moiety upstream of the NdeI-HindIII site and therefore adds an extension with the sequence MGSSHHHHHHSSGLVPRGSH at the NH2-terminus of the LMW b 5 protein (designated HLMW b 5 hereafter). Mutations were confirmed with an ABI PRISM 3100 Genetic Analyzer (Applied Biosystems Japan Ltd.) for both types of plasmids prepared from the pBS and pET vectors. Escherichia coli strain BL21(DE3)pLysS was transformed with pET/HLMW b 5 (or with one of the mutated pET plasmids) and was pre-cultured at 37°C in low-salt Luria-Bertani (LB) medium containing 30 μg/ml kanamycin and 34 μg/ml chloramphenicol. After the pre-culture, the HLMW b 5 protein (or each mutant protein) was produced by growing the transformed cells at 37°C in TB medium (12.0 g/L tryptone, 24.0 g/L yeast extract, 4 ml/L glycerol, 23.1 g/L KH2PO4, and 125.4 g/L K2HPO4) in the presence of 30 μg/ml kanamycin and 34 μg/ml chloramphenicol. Induction of protein expression was achieved by the addition of 200 μM (final) IPTG when the cells had grown to an O.D. of 0.6 at 600 nm. Then, the incubation temperature was lowered to 26°C. Cells were harvested 48 h after the addition of IPTG, frozen in liquid nitrogen and stored at -80°C until use.
The thawed cells were mixed with a lysis buffer (20 mM Tris-HCl buffer (pH 8.0) containing 0.5 mM EDTA) and disrupted by treatment with lysozyme (final, 1 mg/mL) and DNase (final, 50 μg/mL) in the presence of 1 mM phenylmethylsulfonyl fluoride, followed by sonication on ice with a model 250 sonifier (Branson Ultrasonic). The disrupted cells were centrifuged at 26,000 g for 20 min at 4°C, and the supernatant was saved as the crude extract. Purification of HLMW b 5 was conducted as follows. The crude extract was loaded onto a column of DEAE-Sepharose CL-6B previously equilibrated with 20 mM Tris-HCl (pH 8.0) buffer containing 0.5 mM EDTA. The HLMW b 5 adsorbed to the column as a reddish band. The column was washed with the same buffer containing 50 mM NaSCN, and the adsorbed HLMW b 5 was eluted with a linear gradient of NaSCN from 50 to 300 mM in the same buffer. Main fractions were collected based on SDS-PAGE analysis (12% gel) and the absorbance at 414 nm, and were concentrated to about 5 mL using an Amicon concentrator and a Millipore membrane (MWCO = 10,000). The concentrated HLMW b 5 was then subjected to affinity column chromatography on Ni-NTA agarose gel (QIAGEN) previously equilibrated with 50 mM sodium phosphate buffer (pH 8.0) containing 10 mM imidazole and 300 mM NaCl. The column was washed with 50 mM sodium phosphate buffer (pH 8.0) containing 20 mM imidazole and 300 mM NaCl. Finally, the adsorbed HLMW b 5 protein was eluted with 50 mM sodium phosphate buffer (pH 8.0) containing 250 mM imidazole and 300 mM NaCl, and the eluate was collected. Fractions that showed a single protein band on SDS-PAGE were pooled, concentrated, and gel-filtered into 50 mM sodium phosphate buffer (pH 7.0) on a PD-10 mini-column (Amersham Bioscience). The full-length form of human cytochrome b 5 was purified according to the procedure described previously [33].
Concentrations of purified recombinant proteins were determined spectrophotometrically from the absorbance at 423 nm of the dithionite-reduced form, using an extinction coefficient of 163 mM -1 cm -1 [ 34 ]. The protein concentration was determined with a modified Lowry method as previously described [ 35 ], with bovine serum albumin as a standard.

EPR spectroscopy

Oxidized HLMW b 5 samples (or mutants in the oxidized form) in 50 mM potassium-phosphate buffer (pH 7.0) were concentrated to about 200~500 μM with a 50-mL Amicon concentrator fitted with a membrane filter (Millipore PTTK04110; MWCO = 10,000). For HLMW b 5 and the G67A mutant, concentrated poly-L-lysine solution (5 mM; Sigma-Aldrich Japan K.K.; mol. wt. = 1,000~4,000, corresponding to 8~30 lysine residues) was added to a final concentration of 400 μM. The samples were introduced into EPR tubes and frozen in liquid nitrogen (77 K). EPR measurements were carried out at X-band (9.23 GHz) microwave frequency using a Varian E-109 EPR spectrometer with 100-kHz field modulation. An Oxford flow cryostat (ESR-900) was used for the measurements at 15 K. The microwave frequency was calibrated with a microwave frequency counter (Takeda Riken Co., Ltd., Model TR5212), and the strength of the magnetic field was determined with an NMR field meter (ECHO Electronics Co., Ltd., Model EFM 2000AX). The accuracy of the g-values was approximately ±0.01.

Cyclic voltammetry

All electrochemical measurements were performed as previously described [ 25 , 32 ] using a water-jacketed conical cell that allowed measurements at controlled temperatures with volumes as small as 150 μL. An ALS electrochemical analyzer (model 611A) was used for all measurements. All sample solutions (100 μM, heme basis, in 50 mM sodium phosphate buffer, pH 7.0) were purged with Ar gas before use and blanketed with Ar during the electrochemical determinations.
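Both the heme-basis sample concentrations used for voltammetry and the spectrophotometric determination described earlier in this section are straightforward applications of the Beer-Lambert law; a minimal sketch (the function name and the 1-cm default path length are our assumptions):

```python
def heme_conc_uM(a423: float, eps_mM_cm: float = 163.0, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: c = A / (eps * l).

    With eps in mM^-1 cm^-1 and the path length in cm, A/(eps*l) is in mM;
    multiplying by 1000 reports the heme concentration in micromolar.
    """
    return a423 / (eps_mM_cm * path_cm) * 1000.0

# e.g. an A423 of 0.163 in a 1-cm cuvette corresponds to 1 uM reduced heme
```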
For the measurements of the full-length form (1-134 aa) of human cytochrome b 5 , 50 mM sodium-phosphate buffer (pH 7.0) containing 0.5% (v/v) Triton X-100 was used. The Au electrode was derivatized with 100 mM 3-mercaptopropionate, as previously described [ 25 , 32 ]. Poly-L-lysine was added to a final concentration of 50~300 μM just before the measurements. The concentration of the poly-L-lysine solution was calculated assuming a formal mol. wt. of 4,000; the actual concentration of poly-L-lysine in the sample solution might therefore be higher than the indicated values. The average of the cathodic and anodic peak potentials was taken as the formal potential. All potentials were measured at 25°C versus an Ag/AgCl electrode with an internal filling solution of 3 M KCl saturated with AgCl and were then converted to the standard hydrogen electrode (SHE) scale.

Spectroscopic redox titrations

Spectroscopic redox titrations were performed essentially as described by Dutton [ 36 ] and Takeuchi [ 37 ], using a Shimadzu UV-2400PC spectrometer equipped with a thermostatted cell holder connected to a low-temperature thermobath (NCB-1200, Tokyo Rikakikai Co, Ltd, Tokyo, Japan). A custom anaerobic cuvette (1-cm light path, 5-ml sample volume) equipped with a combined platinum and Ag/AgCl electrode (6860-10C, Horiba, Tokyo, Japan) and a screw-capped side arm was used. Purified HLMW b 5 sample or its site-specific mutants (final, 15 μM), either in the presence or absence of poly-L-lysine (200 μM) in 50 mM sodium-phosphate buffer (pH 7.0), was mixed with redox mediators (anthraquinone-2,6-disulfonate, 20 μM; 1,2-naphthoquinone, 20 μM; phenazine methosulfate, 20 μM; duroquinone, 20 μM; 2-hydroxy-1,4-naphthoquinone, 20 μM; riboflavin, 20 μM). For the redox measurements of the full-length form of human cytochrome b 5 , 50 mM sodium-phosphate buffer (pH 7.0) containing 0.5% (v/v) Triton X-100 was used.
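Potentials in both the voltammetry and the titrations are read against an Ag/AgCl (3 M KCl) reference and reported versus SHE; the conversion, and the formal potential taken as the mean of the two peak potentials, are simple arithmetic. A sketch (the +210 mV offset is a commonly quoted approximate value for this reference at 25°C, supplied by us rather than taken from the text, and the example potentials are illustrative):

```python
# Approximate potential of an Ag/AgCl (3 M KCl) reference electrode vs SHE
# at 25 C; an assumed textbook value, not the authors' calibration.
AGCL_3M_KCL_VS_SHE_mV = 210.0

def to_she_mV(e_vs_agagcl_mV: float) -> float:
    """Convert a potential measured vs Ag/AgCl (3 M KCl) to the SHE scale."""
    return e_vs_agagcl_mV + AGCL_3M_KCL_VS_SHE_mV

def formal_potential_mV(e_pa_mV: float, e_pc_mV: float) -> float:
    """Formal potential as the average of the anodic and cathodic peak potentials."""
    return 0.5 * (e_pa_mV + e_pc_mV)
```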
The sample was kept under a flow of moistened Ar gas to exclude dioxygen and was continuously stirred with a small magnetic stirrer (CC-301, SCINICS, Tokyo, Japan) inside the cuvette. Reductive titration was performed at 25°C by adding small aliquots of sodium dithionite (4 or 16 mM) solution through a needle in the rubber septum on the side arm; for the subsequent oxidative titration, potassium ferricyanide (4 or 16 mM) was used as the titrant. Visible absorption spectra and redox potentials were recorded at appropriate intervals. The changes in absorbance (A555.0 minus A565.6; the peak of the reduced form minus an isosbestic point of HLMW b 5 ) were corrected for dilution and analyzed with Igor Pro (v. 6.03A2) using the Nernst equation with a single redox component.
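The single-component Nernst fit performed here in Igor Pro can be reproduced in a few lines of code; a sketch assuming an n = 1 couple at 25°C, fitted to synthetic data rather than the authors' measurements (the golden-section minimization is our choice of fitting method):

```python
import math

RT_OVER_F_mV = 25.693  # thermal voltage RT/F at 25 C, in mV

def frac_reduced(e_mV: float, e_m_mV: float, n: int = 1) -> float:
    """Nernst equation for a single redox component: fraction reduced at potential E."""
    return 1.0 / (1.0 + math.exp(n * (e_mV - e_m_mV) / RT_OVER_F_mV))

def fit_midpoint(points, lo=-200.0, hi=200.0, n=1, tol=1e-6):
    """Least-squares estimate of Em from (E, normalized dA) pairs.

    The sum of squared errors is unimodal in Em for clean single-component
    data, so a golden-section search suffices.
    """
    def sse(e_m):
        return sum((frac_reduced(e, e_m, n) - f) ** 2 for e, f in points)
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# synthetic titration around Em = -3.2 mV (the value reported in the Results)
data = [(e, frac_reduced(e, -3.2)) for e in range(-150, 151, 10)]
```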
Results

Purification of the soluble domain of human cytochrome b 5 (HLMW b 5 ) and its mutants

Purification of HLMW b 5 and its site-specific mutants was successful except for the L51T mutant, for which a heme-bound holo-form could not be obtained. We confirmed by SDS-PAGE analysis and CBB-250 staining that a sufficient amount of the protein corresponding to HLMW b 5 was produced in E. coli cells upon addition of IPTG. Addition of excess heme solution during the disruption of the E. coli cells to reconstitute the holo-form was unsuccessful, suggesting that the heme pocket of the L51T mutant was perturbed significantly and was not suitable for accommodation of the heme prosthetic group, leading to a denatured form. Thus, we did not pursue the L51T mutant further in the present study.

Properties of the soluble domain of human cytochrome b 5 (HLMW b 5 ) and its mutants

The purified HLMW b 5 showed visible absorption spectra characteristic of native cytochrome b 5 , with absorption peaks at 413 nm in the oxidized form and at 555, 526, and 423 nm in the reduced form (spectra not shown). Purified HLMW b 5 showed a single protein-staining band (CBB-250 staining) upon SDS-PAGE (12% gel) analysis with an apparent molecular size of 16.5 kDa. This value was, however, much larger than the value expected (13548.91 Da) for the NH 2 -terminal extension (20 amino acid residues, containing the 6x-His-tag moiety) plus the soluble domain (1-99 aa) of human cytochrome b 5 . To clarify the biochemical nature of HLMW b 5 , we conducted MALDI-TOF-MS analyses. The untreated HLMW b 5 sample showed a single peak at 13418 m/z corresponding to a mono-protonated form; a doubly protonated form gave a weak peak at 6709 m/z. This result suggested that a post-translational modification ( i.e. , removal of the initial Met residue) had occurred in HLMW b 5 .
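The inference that the initiator Met was removed can be checked arithmetically against the expected mass quoted above (the average Met residue mass and the proton mass are standard values supplied by us):

```python
MET_RESIDUE_AVG_DA = 131.20   # average mass lost on cleavage of the N-terminal Met
PROTON_DA = 1.008             # added in the singly protonated [M+H]+ MALDI ion

expected_full_length = 13548.91  # 6xHis-tagged soluble domain, from the text
expected_minus_met = expected_full_length - MET_RESIDUE_AVG_DA
expected_mh = expected_minus_met + PROTON_DA
# expected_mh comes out near 13418.7, consistent with the observed 13418 m/z peak
```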
MALDI-TOF-MS analyses of the tryptic peptides of HLMW b 5 (data not shown) proved that the Met residue at the initiation site was missing. We concluded that the purified HLMW b 5 protein corresponds to residues 2-119 of HLMW b 5 (theoretical molecular weight, 13417.72 Da). All the purified mutants showed UV-visible absorption spectra very similar to those of HLMW b 5 , indicating that the site-specific mutations around the heme-binding pocket (except for the L51T mutant) did not significantly affect the coordination or the electronic structure of the heme moiety.

EPR spectroscopy of HLMW b 5 and its mutants

The EPR spectrum of oxidized HLMW b 5 measured at 15 K showed g z = 3.03, g y = 2.22, and g x = 1.43 (Figure 2A ; trace a ), very close to the values reported for rat [ 38 ], rat outer mitochondrial membrane (OM) [ 39 ], and pig [ 40 ] cytochromes b 5 , for human LMW b 5 [ 32 ] in which the 6xHis-tag sequence (20 aa) at the NH 2 -terminal region is not present, and for human erythrocyte cytochrome b 5 [ 41 ]. However, it was slightly different from the values reported for recombinant human erythrocyte cytochrome b 5 (g z = 3.06, g y = 2.22, and g x = 1.42) [ 42 ]. It must be noted that there were no high-spin signals around g~6 nor signals from adventitiously bound non-heme iron at g = 4.3 in the spectra (spectra not shown) [ 38 ]. All the purified mutants showed EPR spectra very similar to that of HLMW b 5 , as shown in Figure 2A . Closer examination indicated that the G67A mutant showed a slight perturbation of its heme coordination, with g z = 3.06 and g y = 2.20, close to the values for house fly cytochrome b 5 [ 43 ]. These results confirmed that the site-specific mutations introduced around the heme-binding pocket to modulate its hydrophobicity did not significantly affect the coordination or the electronic structure of the heme prosthetic group.
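The g-values quoted throughout this section follow from the EPR resonance condition h·ν = g·μB·B; a sketch using CODATA constants (the field value in the example is illustrative, chosen to match gz ≈ 3.03 at the 9.23 GHz frequency stated in the Methods):

```python
PLANCK_J_S = 6.62607015e-34        # Planck constant, J s
BOHR_MAGNETON_J_T = 9.2740100783e-24  # Bohr magneton, J/T

def epr_g_value(freq_hz: float, field_tesla: float) -> float:
    """Resonance condition h*nu = g*muB*B, solved for g."""
    return PLANCK_J_S * freq_hz / (BOHR_MAGNETON_J_T * field_tesla)

# at 9.23 GHz, a low-spin ferric gz ~ 3.03 resonates near 0.218 T (~2180 G),
# while a g = 2 signal sits near 0.330 T
```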
For HLMW b 5 and the G67A mutant, the effects of the addition of poly-L-lysine (final concentration, 400 μM) on the EPR spectrum were examined; however, there was no apparent shift of the respective g-values (spectra not shown).

Cyclic voltammetry of HLMW b 5 and its mutants

The Au electrode pre-treated with 3-mercaptopropionic acid gave reversible voltammetric responses for the HLMW b 5 solution, but only in the presence of poly-L-lysine; without poly-L-lysine, there was no peak current. At least 50 μM poly-L-lysine was required to observe a stable peak current (data not shown). In Figure 3A , a typical voltammogram for HLMW b 5 in the presence of 200 μM poly-L-lysine is shown. A plot of the square root of the scan rate vs . peak current (I pa ) (or I pc , result not shown) was linear for scan rates up to 200 mV/sec and beyond (Figure 3B ), indicating a diffusion-controlled reaction. The half-wave potential (corresponding to the midpoint potential) was estimated as -19.5 mV ( vs . SHE), close to the values for full-length human cytochrome b 5 (-20.5 mV) and LMW b 5 without the 6xHis-tag moiety (-21 mV) [ 32 ] and for bovine liver cytochrome b 5 (-6 mV, -14 mV) [ 44 ] measured under similar experimental conditions (Table 1 ). These results indicated that the presence of the 6xHis-tag moiety or the COOH-terminal hydrophobic transmembrane segment does not significantly affect the redox properties of the hydrophilic heme-binding domain of HLMW b 5 . However, it must be noted that, in the case of full-length human cytochrome b 5 (-20.5 mV), we observed relatively large peak-separation values and, more significantly, the plot of the square root of the scan rate vs . peak current was not clearly linear. This might be due to the presence of the detergent Triton X-100 (0.5~1.0%), which may interfere with the smooth diffusion of cytochrome b 5 molecules at the electrode surface by forming micelles that incorporate the COOH-terminal hydrophobic segments.
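The diffusion-control diagnostic used above — peak current proportional to the square root of the scan rate, as expected for Randles-Sevcik behavior — can be checked with an ordinary least-squares fit; the currents below are hypothetical values, not the authors' data:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope, intercept, and r^2 for ip vs sqrt(v)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# scan rates (mV/s) and hypothetical peak currents (uA) obeying ip = 0.5 * sqrt(v);
# a high r^2 with near-zero intercept supports a diffusion-controlled process
v = [10, 25, 50, 100, 200]
ip = [0.5 * math.sqrt(x) for x in v]
slope, intercept, r2 = linear_fit([math.sqrt(x) for x in v], ip)
```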
As noted previously, the voltammetric response of outer mitochondrial membrane (OM) cytochrome b 5 measured with an Au electrode pre-treated with 3-mercaptopropionic acid (or similar thiol-containing reagents) was very dependent on the concentration of multivalent ions in the sample solution [ 25 ]. It was postulated that multivalent cations could bind to the protein surface and to the electrode surface simultaneously and thereby allow the negatively charged protein to approach the negatively charged electrode [ 25 ]. This phenomenon was termed "ion gating" [ 45 ]. Therefore, we conducted detailed analyses of the dependence of the half-wave potential (E 1/2 ) of HLMW b 5 on the concentration of poly-L-lysine in the range of 50~300 μM (Figure 4 ). The half-wave potential (E 1/2 ) shifted in the positive direction as the concentration of poly-L-lysine increased and reached a plateau around 200 μM poly-L-lysine at a value of about -20 mV (Figure 4 line (a)). Rivera et al . reported that electron transfer between the negatively charged electrode and the negatively charged OM cytochrome b 5 was promoted by the addition of Mg 2+ or Ca 2+ instead of poly-L-lysine [ 25 ]. In the present study, however, Mg 2+ or Ca 2+ (~20 mM) did not produce a reversible cyclic voltammogram of HLMW b 5 ; rather, these cations caused precipitation of the protein in the sample solution. Therefore, we did not further pursue the effects of these cations on the cyclic voltammogram in the present study. We then measured the cyclic voltammograms for the five site-specific mutants (L51I, A59V, A59S, G67A, G67S) in the presence of poly-L-lysine at different concentrations (50~300 μM), and the apparent half-wave potentials (E 1/2 ) were calculated (Figure 4 ; Table 1 ). A typical result, for the A59S mutant, is shown in Figure 4 line (b).
In this case, the half-wave potential shifted positively as the concentration of poly-L-lysine increased and, at 200 μM poly-L-lysine, reached a plateau as observed for wild-type HLMW b 5 (Figure 4 line (a)). The maximum value was around -30 mV. A similar concentration dependence was also observed for the G67S and G67A mutants (Figure 4 lines (e) and (f)), although the G67A mutant showed a significant negative shift in its half-wave potentials (Figure 4 line (e)). It is noteworthy that the concentration required to reach a plateau was around 200 μM in most of the samples measured in the present study; this value is consistent with the previous proposal of a 1:2 OM cytochrome b 5 -poly-L-lysine complex [ 25 ]. For the L51I and A59V mutants, however, no dependence of the half-wave potential on the poly-L-lysine concentration was observed: in these two mutants, the half-wave potential was around -30 mV irrespective of the concentration of poly-L-lysine (Figure 4 lines (c) and (d)).

Spectroscopic electrochemical titrations of HLMW b 5 and its mutants

The spectroscopic redox behavior of HLMW b 5 (Figure 5 ) showed good agreement between the points obtained during reductive and oxidative titrations (Figure 5 ; solid circles for the reductive phase and × for the oxidative phase). The apparent midpoint potential was estimated to be around 0 mV at pH 7.0. Least-squares fitting using the Nernst equation with a single redox component gave a midpoint potential of -3.2 mV (Figure 5 ; a solid curve fitted to the solid circles), consistent with a previous report on human erythrocyte cytochrome b 5 (-2 mV) determined by a similar method [ 46 ]. We also measured the midpoint potential of the full-length form of human cytochrome b 5 (under identical buffer conditions but in the presence of 0.5% (v/v) Triton X-100) and found it to be -2.6 mV (data not shown).
This result confirmed that the presence of the 6xHis-tag sequence (20 aa) at the NH 2 -terminal region or the COOH-terminal hydrophobic transmembrane segment does not significantly affect the redox properties of the hydrophilic heme-binding domain of HLMW b 5 . Midpoint potentials of the site-specific mutants were obtained similarly; the values are tabulated in Table 2 . The lowest value was found for the L51I mutant, but all the midpoint potentials fell within a relatively narrow range of 7 mV, indicating that the site-specific mutations introduced in the present study did not significantly affect the static redox properties. Next, we examined the effect of the addition of poly-L-lysine (final concentration, 200 μM) on the redox potentials of HLMW b 5 and its site-specific mutants determined by the static equilibrium method. In the case of HLMW b 5 , the effect was evident (Figure 5B ; solid squares for the reductive phase and + for the oxidative phase). Least-squares fitting using the Nernst equation with a single redox component showed that the addition of poly-L-lysine caused a positive shift of the midpoint potential by ~20 mV (from -3.2 mV to +16.5 mV). Similar positive shifts of the midpoint potential upon addition of poly-L-lysine were found for all the samples examined in the present study, including the full-length cytochrome b 5 and the five site-specific mutants (Table 2 ). It is noteworthy that the shifts were close to +20 mV except for the G67A mutant.
Discussion

Relative importance and roles of the three conserved residues

The three conserved hydrophobic amino acid residues (Leu51, Ala59, and Gly67) constituting the heme-binding pocket of cytochrome b 5 had not been investigated previously, despite their relatively high conservation among the cytochrome b 5 protein family (Figure 1A ). The most significant effect of mutation was observed for the L51T mutant, in which the heme-pocket moiety might be perturbed significantly and rendered unsuitable for accommodation of a heme prosthetic group, leading to an apo-form (or a denatured form) when expressed in E. coli cells. Introduction of a hydrophilic Thr residue at the bottom of the hydrophobic heme pocket may be too disruptive to maintain the original native structure, suggesting a critical role for this hydrophobic residue (Figure 1B ). Our computer modeling study indicated that the L51T mutant would have a larger cavity in the heme pocket above the heme plane, consistent with this view (see Fig. S1(A and B); additional file 1 ). On the other hand, introduction of a Ser (or Ala) residue in place of Gly67 did not cause such an effect within the heme pocket, indicating that a hydrophilic residue at the entrance of the pocket is tolerable and therefore does not cause significant perturbation (Figure 1B ). The results of the computer modeling study were consistent with this view (see Fig. S1(A and C); additional file 1 ). The Ala59 residue resides at the very bottom of the heme pocket; the computer modeling study indicated that substitution with Ser (or Val) did not cause any substantial change in the heme pocket either. EPR spectra of the oxidized forms of these mutants (except for L51T) indeed showed spectra similar to that of HLMW b 5 (Figure 2 ).
Only the G67A mutant showed a slight but distinct perturbation in its EPR spectrum (g z = 3.06, g y = 2.20) (Figure 2 ), suggesting some important role(s) of the Gly67 residue, which is adjacent to the axial His68 residue. Taken together, these observations indicated that the three conserved hydrophobic amino acid residues (Leu51, Ala59, and Gly67) are not particularly important for direct interactions with the heme prosthetic group but are very important for maintaining the hydrophobic and structurally organized environment around it. It may be noteworthy that the naturally occurring human cytochrome b 5 T60A mutant [ 12 ] displayed enhanced susceptibility to proteolytic degradation, indicating a destabilized structure around its heme pocket.

Cyclic voltammetry of cytochrome b 5

In the present study, we observed the reverse of the phenomenon reported for OM cytochrome b 5 [ 25 ], in which the half-wave potential was about 110 mV higher than the midpoint potential determined by the equilibrium method (Tables 1 and 2 ). In our case, the half-wave potential of HLMW b 5 (-19.5 mV; in the presence of 200 μM poly-L-lysine) was about 16 mV lower than the midpoint potential measured by the equilibrium method (-3.2 mV) (Tables 1 and 2 ), although the half-wave potential itself shifted positively as the concentration of poly-L-lysine was increased, as found for OM cytochrome b 5 [ 25 ], reaching a plateau of -17.5 mV. A redox behavior similar to that of our HLMW b 5 was reported previously for the bovine liver cytochrome b 5 tryptic fragment, for which the midpoint potential determined by the equilibrium method (in the presence of 20 mM Mg 2+ ) was +15 mV, whereas the half-wave potential under similar conditions was -6 mV, a negative shift of -21 mV (Tables 1 and 2 ) [ 44 ].
The difference between the half-wave potential and the midpoint potential determined by the equilibrium method for the bovine liver cytochrome b 5 tryptic fragment was ascribed to the different surface properties of the electrodes used [ 44 ]. Following the proposal by Wang et al . [ 44 ], our present results can be explained reasonably. In cyclic voltammetry, poly-L-lysine binds simultaneously to the protein moiety and to the carboxy groups of β-mercaptopropionic acid on the surface of the electrode, whereas in the spectroscopic equilibrium method, poly-L-lysine binds only to the protein and electron transfer occurs directly between the electrode and the protein. In cyclic voltammetry, therefore, the interaction of poly-L-lysine with the carboxylates of the electrode-coating β-mercaptopropionic acid decreases its effective density of positive charge, and the half-wave potential is consequently more negative than the potential measured by the spectroscopic equilibrium method. Additionally, dehydration of the heme edge through exclusion of water from the complex interface might also contribute significantly to the positive shift of the half-wave potential [ 29 ]. However, the differences between the half-wave potential and the midpoint potential determined by the equilibrium method varied considerably among OM cytochrome b 5 , human cytochrome b 5 , and bovine liver cytochrome b 5 , suggesting that the exact mechanism determining the redox potential is very complex; the reality may lie between the two simplified possibilities. The gross tertiary structures around the heme moiety are well conserved among OM cytochrome b 5 , human cytochrome b 5 , and bovine liver cytochrome b 5 (Figure 1B and 1C ) and, therefore, the distributions of acidic residues on the surface of the heme domain are also well conserved (Figure 1A and 1C ).
Accordingly, the scheme proposed for the formation of the complex between OM cytochrome b 5 and poly-L-lysine should apply as well to the protein surface of HLMW b 5 delineated by the exposed heme propionate and the corresponding acidic residues (Glu49, Glu53, Glu61, and Asp65), and slight conformational differences around the heme propionate group would then be a very important factor in controlling the heme redox potential.

Effects of site-specific mutations within the heme pocket on the cyclic voltammetry

Another factor important for the regulation of the heme redox potential is the hydrophobicity around the heme pocket [ 29 ]. To evaluate such a hydrophobic effect within the heme pocket on the redox potential, we produced five site-specific mutants expected to modulate the hydrophobicity in different ways. However, the midpoint potentials of these mutants showed only slight variations, ranging from -5 to -9 mV. This result is consistent with our computer modeling study, which indicated that the site-specific mutations did not cause any substantial changes in the heme pocket except for the L51T mutant (see Fig. S1(A and B); additional file 1 ). On the other hand, the half-wave potentials of these mutants showed a much larger variation (-29~-43 mV) and were more negative than that of HLMW b 5 (-19.5 mV). More interestingly, the half-wave potentials of these mutants fell into two groups, one showing a clear dependence on the poly-L-lysine concentration (HLMW b 5 , A59S, G67A, and G67S) and the other showing no such dependence (L51I and A59V) (Figure 4 ). The curvatures of the titration curves for those showing the dependence on the poly-L-lysine concentration were similar to one another (Figure 4 ), indicating that a similar mechanism for controlling the redox potential is operative in these proteins.
Therefore, for these mutants, very similar interactions between poly-L-lysine and the protein surface of HLMW b 5 delineated by the exposed heme propionate and the acidic residues (Glu49, Glu53, Glu61, and Asp65) (Figure 1C ) might occur, as proposed originally for rat OM cytochrome b 5 . Following this scenario, one may argue that the large variation in the half-wave potential could be ascribed to differences in the dehydration around the heme moiety upon complex formation with poly-L-lysine [ 29 ]. On the other hand, the mutants showing no dependence on the poly-L-lysine concentration ( i.e ., L51I and A59V) might reflect a difference in the microenvironment around the heme propionate group itself, caused by a slight change in the heme cavity structure. Alternatively, since both Leu51 and Ala59 are located at the bottom of the heme cavity (Figure 1B ), a slight conformational change upon mutation might propagate to the local negative surface structure around Glu49, Glu53, Glu61, and Asp65 (Figure 1C ), resulting in the independence from the poly-L-lysine concentration. However, our computer modeling study did not support either of these possibilities, indicating the limitations of this kind of modeling study. One may also ask about the cause of the significant negative shift in the half-wave potential of the G67A mutant (Figure 4 line (e); Table 1 ). The most likely explanation would be a change in the hydrophobicity within the heme pocket, but we should not exclude the possibility of a slight structural change caused by the replacement. Indeed, the G67A mutant also showed a distinctly negative value compared to HLMW b 5 in the midpoint-potential measurement (Table 2 ). However, the G67S mutant, which might be expected to have the opposite effect of the G67A mutation, actually showed an intermediate value between those of HLMW b 5 and the G67A mutant.
Therefore, the significant negative shift would be caused not only by changes in the hydrophobicity but also by other factors, including changes in the heme coordination (as evidenced by the slight shifts of the g-values in its EPR spectrum) (Figure 2 trace d ). Furthermore, the binding mode of poly-L-lysine itself might be altered by a slight change in the local negative surface structure, lowering the dehydration effect at the heme edge upon complex formation [ 29 ].

Correlations between the half-wave potential and the midpoint potential

Interestingly, when the midpoint potential measured in the absence of poly-L-lysine was plotted against the half-wave potential for HLMW b 5 and each mutant, there was a good correlation between the two values (Figure 6 line a ), the former being always 16~32 mV more positive than the latter. When the midpoint potential measured in the presence of poly-L-lysine (200 μM) was plotted against the half-wave potential in the same way, there was likewise a good correlation, with the midpoint-potential values shifted further upward by 10~20 mV (Figure 6 line b ). These observations suggested that both the binding of poly-L-lysine and changes in the hydrophobicity around the heme moiety (both within the heme pocket and at the exposed heme edge) regulate the half-wave potential of cytochrome b 5 , and that the overall redox potentials are modulated by both factors to similar extents.
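The correlation described above can be quantified with a Pearson coefficient; the paired values below are illustrative (a uniform +20 mV offset applied to plausible half-wave values), not the measured data of Figure 6:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# illustrative (E1/2, Em) pairs with Em sitting a constant 20 mV above E1/2,
# mimicking the offset behavior described in the text
half_wave_mV = [-19.5, -29.0, -35.0, -43.0, -31.0, -38.0]
midpoint_mV = [x + 20.0 for x in half_wave_mV]
```

A constant offset between the two scales leaves the correlation coefficient at exactly 1, which is why an offset plus high correlation (rather than identity of the two potentials) is the signature discussed here.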
Conclusions

The present study showed that simultaneous measurement of the midpoint potential and the half-wave potential can be a useful methodology for analyzing the static and dynamic redox properties of various hemoproteins, including cytochrome b 5 , provided the results are interpreted with appropriate caution. In actual biological electron transfer, the reduction potential of cytochrome b 5 might be modulated differently upon the formation of a transient complex with a partner protein (cytochrome c , hemoglobin, or cytochrome b 5 reductase). The modulation might be mediated by a gross conformational change in the tertiary structure, by slight changes in the local structure including surface charges, or by changes in the hydrophobicity around the heme moiety (both within the heme pocket and at the exposed heme edge), as found for the interaction with poly-L-lysine. Therefore, a system consisting of cytochrome b 5 and its partner protein(s) or small peptide(s) might be a good paradigm for the study of biological electron transfer reactions.
Background

Cytochrome b 5 performs central roles in various biological electron transfer reactions, in which the difference in redox potential between the two reactant proteins provides the driving force. Redox potentials of cytochromes b 5 span a very wide range of ~400 mV, within which surface charge and hydrophobicity around the heme moiety are proposed, based on previous site-directed mutagenesis analyses, to play crucial roles.

Methods

The effects of mutations at conserved hydrophobic amino acid residues constituting the heme pocket of cytochrome b 5 were analyzed by EPR and electrochemical methods. Cyclic voltammetry of the heme-binding domain of human cytochrome b 5 (HLMW b 5 ) and its site-directed mutants was conducted using a gold electrode pre-treated with β-mercaptopropionic acid, in the presence of positively charged poly-L-lysine. Static midpoint potentials were measured under similar conditions.

Results

Titration of HLMW b 5 with poly-L-lysine indicated that the half-wave potential shifted upward to -19.5 mV as the concentration reached that required to form a complex. Midpoint potentials of -3.2 and +16.5 mV were obtained for HLMW b 5 in the absence and presence of poly-L-lysine, respectively, by spectroscopic electrochemical titration, suggesting that positive charges introduced by the binding of poly-L-lysine around an exposed heme propionate resulted in a positive shift of the potential. Analyses of the five site-specific mutants showed a good correlation between the half-wave and the midpoint potentials, in which the former were 16~32 mV more negative than the latter, suggesting that both the binding of poly-L-lysine and the hydrophobicity around the heme moiety regulate the overall redox potentials.

Conclusions

The present study showed that simultaneous measurement of the midpoint and the half-wave potentials can be a useful methodology for analyzing the static and dynamic redox properties of various hemoproteins, including cytochrome b 5 .
The potentials might be modulated by a gross conformational change in the tertiary structure, by a slight change in the local structure, or by a change in the hydrophobicity around the heme moiety, as found for the interaction with poly-L-lysine. Therefore, a system consisting of cytochrome b 5 and its partner proteins or peptides might be a good paradigm for studying biological electron transfer reactions.
List of abbreviations used

LMW b 5 : human liver microsomal cytochrome b 5 soluble domain (amino acid residues Met1 to Leu99); HLMW b 5 : human liver microsomal cytochrome b 5 soluble domain with an additional extension of the sequence MGSSHHHHHHSSGLVPRGSH at the NH 2 -terminus of the LMW b 5 protein; EPR: electron paramagnetic resonance; OM: outer mitochondrial membrane; MALDI-TOF: matrix-assisted laser desorption ionization-time of flight; SHE: standard hydrogen electrode.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

This study was designed and supervised by FT and MT. Experiments were performed by AT and YS. Analysis of the data was performed by AT, YS, MM and MT. EPR experiments and the data analysis were performed by HH. MT drafted the manuscript and all authors read and approved the final version.

Supplementary Material
Acknowledgements This work was supported by Grants-in-Aid for Scientific Research on Priority Areas (System Cell Engineering by Multi-scale Manipulation; 18048030 and 20034034 to M.T.) from the Japanese Ministry of Education, Science, Sports and Culture and by Grant-in-Aid for Scientific Research (C) (22570142 to M.T.) from Japan Society for the Promotion of Science. We thank Dr. Park (Yokohama City University, Kanagawa, Japan) for helping us to perform the computer modeling study on cytochrome b 5 mutants.
J Biomed Sci. 2010 Dec 4; 17(1):90
Background

The organs of vertebrates are typically composed of epithelial and mesenchymal tissues, and signaling between these two tissues governs many aspects of organogenesis, from the initiation of organ development to the terminal differentiation of organ-specific cell types. The development and differentiation of the mouse tooth germ, like that of many other organs, depends on such inductive interactions. A large number of genes have been shown to be related to tooth morphogenesis [ 1 - 8 ]. However, the precise signaling pathways involved in the initiation, growth, and differentiation of the tooth germ have not yet been fully elucidated, and there may be additional odontogenesis-related genes that have not yet been identified. A cDNA subtraction between the mandibles of embryonic day 10.5 (E10.5) and E12.0 mice was conducted to identify genes that might be related to tooth morphogenesis. Thirty-five highly expressed positive clones were obtained from the E10.5 mandible by colony array screening, and 47 highly expressed positive clones were obtained from the E12.0 mandible [ 9 ]. The expression of several of these genes is closely associated with the developing tooth germ [ 7 , 8 , 10 - 12 ]. Protogenin (Prtg) [ 13 , 14 ], which we first designated Clone 15 , is one of the genes highly expressed in the mouse mandible at E10.5 [ 9 ]. Prtg belongs to the immunoglobulin superfamily (IgSF), one of the largest protein families in the mammalian genome [ 15 , 16 ]. This family comprises transmembrane and cell surface proteins, and its members are characterized by immunoglobulin (Ig) domains in their extracellular regions. IgSF members act as adhesion molecules and can also transduce signals upon ligand stimulation. Many members of the IgSF are involved in tissue formation and morphogenesis during embryonic development [ 15 , 16 ]. However, the functions of Prtg have thus far not been elucidated.
The constituents of a subgroup of the IgSF have recently received attention because of their roles in the migration and guidance of axon growth during development of the vertebrate nervous system. One of the representative genes in this subgroup is the Deleted in Colorectal Cancer (DCC) gene, and therefore this subgroup is referred to as DEAL (DCC et al.); it includes DCC, Neogenin [ 17 ], Punc [ 18 ], and Nope [ 19 ]. DCC was originally identified as a tumor suppressor gene [ 20 ], but it has recently been shown to act as a Netrin receptor for cell migration and axon guidance cues [ 19 ]. Like DCC, Neogenin is a Netrin receptor. Punc [ 21 ] and Nope are prominently expressed by differentiating neurons in the central nervous system, and they are involved in the early stages of nerve tissue morphogenesis. Prtg belongs to the DEAL subgroup because its structure is highly homologous to those of these proteins. There are two reports in which the expression of Prtg was described in chick [ 13 ], mouse, and zebrafish [ 14 ]. These reports demonstrated that Prtg is expressed in the central nervous system in the early developmental stages of the embryo. Vesque et al. [ 14 ] demonstrated that this gene is expressed in the first branchial arch as well as in the central nervous system. This finding supported a previous study [ 9 ] in which Prtg was found to be preferentially expressed in the first branchial arch prior to tooth germ formation. Therefore, it is possible that the Prtg gene is related to the morphogenesis of the tooth germ, because the tooth germ develops under the influence of cells in the first branchial arch. This study characterizes the expression pattern of Prtg in the developing tooth germ, and examines the possible functional implications of this gene in tooth germ morphogenesis.
Methods Animals The embryos of BALB/c mice at E10.5, E12.0, E14.0, E16.0, and E18.0 of gestation were used in this study. The adult BALB/c mice were obtained from Charles River Laboratories (Charles River Japan Incorporated). Female BALB/c mice (10-30 weeks) were caged together with male mice. After 3 hr, successful insemination was determined based on the presence of a post-copulatory plug in the vagina. The day on which the post-copulatory plug was recognized was defined as E0. Male mice (3, 5, and 10 weeks) were also used to examine the expression of Prtg. All mouse experiments and housing were performed in accordance with the guidelines of the Animal Center of Kyushu University. cDNA subtraction and cloning procedures Prtg was identified as a novel gene, termed Clone 15 , in a previous study [ 9 ]. Based on the sequence of a fragment of this gene, 5'-/3'-RACE was performed to determine the full-length sequence (SMART RACE cDNA Amplification Kit; Clontech). Prtg DNA sequencing was performed with the dideoxynucleotide termination method using a DNA sequencer 373 S (Applied Biosystems). A search of the GenBank online database (using NCBI/BLAST/blastn suite: BLASTN programs) revealed no matching entries at that time. In a later search, the sequence corresponded with parts of GenBank accession numbers AK036172 , AK083540 , and NM_175485 . AK036172 includes a polyadenylation signal site, and NM_175485 includes a signal peptide (SP) sequence. Our sequencing data were identical to those of the recently updated NM_175485.4. Structural analysis based on the amino acid alignment A domain analysis of the Prtg protein sequence was carried out based on the amino acid alignment using an NCBI conserved domain search with the online NCBI program http://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml . The signal peptides and a transmembrane region were predicted using the SOSUI system http://bp.nuap.nagoya-u.ac.jp/sosui/ . 
Intracellular localization of recombinant Prtg Three different plasmids expressing an EGFP-fusion protein were prepared. The full-length Prtg cDNA, or the Prtg cDNA with the SP1 or SP2 region deleted, was inserted into pEGFP-N1 vectors (Clontech), as shown in Figure 2A . These subcloned vectors were termed Prtg-full, Prtg-ΔSP1, and Prtg-ΔSP2. MISK81-5, which is an oral squamous cell carcinoma cell line established in our laboratory [ 22 ], was stably transfected with Prtg-full, Prtg-ΔSP1, Prtg-ΔSP2, or an empty vector using Lipofectamine 2000 (Invitrogen). These transfectants were isolated after selection with 800 μg/ml G418 for 2-3 weeks. Immunofluorescent staining with an anti-cadherin antibody and Alexa Fluor 594 rabbit anti-mouse IgG (Invitrogen) was performed on the cells transfected with the Prtg-full plasmid. The fluorescent images were observed under a fluorescent microscope and acquired using the digital imaging software program, AxioVision version 3.1 (Carl Zeiss). Specific antibodies against Prtg Rabbit anti-Prtg polyclonal antibodies were generated against synthetic peptides based on the following regions: 1) PKDASESNQRPKRLDSSNAKV (Entrez Protein database accession number NP_780694 aa 910-930), 2) STPPTSNPLAGGDSDGDAAPKKHGD (aa 1139-1163), and 3) DAAPKKHGDPAQPLPA (aa 1156-1171). The first amino acid sequence is present in the extracellular domain near the transmembrane region, whereas the two latter sequences correspond to sequences in the cytoplasmic domain. The three antibodies were first tested for their reactivity. The antibody for aa 1156-1171 was selected and used for all subsequent experiments because it gave a more specific signal. Western blot analysis A Western blot analysis for Prtg protein levels was performed on resolved proteins isolated from the homogenates of E10.5 and E12.0 mandibles and E18.0 tooth germ. 
These tissues were lysed in RIPA buffer (50 mM Tris pH 8.0, 150 mM NaCl, 1% Triton X-100, 1 mM EDTA pH 8.0, 0.1% SDS) supplemented with a protease inhibitor cocktail (50 μM), lactacystin (20 μM), and PMSF. The protein samples were separated on a 7% SDS-polyacrylamide gel and electrotransferred to an Immun-Blot PVDF Membrane (Bio-Rad). The membrane was probed with the antibody against Prtg for 1 hr at room temperature, and incubated for 1 hr with a secondary anti-rabbit IgG conjugated with horseradish peroxidase (Amersham). The membrane was developed using the enhanced chemiluminescence (ECL) Plus system (Amersham). Emitted light was detected using a cooled CCD-camera (LAS-1000; Fujifilm). Glycosidase digestion was performed with an N-glycosidase F deglycosylation kit (Roche) according to the manufacturer's instructions before loading the samples on the gel. Temporal expression analysis of Prtg mRNA by semi-quantitative RT-PCR Total RNA was extracted from E10.5, E14.0, and E18.0 mice, and from various organs of the 3-, 5-, or 10-week-old mice using an SV Total RNA Isolation system (Promega). Reverse transcription was performed to synthesize cDNAs using the Superscript III reverse transcriptase (Invitrogen). The cDNAs were amplified by PCR to compare the relative expression levels. The forward and reverse primer pairs for Prtg and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were: Prtg 5'-CGA AGC AAA GCC AGG AAG TC-3' and 5'-GCT TGT TGT GAA TCC CTG AGC G-3', and GAPDH 5'-ACC ACA GTC CAT GCC ATC AC-3' and 5'-TCC ACC ACC CTG TTG CTG TA-3'. The PCR products were separated by electrophoresis on a 2% agarose gel. To confirm that the PCR products were derived from cDNA and not from genomic DNA, the primer pairs were designed to flank introns. 
In Situ Hybridization The section preparation, the probe labeling, the specificity of the DIG-labeled in situ RNA probes, and the ISH methods were carried out as described in our previous studies [ 6 , 7 , 12 ]. Prtg antisense probes were designed against a sequence in the C-terminal region and the 3'-UTR corresponding to nucleotide positions 3487 to 4917 (NM_175485.4). A Prtg sense probe was applied to the tissue specimens as a control; no hybridization signal was detected. Immunohistochemistry The preparation of serial cryosections was processed in the same way as for ISH. After the dried cryosections were rinsed with PBS containing 0.1% Triton X-100 for 10 min, IHC was performed with a CSA II Biotin-free Tyramide Signal Amplification System according to the manufacturer's instructions (Dako). The primary anti-Prtg antibody diluted 1:500 in PBS was used. For the negative control, the application of the primary antibody was omitted from the procedure. Inhibition assay for Prtg by AS-S-ODN in organ culture The detailed procedures of the inhibition assay for Prtg by AS-S-ODN in organ culture have been described in previous studies [ 6 , 7 ]. Briefly, the mandibles were dissected from E10.5 embryos. These explants were mounted on a filter (0.8 μm pore size, Millipore, MA), and then were incubated in Fitton-Jackson's modified BGJb medium (Invitrogen) supplemented with 5% fetal bovine serum (Filtron, Brooklyn, Australia), 100 μg/ml ascorbic acid (Invitrogen), and 100 units/ml penicillin/streptomycin (Invitrogen) in a 5% CO2 atmosphere at 37°C [ 6 , 7 ]. The HVJ-liposomes (GenomeOne series, Ishihara Sangyo Kaisha, LTD., Osaka, Japan) were purchased for this study. The HVJ-liposome complex was prepared according to the manufacturer's instructions (Ishihara). The ODNs were: sense-S-ODN (SE-S-ODN): 5'-TGA ATG GCG CCT CCC GT-3', antisense-S-ODN (AS-S-ODN): 5'-ACG GGA GGC GCC ATT CA-3'. 
The SE/AS-S-ODNs corresponded to nucleotide positions 181-197 (GenBank accession number NM_175485.4 ). The treatment with AS- or SE-S-ODN or HVJ-liposome alone was performed at 24 hr intervals. Histological analysis of cultured mandibles On the 8th day of cultivation, the cultured mandibles were fixed with 4% paraformaldehyde and embedded in paraffin for histological analysis. Five-μm-thick sections were cut in an antero-posterior direction and stained with hematoxylin and eosin. The sections were examined by light microscopy. Cell proliferation analysis of the cultured organs treated with AS-S-ODN In order to address the involvement of the Prtg protein in tooth morphogenesis, cell proliferation was analyzed in the cultured organs treated with AS-S-ODN for Prtg. Immunohistochemistry using a rabbit polyclonal antibody against Ki67 (Abcam, Cambridge, UK) was performed to evaluate Ki67-positive cells in the dental epithelium (DE), dental mesenchyme (DM), and surrounding mesenchyme (SM) areas. More than one hundred target cells were examined as a population in at least three different microscopic fields of each area. The number of stained cells was divided by the total number of stained and non-stained target cells to calculate the Ki67-positive ratio. Effects of Prtg suppression by AS-S-ODN on odontogenesis-related gene transcription Real-time quantitative PCR was performed to estimate the expression of selected genes using a Thermal Cycler Dice Real Time System (TaKaRa, Shiga, Japan) with SYBR Premix Ex Taq II (TaKaRa) according to the manufacturer's instructions. At 24 hr after the AS-S-ODN inhibition assay for Prtg in organ culture, the expression of Bmp-4, Fgf8 , Lef-1 , Pitx2 , and Shh was analyzed. Gapdh was used as a reference gene. The specific primer sets were as follows: Prtg forward 5'-ATC GCA GTA GGC GTT GGC ATA-3', reverse 5'-CGC TGT CTT AGA GGC GGA TGA-3'. Bmp-4 forward 5'-AGC CGA GCC AAC ACT GTG AG-3', reverse 5'-TCA CTG GTC CCT GGG ATG TTC-3'. 
Fgf8 forward 5'-CAT CAA CGC CAT GGC AGA A-3', reverse 5'-TCT CCA GCA CGA TCT CTG TGA ATA C-3'. Lef1 forward 5'-TCA CTG TCA GGC GAC ACT TC-3', reverse 5'-TGA GGC TTC ACG TGC ATT AG-3'. Pitx2 forward 5'-AGC TGT GCA AGA ATG GCT TT-3', reverse 5'-CAC CAT GCT GGA CGA CAT AC-3'. Shh forward 5'-AGC AGA CCG GCT GAT GAC TC-3', reverse 5'-TCA CTC CAG GCC ACT GGT TC-3'. Gapdh forward 5'-TGT GTC CGT CGT GGA TCT GA-3', reverse 5'-TTG CTG TTG AAG TCG CAG GAG-3'. The relative expression levels of each targeted gene were normalized using the ΔΔCT comparative method, based on the threshold cycle (CT) values of the reference gene [ 52 ]. Statistical analysis Significant differences within a group and between groups were determined by a chi-square test of independence for the Prtg inhibition assay, and by an unpaired Student's t-test or one-way ANOVA with the Tukey-Kramer comparison test for the real-time PCR and cell proliferation data, using the Statcel2 software program. When necessary, Welch's t-test was used for unequal variances. A P -value of less than 0.05 or 0.01 was considered to be statistically significant.
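The ΔΔCT normalization step described above can be sketched as a short calculation. This is a minimal illustration of the standard 2^-ΔΔCT (Livak) method; all CT values below are hypothetical and are not taken from the authors' data:

```python
# Minimal sketch of the 2^-ΔΔCT (Livak) relative-quantification
# method. All CT values here are hypothetical, chosen only to
# illustrate the arithmetic; they are not the authors' measurements.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a sample relative to a
    calibrator sample, normalized to a reference gene (e.g. Gapdh)."""
    delta_ct = ct_target - ct_ref              # normalize the sample
    delta_ct_cal = ct_target_cal - ct_ref_cal  # normalize the calibrator
    delta_delta_ct = delta_ct - delta_ct_cal
    return 2 ** -delta_delta_ct

# Hypothetical example: a target gene in a treated mandible versus an
# untreated calibrator, both normalized to Gapdh.
fold = relative_expression(ct_target=26.0, ct_ref=18.0,
                           ct_target_cal=24.7, ct_ref_cal=18.0)
print(round(fold, 2))  # 2^-((26.0-18.0) - (24.7-18.0)) = 2^-1.3 ≈ 0.41
```

A fold change below 1 indicates reduced expression in the sample relative to the calibrator, which is the form of result reported for Bmp-4 after AS-S-ODN treatment.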
Results Characterization of the predicted Prtg protein A DNA sequence analysis was performed using the 5'-RACE and 3'-RACE methods. Based on this analysis, the Prtg protein comprises 1191 amino acids, with a signal peptide (SP), 4 Ig domains, 5 fibronectin (FN)-type III repeats, a single transmembrane (TM) domain, and a cytoplasmic domain (CD). The deduced molecular structure is shown in Figure 1 . In sequencing the five independent Prtg cDNA clones from E10.5 mice, no alternatively spliced variant was found within the coding region. The full-length (Prtg-full) cDNA and mutant cDNAs with a complete deletion of the SP region were inserted into an enhanced green fluorescent protein (EGFP) vector (Clontech), and were then transfected into MISK81-5 cells, an oral squamous cell carcinoma cell line established in our laboratory [ 22 ], to characterize the intracellular localization of the Prtg protein. The MISK81-5 cells transfected with Prtg-full showed a localization of the EGFP-fusion protein in the cell membrane by fluorescence microscopy (Figure 2B ). Meanwhile, the other transfectants with Prtg-ΔSP1, Prtg-ΔSP2, or the empty vector showed a diffuse intracellular Prtg distribution (Figure 2B ). Immunofluorescent staining for cadherin, a marker of cell membrane-associated proteins, in the transfectants with Prtg-full showed that the fluorescence images of the Prtg-EGFP fusion protein and cadherin merged (Figure 2C ). These results indicated that Prtg is localized in the cell membrane. A Western blot analysis using an affinity-purified anti-Prtg polyclonal antibody demonstrated the molecular mass ( Mr ) of Prtg to be 180 kDa (Figure 3 , lane 2). The mature mouse Prtg contains 1191 amino acids, and therefore the estimated Mr is approximately 130 kDa. Because molecules that have an SP and an extracellular domain (ECD) are often highly glycosylated, it is thought that the difference in Mr might be caused by glycosylation of the ECD (Figure 1 ). 
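The ~130 kDa estimate above can be reproduced with a back-of-the-envelope calculation. The average residue mass of ~110 Da is an assumed rule-of-thumb value, not derived from the actual Prtg sequence:

```python
# Back-of-the-envelope check of the ~130 kDa molecular-mass estimate
# for the 1191-residue Prtg protein. The average mass of an amino
# acid residue in a polypeptide (~110 Da) is an assumed rule-of-thumb
# value, not computed from the real Prtg sequence.
AVG_RESIDUE_MASS_DA = 110
n_residues = 1191

estimated_kda = n_residues * AVG_RESIDUE_MASS_DA / 1000
print(round(estimated_kda))  # 131 -> consistent with the ~130 kDa estimate

# The observed 180 kDa band therefore implies roughly 50 kDa of
# post-translational additions, consistent with heavy glycosylation.
```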
Prtg protein was purified from the mandible at E10.5, treated with N-glycosidase F, and assayed by immunoblotting to confirm ECD glycosylation and to determine the accurate size of Prtg. A Western blot analysis after the N-glycosidase F treatment showed a reduced Prtg size (Figure 3 , lane 3). Temporal expression analysis of Prtg mRNA and protein during odontogenesis Because Prtg was highly expressed in the mouse mandible at E10.5 [ 9 ], the temporal expression pattern of the Prtg mRNA during embryogenesis was examined by semi-quantitative RT-PCR with the total RNA from whole embryos. Prtg was highly expressed at E10.5 (Figure 4A ). However, there were considerable decreases in the level of mRNA in the whole body at E14.0 and E18.0 (Figure 4A ). Thereafter, the Prtg mRNA expression was examined in adult organs and compared to the expression level of the E10.5 embryo. A weak expression was demonstrated in the central nervous system of the adult mice, while no expression was detected in the other organs (Figure 4B ). These results indicated that Prtg mRNA is primarily expressed at the early-middle stages of embryogenesis [ 13 , 14 ]. The expression level of the Prtg protein in the mouse mandible was also examined by Western blotting. As shown in Figure 4C , the Prtg protein levels in the mandible were higher at E10.5 than at E12.0. The Prtg mRNA levels also decreased in the E12.0 mandible compared to the E10.5 mandible in the cDNA subtraction analysis [ 9 ]. The expression level of the Prtg protein was dramatically reduced in the tooth germ at E18.0 (Figure 4C ). Expression of the Prtg mRNA and protein in the developing tooth germ and other organs An in situ hybridization analysis was performed using a Prtg antisense cRNA probe to examine the temporal and spatial expression pattern of the Prtg mRNA during the development of mouse embryonic organs. 
In situ hybridization (ISH) for Prtg mRNA showed diverse signal intensity within the same tissue section. Therefore, the terms "strong" and "weak" are used only for the relative evaluation of the signal intensity within the same section. The whole-mount in situ expression of the Prtg mRNA at E10.5 revealed that the signal was present in the maxilla and mandible as well as in the central nervous system and eye, and thus the expression pattern of Prtg mRNA seemed to correspond to the distribution of the arch ectodermal cells (Figures 5A and 5B ). This appeared to be similar to the results of the study by Vesque et al. [ 14 ]. In addition, in light of the study by Chai et al. [ 23 ], Prtg expression appeared to involve the cranial neural crest cells. At E10.5, a strong in situ signal of Prtg was seen in the mesenchymal cells that were widely distributed in the first branchial arch, including the developing mandible (Figure 5C ). A signal was also found throughout the oral epithelial layer. At E12.0, the in situ signal was observed in the oral epithelial layer, including the thickened area and the underlying mesenchymal cells (Figure 5D ). At E14.0, the Prtg mRNA signal was rather restricted to the enamel organ and the dental mesenchyme (Figure 5F ). At E16.0, the in situ signal of Prtg was detected in the enamel organ, the dental papilla, and the dental sac (Figure 5G ), but the intensity appeared to be reduced. At E18.0, a faint in situ signal of Prtg was found in the inner enamel epithelium; the faintly positive cells were localized in the presumptive cuspal areas. Weak mRNA expression was also observed in the outer enamel epithelium. However, the in situ signal was markedly reduced in the dental papilla (Figure 5H ). A Prtg sense probe was applied to the tissue specimens as a control, and no hybridization signal was detected (Figures 5E and 5I ). An immunohistochemical analysis (IHC) was also carried out using an anti-Prtg antibody. 
Both the protein expression and the gene expression were detected in the first branchial arch in a widespread pattern, in both the epithelium and the mesenchyme. Strong signals were noted near the oral epithelial layer (Figure 6A ). A higher magnification showed the immunohistochemical signal of the Prtg protein surrounding the cells with a punctate appearance (Figure 6B ), suggesting protein localization on the cell surface. This staining pattern was common in all the sections of each embryonic day. Although immunolocalization of the protein was present in both the epithelium and the mesenchyme at E12.0, the signal intensity was reduced in comparison to that at E10.5 (Figures 6C and 6D ). At E14.0, the Prtg protein signal was conspicuously detected in the enamel organ and the surrounding condensed mesenchymal cells (Figures 6E and 6F ). The signal in the epithelium was stronger than that in the mesenchyme. At E16.0, the immunohistochemical signal of Prtg was marginally detected in the enamel organ and in the dental papilla (Figures 6G and 6H ). At E18.0, a faint immunohistochemical signal was detected in the inner and outer enamel epithelia and in the dental papilla (Figures 6I and 6J ). Thus, both the mRNA and the protein of Prtg demonstrated similar expression patterns during odontogenesis. In addition, the expression of Prtg was localized in the developing nervous system, especially in the neural tube, the retina, the lens, and the brain, throughout the embryonic period. Functional analysis of Prtg during development of the tooth germ The results of the in situ hybridization and immunohistochemical analyses suggested that Prtg might be involved in tooth morphogenesis. Therefore, an inhibition assay for the translation of Prtg mRNA was performed using a Prtg antisense-phosphorothioated-oligodeoxynucleotide (AS-S-ODN) according to the same experimental design as in our previous studies [ 6 , 7 ]. 
The expression of Prtg was examined in a time-dependent manner in the organ-cultured E10.5 mandibles, because the expression level of Prtg markedly decreased after E12.0 in comparison to that at E10.5 (shown in Figures 4A and 4C ). Real-time PCR showed a marked decrease of the Prtg expression by 48 hr in mandible culture (Figure 7A ), paralleling the decrease between the E10.5 and E12.0 mandibles (Figure 7B ). The Prtg expression in the mandibles cultured for 24 hr and 48 hr significantly decreased to less than 40% and 15% of that in the E10.5 mandible, respectively. The Prtg expression in the E12.0 mandible also showed a marked reduction, to less than 5% of that in the E10.5 mandible. A histological analysis was performed to evaluate the effects of Prtg knockdown on enamel organ formation in the cultured E10.5 mandible after phosphorothioated-oligodeoxynucleotide (S-ODN) treatment for 8 days. The period of organ culture was based on previous studies [ 6 , 7 ], in which the normally developing tooth germ reached the cap stage by the 8th day of organ culture. As shown in Table 1 , most of the cultured E10.5 mandibles treated with Prtg AS-S-ODN showed an apparent inhibition of tooth germ development after being cultured for 8 days (Figure 7D and Table 1 ). In contrast, the mandibles treated with Prtg sense-S-ODN (SE-S-ODN) showed a normal cap-like tooth germ (Figure 7C ), as did the untreated mandibles and the mandibles treated with hemagglutinating virus of Japan (HVJ)-liposome alone (Table 1 ). The development of the enamel organs treated with Prtg AS-S-ODN on day 8 of culture was significantly inhibited in comparison to that in the other groups ( p < 0.05; Table 1 ). A cell proliferation analysis was performed to address the involvement of Prtg in tooth morphogenesis. The Ki67-positive ratio was evaluated in the cultured organs treated with AS-S-ODN for Prtg. 
In the cultured organs, the target cells were examined in three areas: the "dental epithelium (DE)", the "dental mesenchyme (DM)", and the "surrounding mesenchyme (SM)". The DM was either the "dental papilla and follicle" in the cultured organs showing the normal cap-like tooth germ, or the "odontogenic ectomesenchyme" in the samples with inhibited tooth germ development. As shown in Figure 7G , no significant difference in the Ki67-positive ratio was observed in any of the target areas (DE, DM, or SM) between the control mandibles (Figure 7E ) and the cultured E10.5 mandibles treated with Prtg AS-S-ODN (Figure 7F ). While significant differences in cell proliferation were noted between the DE and SM in the control samples and in the mandibles treated with SE-S-ODN, this difference was not observed between these areas in the samples treated with AS-S-ODN (Figure 7G ). No apparent inhibition of cell proliferation by Prtg perturbation was observed in the samples at day 8 of culture in this study. Down-regulation of Bmp-4 expression by the depletion of Prtg mRNA by AS-S-ODN Based on the findings of the Prtg inhibition assay, we performed a real-time PCR analysis to compare Bmp-4, Fgf8 , Lef-1 , Pitx2 , and Shh expression between the AS-S-ODN-treated E10.5 mandibles and the other groups. These genes are expressed in the mouse E10.5 mandible and are associated with odontogenesis [ 24 - 28 ]. The samples treated with S-ODN for 24 hr were used to examine the changes in gene expression in the early phase after Prtg inhibition. Mouse Gapdh was used as an internal control. At 24 hr after AS-S-ODN treatment, the Bmp-4 mRNA expression in the E10.5 mandible was reduced to approximately 40%, and was significantly lower than in the untreated, RS-S-ODN-treated, and SE-S-ODN-treated samples ( p < 0.05, p < 0.05 and p < 0.01, respectively; Figure 8 ). 
Meanwhile, the expression levels of Fgf8, Lef-1, Pitx2 , and Shh showed no significant differences following treatment with Prtg AS-S-ODN (data not shown).
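The Ki67-positive ratio used in the proliferation analysis above reduces to a simple proportion: stained cells divided by all target cells counted in an area. A minimal sketch, with hypothetical counts (not the authors' data):

```python
# Minimal sketch of the Ki67-positive ratio: stained cells divided by
# all target cells (stained plus non-stained) counted in a field.
# The counts below are hypothetical, for illustration only.

def ki67_positive_ratio(n_stained, n_unstained):
    total = n_stained + n_unstained
    if total == 0:
        raise ValueError("no cells were counted")
    return n_stained / total

# Hypothetical field of the dental epithelium (DE): 42 Ki67-positive
# cells among 120 target cells examined.
ratio = ki67_positive_ratio(42, 78)
print(f"{ratio:.1%}")  # 35.0%
```

In practice, the ratio would be computed per microscopic field for each area (DE, DM, SM) and the groups compared statistically, as described in the Methods.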
Discussion This study showed that Prtg is highly expressed in the E10.5 mouse mandible, as identified using a cDNA subtraction method between mandibles at E10.5 and E12.0 [ 9 ]. The gene belongs to the immunoglobulin superfamily according to a structural analysis, and was expressed in the early stages of the developing tooth germ. The temporal and spatial expression of this gene suggested that it is involved in the development of the mouse lower first molar. This is the first report to describe the relationship between the expression of Prtg and tooth morphogenesis. The amino acid alignment of Prtg comprises an SP, 4 Ig domains, 5 FNIII repeats, a single TM, and a CD, thereby showing the typical structure of an IgSF member. Treatment with N-glycosidase F revealed that Prtg is a highly N-glycosylated transmembrane protein. However, because the Prtg protein is bigger than the predicted size even after the treatment with N-glycosidase, additional modifications such as O-glycosylation and phosphorylation may also be involved. The structural characteristics of Prtg are similar to those of DCC, Neogenin [ 17 ], Punc [ 18 ], and Nope [ 19 ], thus suggesting that they are all members of the DEAL subfamily. Some IgSF proteins are important in the early developmental stage of the central nervous system. Meanwhile, these IgSF proteins are also implicated in various organs and tissues such as the testis (BT-IgSF) [ 29 ], the intestine (Neogenin) [ 17 ], and mesoderm cells (Robo) [ 30 ]. Chuong et al. [ 31 ] suggested that DCC may be involved in the differentiation of stem cells within several epithelial tissues. Nierhoff et al. [ 32 ] reported that the IgSF member Nope is expressed in rat fetal liver stem cells, and this gene might therefore be useful for identifying, characterizing, and isolating hepatic stem cells from the adult liver. 
Recently, diverse functions of the DEAL members have been identified, including cell migration and axon growth guidance during development of the vertebrate nervous system [ 33 - 37 ], as well as the morphogenesis of epithelial tissues and the control of apoptosis in non-neural tissues [ 17 , 31 - 33 , 38 - 40 ]. Therefore, it is likely that Prtg has multiple functions as a member of the DEAL family, including in cell proliferation and cell differentiation during embryogenesis. The RT-PCR and Western blot analyses showed that the Prtg mRNA and protein were expressed in the early developmental stages of tooth germ morphogenesis, as well as in the central nervous system. In the mouse, we did not detect Prtg mRNA in any adult tissues except for the brain, thus suggesting that Prtg function may be required in some non-neural tissues during organogenesis. The ISH and IHC results demonstrated that this gene was highly expressed in the epithelial and mesenchymal cells at E10.5, and this expression pattern is similar to the distribution of the arch ectodermal cells [ 14 ]. The first branchial arch is derived from neural crest cells [ 23 ]. Prtg is strongly expressed in the first branchial arch in the early stage of embryogenesis (until E9.25) [ 14 ]. Because the mandible develops from the first branchial arch, these findings suggest that Prtg might play a role in the migration of neural crest cells, similar to the function of the other DEAL members during development of the nervous system [ 33 - 37 ]. Tooth morphogenesis appears to begin with a signal of an epithelial-mesenchymal interaction [ 3 - 5 ]. The site where the tooth germ is likely to form is determined by the signal of mesenchymal cells that are derived from craniofacial neural crest cells [ 23 ]. The mesenchymal cells in the early phase (E10.5 and E12.0) of tooth germ formation were found to be positive for Prtg. This may indicate that the expression of Prtg is closely associated with epithelial-mesenchymal interactions. 
An interesting finding in this study was that Prtg was expressed in epithelial cells, including the presumptive tooth germ formation area at E12.0, the epithelial cells of the tooth bud at E14.0, and the inner enamel epithelium at E18.0. The DEAL proteins have been observed in non-neural systems and are involved in the morphogenesis of epithelial tissues and the control of apoptosis during organogenesis. In fact, there are reports describing their role in the differentiation of the intestinal epithelium [ 41 , 42 ] and in the budding and branching of lung alveoli [ 39 ]. Therefore, it is reasonable to consider that Prtg is also involved in the early development of odontogenic epithelial tissue. Once the tooth developmental process is initiated, Prtg expression may be dramatically downregulated in these regions. In this study, AS-S-ODN was employed in an organ culture system to examine the functional roles of Prtg in the development of the tooth germ. The development of the tooth germ was arrested at the bud stage when AS-S-ODN was added to the culture media. However, Prtg perturbation did not lead to any apparent inhibition of cell proliferation in this study. The reason for the developmental arrest of the tooth remains unknown at present. One possibility is that the proliferation comparison on the 8th day of cultivation was made between tooth germs at different developmental stages: the cap-stage germs of the controls and the SE-S-ODN-treated samples versus the arrested germs of the Prtg AS-S-ODN-treated samples. Recently, Wong et al. reported that Prtg might have the potential to act before the onset of circulation to coordinate the rate of proliferation and the time of differentiation among the three primary germ layers [ 43 ]. In our study, the expression level of Prtg markedly decreased after E12.0 in comparison to that at E10.5. Similar results were shown during nerve development in the study of Wong et al. [ 43 ]. 
Therefore, it seems likely that Prtg participates in the development of the tooth germ during the process of odontogenesis. Furthermore, Bmp-4 mRNA expression was decreased following Prtg depletion. This result suggests that Prtg is related to the direct or indirect regulation of Bmp-4 gene transcription. Because Bmp-4 plays an important role during odontogenesis as well as embryogenesis [ 24 , 26 ], Prtg depletion by treatment with AS-S-ODN may induce developmental arrest of the tooth germ. However, there have been no reports thus far that describe the mechanism by which Prtg regulates Bmp-4 expression. The other genes, Fgf8, Lef-1, Pitx2 , and Shh , also play important roles in the determination and/or budding of the tooth germ in early development. Bmp-4 induces Lef-1 expression [ 44 ]. Pitx2 transcription is partially regulated by Lef1 [ 45 ]. Meanwhile, the Lef-1 expression overlaps that of Pitx2 approximately 1.5 days after the onset of Pitx2 expression, and Pitx2 regulates the Lef-1 isoform expression [ 46 ]. The interactions between these products are complex, and further studies will likely clarify the interrelationship between them and their role in development. Although Bmp-4 downregulation would be expected to extend to other genes, Prtg depletion did not significantly affect the expression of these genes in the early phase (within 24 hr). While Pitx-2 is expressed within the entire left atrial chamber of E12.5 mouse hearts [ 47 ], the Prtg protein is not detectable within cardiac cells of the atrial and ventricular chambers of E8.25 to E10.5 mouse hearts [ 43 ]. The interactions between Prtg and these proteins, and the signal transduction pathways associated with Prtg, have not been identified so far. Therefore, future studies will be needed to clarify the interaction between Prtg and Bmp-4 during signal transduction and subsequent gene expression, as well as that among the other genes during tooth germ development. 
Alternatively spliced variants were not identified within the coding region of the Prtg cDNA from E10.5 mice in this study. In contrast, mouse Neogenin (mNeogenin) has four alternatively spliced exons, three within the extracellular domain and a fourth within the cytoplasmic domain. Three of these alternatively spliced exons are developmentally regulated [ 48 ]. Interestingly, DCC also contains an alternatively spliced exon within the extracellular domain [ 49 , 50 ]. The expression of DCC with this alternative exon is also regulated throughout embryogenesis, as seen with the alternative forms of mNeogenin [ 51 ]. Therefore, when more clones from various embryonic stages are analyzed, alternatively spliced forms of Prtg may yet be found in neuronal tissue. However, Prtg expression was dramatically downregulated in the tooth germ after E12.0. The regulation of Prtg function might therefore be different from the mechanisms of DCC and Neogenin.
Conclusion This study demonstrated that Prtg is preferentially expressed in the early stage of organogenesis. It characterized the expression pattern of Prtg in the developing tooth germ and, through an inhibition assay for Prtg using AS-S-ODN in organ culture, indicated possible functional implications of this gene in tooth germ morphogenesis. Prtg, an IgSF family member, is involved in the initial development of the tooth germ and in the differentiation of the inner enamel epithelial cells in the mouse lower first molar. Future investigations of organ cultures of mandibles earlier than E10.5 are therefore expected to clarify this process further.
Background Protogenin (Prtg) was identified as a gene highly expressed in the mouse mandible at embryonic day 10.5 (E10.5) by a cDNA subtraction method comparing mandibles at E10.5 and E12.0. Prtg is a new member of the deleted in colorectal carcinoma (DCC) family, which comprises DCC, Neogenin, Punc, and Nope. Although these members play an important role in the development of the embryonic central nervous system, recent research has also shed light on their roles in non-neuronal organs. However, very little is known about the requirement for Prtg in non-neuronal organs during fetal development and how it may be associated with tooth germ development. This study examined the functional implications of Prtg in the developing tooth germ of the mouse lower first molar. Results Prtg was preferentially expressed in the early stage of organogenesis. Prtg mRNA and protein were widely expressed in the mesenchymal cells of the mandible at E10.5, and the oral epithelial cells were also positive for Prtg. The expression intensity of Prtg after E12.0 was markedly reduced in the mesenchymal cells of the mandible and was restricted to the area where the tooth bud was likely to form. Signals were also observed in the epithelial cells of the tooth germ, and weak signals were observed in the inner enamel epithelial cells at E16.0 and E18.0. An inhibition assay using a hemagglutinating virus of Japan-liposome containing Prtg antisense-phosphorothioated-oligodeoxynucleotide (AS-S-ODN) in cultured mandibles at E10.5 showed significant growth inhibition of the tooth germ. The relationship between Prtg and odontogenesis-related genes was examined in the mouse E10.5 mandible, and we verified that Bmp-4 expression was significantly decreased in the mouse E10.5 mandible 24 hr after treatment with Prtg AS-S-ODN. 
Conclusion These results indicate that Prtg may be related to the initial morphogenesis of the tooth germ leading to the differentiation of the inner enamel epithelial cells in the mouse lower first molar. A better understanding of Prtg function will thus be critical for revealing the precise mechanisms of tooth germ development.
Authors' contributions KFT and TK carried out the experimental work. KFT performed the immunoassays, in situ hybridization and the organ culture studies. KFT also helped to draft the manuscript. TK carried out the molecular genetic studies, and performed the statistical analysis. TK also participated in the design of the study and coordination, and helped to draft the manuscript. IK, MX, HF, YO and HW helped to conduct the organ culture studies with AS-S-ODN, and helped to make the histological analysis. HY participated in the sequence alignment and structure analysis. IK, KN and HW participated in the histological analysis and cell proliferation analysis. TS and YT helped to review the data. HS conceived the study, participated in its design and coordination, and drafted the manuscript. All authors read and approved the final manuscript.
Acknowledgements The authors would like to thank Mrs. S. Ono for her excellent technical assistance in this study. The authors also would like to thank Dr. K. Matsuo for valuable discussions. This research was funded in part by Grants-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology of Japan, to H.S. (#17390487 and #20390466).
CC BY
no
2022-01-12 15:21:37
BMC Dev Biol. 2010 Nov 25; 10:115
oa_package/b9/da/PMC3014897.tar.gz
PMC3014898
21159184
Background Hepatocellular carcinoma (HCC) is the most common primary cancer of the liver. Its incidence is increasing, and HCC has risen to become the fifth most common malignancy worldwide and the third leading cause of cancer-related death, exceeded only by cancers of the lung and stomach [ 1 , 2 ]. Surgery is the only potentially curative treatment for HCC. In carefully selected patients, resection and transplantation in fact allow survival rates of 60% to 70% and should be considered the preferred treatment options in early-stage disease, with assessment of hepatic functional reserve being essential for treatment planning [ 3 ]. The percutaneous treatments for HCC, percutaneous alcohol injection (PEI) and radiofrequency thermal ablation (RF), are an alternative to surgery in patients with early-stage disease who are not candidates for resection or transplantation [ 4 , 5 ]. The majority of patients in Western countries, however, present with intermediate- or advanced-stage disease at diagnosis. These patients are therefore candidates for locoregional treatments, including transarterial embolization and chemoembolization, and for systemic treatments, including chemotherapy, immunotherapy, and hormonal therapy [ 6 ]. Only recently has a molecular targeted drug, sorafenib, proved effective in these patients [ 7 - 9 ]. TACE represents a crucial treatment option for HCC; however, comparative assessment of clinical findings is often hampered by the considerable variability in patient selection criteria and in the modalities of treatment execution [ 10 - 12 ]. Nonetheless, meta-analyses of clinical trials suggested a favorable impact of this procedure on survival [ 13 , 14 ], and the reports of Lo and Llovet independently showed a significant increase in survival in patients treated with TACE compared to control groups [ 15 , 16 ]. 
In the last few years, pTACE (precision TACE with drug-eluting microspheres) has emerged as a possible further improvement in the treatment of HCC, but few data are available about its role, particularly in comparison with traditional TACE, in the global treatment strategy for HCC patients. The primary aim of our analysis was to evaluate the role of transarterial chemoembolization, either with lipiodol (traditional TACE) or with drug-eluting microspheres (precision TACE, pTACE), in terms of response rate (RR), time to progression (TTP), and overall survival (OS) in patients with advanced HCC. The secondary aims of the study were to compare pTACE with traditional TACE and to evaluate treatment-related toxicity.
Materials and methods Patient selection We retrospectively analyzed a population of HCC patients treated with TACE (with lipiodol or drug-eluting microspheres) from 2002 to 2009 at our institution. The study included all patients consecutively treated with TACE (at our institution, patients were treated with lipiodol TACE from 2002 until 2006 and with microsphere TACE from 2007 to 2009). All patients suffered from liver cirrhosis: 70% of viral etiology (chronic HBV and HCV hepatitis), 15% of toxic etiology (alcohol), and 15% caused by genetic and metabolic diseases. Patients were divided into two groups. The first group included patients who received, as the sole treatment for HCC, either traditional TACE (selective TACE with infusion of chemotherapeutic agents associated with lipiodol, without the use of microspheres) or pTACE (superselective TACE with drug-eluting microspheres). The second group included patients who received TACE or pTACE in addition to other treatments, such as liver resection, liver transplantation, alcohol or laser ablation, radiofrequency thermal ablation, or systemic therapies. Furthermore, we separately analyzed the groups of patients treated with traditional TACE or pTACE. Patients were classified according to ECOG performance status and were staged using different staging systems to assess general clinical condition, extent of disease, and liver function: TNM, Child-Pugh, CLIP, BCLC, Okuda, JIS, MELD, and MELD-Na. For each patient, the chemotherapy dose of each treatment was recorded, and both the dose at first treatment and the cumulative dose were assessed. Patients were then divided into two groups (high and low dose) in relation to the median drug dose. 
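The median-split dichotomization described above can be sketched in a few lines; the dose values in the example are hypothetical, not data from this study.

```python
def median_split(doses):
    """Split patients into 'low'/'high' groups at the median dose.

    For an even number of patients the median is the mean of the two
    central values; patients at or above the median go to 'high'.
    """
    ordered = sorted(doses)
    n = len(ordered)
    if n % 2:
        median = ordered[n // 2]
    else:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    groups = ["high" if d >= median else "low" for d in doses]
    return groups, median

# Hypothetical first-treatment doses (mg):
groups, median = median_split([30, 53, 60, 80])
print(median, groups)  # 56.5 ['low', 'low', 'high', 'high']
```

The same helper can be applied to either the first-treatment dose or the cumulative dose, as both were recorded per patient.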
Clinical outcome evaluation and statistical analysis Treatment response was assessed through CT and MRI and α-FP assay, performed one month after treatment and then every 3 months, according to the new RECIST criteria (Response Evaluation Criteria in Solid Tumors 1.1). Radiological images were reviewed in a double-blind fashion by two radiologists. The distribution curves of survival and time to progression were estimated using the Kaplan-Meier method. Overall survival (OS) was calculated as the time interval between the date of radiological or histological diagnosis of HCC and the date of death or last follow-up. Time to progression (TTP) was calculated as the time interval between the date of traditional TACE or pTACE and the date of progression or last follow-up. Treatment toxicity was evaluated according to NCI-CTC 3.0 (National Cancer Institute - Common Toxicity Criteria 3.0). Toxicity profiles were grouped by severity (G1-G2 vs. G3-G4) and by timing (early, <1 week, vs. delayed, >1 week). The clinical variables analyzed were: gender (male vs. female), age (≤69 years vs. >69 years), ECOG performance status (0-1 vs. 2-3), TNM stage (I-IIIB vs. IIIC-IV), Child-Pugh score (A vs. B), CLIP stage (0-1 vs. >1), BCLC stage (A vs. B-C), Okuda stage (I vs. II vs. III), JIS stage (0-1 vs. >1), MELD score (≤10 vs. 11-15 vs. >15), MELD-Na score (≤10 vs. 11-15 vs. >15), exclusive TACE vs. TACE plus other treatments, type of TACE (traditional lipiodol TACE vs. pTACE with drug-eluting microspheres), and number of re-treatments (1 vs. 2 vs. ≥3). The association between variables was estimated using the chi-square test. Cox multiple regression analysis was used for the variables found significant at univariate analysis. Differences between groups were considered significant at p < 0.05.
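As an illustration of the product-limit (Kaplan-Meier) method used above for the OS and TTP curves, a minimal estimator can be written directly; the follow-up times and censoring flags below are hypothetical, not data from this study.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  -- follow-up time for each patient (e.g. months)
    events -- 1 if the endpoint (death/progression) was observed,
              0 if the patient was censored at that time
    Returns (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        j, deaths = i, 0
        while j < len(data) and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= (j - i)   # events and censorings both leave the risk set
        i = j
    return curve

# Hypothetical follow-up, in months (0 = censored at last follow-up):
follow_up = [3, 8, 8, 12, 16, 24, 30, 30]
observed  = [1, 1, 0, 1, 0, 1, 1, 0]
for t, s in kaplan_meier(follow_up, observed):
    print(f"S({t:>2}) = {s:.3f}")
```

The median OS or TTP is then read off the curve as the first time at which S(t) drops to 0.5 or below.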
Results One hundred and fifty patients were available for our analysis: 122 (81%) males and 28 (19%) females. Median age was 69 years (range 49-89) (Table 1 ). Eighty-two patients (55%) received TACE or pTACE as the only therapeutic approach, while 68 patients (45%) also received other treatments. In the group of patients treated with TACE only, 50 (61%) underwent traditional TACE, while 32 (39%) received pTACE with microspheres. All groups of patients showed similar clinical characteristics according to all staging systems used (Table 2 ). In the whole group, median survival was 32 months, while median time to progression was 24 months. Patients treated with TACE only showed a median survival of 30 months, compared to 32 months for patients treated with other treatments in addition to TACE (p = 0.69). Time to progression was 26 months versus 24 months, respectively, in patients treated with TACE only and in those also treated with other therapies (p = 0.85). Median overall survival was 46 months for patients undergoing traditional TACE and 19 months for those treated with pTACE (p < 0.0001) (Figure 1 ), and time to progression was 30 months versus 16 months for patients receiving traditional TACE or pTACE, respectively (p = 0.003) (Figure 2 ). These results were confirmed among the patients who received traditional TACE or pTACE as the only treatment approach: median overall survival was 46 months for patients treated with lipiodol TACE compared to 14 months for patients treated with pTACE (p = 0.0002) (Figure 3 ), and median time to progression was 32 months for patients treated with traditional TACE compared to 13 months for patients treated with pTACE (p = 0.014) (Figure 4 ). At univariate analysis, age (p < 0.0001), Okuda stage (p = 0.046) (Figure 5 ), type of TACE (p < 0.0001), and number of TACE treatments (p = 0.003) were found to be prognostic factors influencing overall survival. 
Type of TACE (p = 0.0003) and number of TACE treatments (p = 0.004) were also found to be prognostic factors influencing time to progression. At multivariate analysis, age, Okuda stage, type of TACE, and number of TACE treatments proved to be independent prognostic factors influencing overall survival (p < 0.0001); only type and number of TACE treatments proved to be independent prognostic factors influencing time to progression (p < 0.0001). The overall response for patients treated with lipiodol TACE or pTACE, respectively, was: complete response in 17 (20%) and 14 (24%) patients, partial remission in 32 (39%) and 19 (33%) patients, stable disease in 16 (19%) and 7 (12%) patients, and progressive disease in 18 (22%) and 18 (31%) patients. No statistically significant difference in objective response (assessed according to RECIST criteria) was found between the groups treated with lipiodol TACE or microsphere pTACE (Table 3 ). The toxicity profiles were not statistically different between the groups treated with lipiodol TACE or pTACE (Table 4 ). In the overall series, 32 (21%) patients underwent a minimum of 3 TACE treatments, 39 (26%) underwent 2 treatments, and 79 (53%) received a single treatment. Across these groups, a statistically significant difference was noted for both overall survival (p = 0.003) (Figure 6 ) and time to progression (p = 0.0042) (Figure 7 ). No correlation could be detected between the number of treatments performed, stage of disease, and liver function. Fifteen (19%) patients who received traditional TACE or pTACE only were treated with at least 3 TACE sessions and showed a median survival of 74 months; 24 (29%) received 2 treatments, with a median survival of 29 months (range 3-43); and 43 (52%) underwent a single treatment, with a median survival of 25 months (range 3-87) (p = 0.0286). The difference in time to progression was not statistically significant (p = 0.057). 
In the whole patient population, statistically significant differences were noted in relation to the chemotherapy dose administered (<53 mg vs. ≥53 mg) at the time of the first TACE or pTACE, for both median overall survival (46 months vs. 24 months, p < 0.0001) and time to progression (30 months vs. 17 months, p = 0.0061).
Discussion Several studies have demonstrated the efficacy of lipiodol TACE for the treatment of HCC. However, comparative assessment of results is often hampered by the considerable variability in patient selection criteria and in the modalities of treatment administration. The favorable results on overall survival reported for lipiodol TACE by retrospective studies were initially questioned by randomized controlled clinical trials with conservatively treated control groups [ 10 - 12 ], with subsequent meta-analyses of previous clinical trials suggesting a favorable impact of this procedure on survival [ 13 , 14 ]. More recently, the reports of Lo and Llovet independently showed a significant survival improvement for patients treated with TACE compared to control groups [ 15 , 16 ]. These results are probably attributable to stringent criteria for patient selection and to the maintenance of results over time through repetition of the procedure, with an average of 2.8 TACE treatments per patient. In recent years, pTACE with microspheres has been increasingly advocated for the management of patients with HCC, and recent studies have validated its effectiveness in terms of objective response rate [ 17 ]. Two recent trials presented at the American Society of Clinical Oncology Annual Meeting 2009, one retrospective [ 18 ] and one prospective [ 19 ], showed an advantage in terms of overall survival and objective complete responses in favor of pTACE with microspheres for patients with unresectable HCC. Our experience with microsphere treatment could not confirm these findings, in particular for overall survival and time to progression. On the contrary, in our series median overall survival was improved in the group of patients treated with lipiodol TACE compared to the group treated with microspheres, while no significant differences were noticed in terms of response rate. 
Although these apparently conflicting results may be related to the retrospective nature of our study, to differences in the patient populations investigated, and to inevitable selection bias, we should note that the sample size analyzed in the present study is considerably larger than that of the analogous retrospective trial by Dhanasekaran et al. The enrollment time itself (11 years in the study by Dhanasekaran vs. 7 years in our analysis) could also have influenced results, with the longer enrollment in the trial by Dhanasekaran possibly compromising sample homogeneity. Unfortunately, the trial by Lencioni et al. does not include information about overall survival and time to progression, but only data about response rate, which was improved for pTACE. Nevertheless, although not significantly different in our study, the response rates for TACE and pTACE are comparable to those reported by Lencioni, suggesting that our results are reproducible in clinical practice. It is possible that pTACE with microspheres has a greater embolizing effect than lipiodol TACE; this would lead to an increased release of tumor growth factors in response to hypoxia, with a consequently higher probability of recurrence and reduced overall survival and time to progression. The response rate, assessed one month after treatment, is nevertheless similar between the two groups, because such molecular mechanisms would not be able to produce a statistically significant difference over so short a time. In this setting, treatment with sorafenib may represent a valuable asset to further improve clinical results. Our analysis also showed a more pronounced treatment benefit in older patients. This observation may be related to a more aggressive tumor behavior in younger patients, to a more indolent tumor progression in older age, or to a combination of both. 
Many patients in our series received several sessions of TACE or pTACE during their medical history. These patients seem to have obtained an advantage in terms of overall survival and time to progression compared to those treated with a single TACE or pTACE session. This seems to imply that certain biological characteristics could make some HCCs more or less responsive to treatment with TACE. These considerations should, of course, be considered merely speculative; further studies focusing on the biological and clinical characteristics of HCC should be conducted before definitive conclusions can be drawn. The observation that patients who received a below-median dose of drug may have an advantage in terms of overall survival and time to progression compared to those who received an above-median dose deserves further comment. It is possible that a higher dose of chemotherapy results in additional damage to a liver already heavily compromised by the underlying disease, rather than in an advantage measurable as tumor shrinkage. Another crucial point of discussion in HCC is the use of a staging system that is effectively reproducible. In our study, none of the staging systems commonly used in clinical practice proved able to classify patients from a prognostic point of view, with the exception of the Okuda system, which proved able to influence overall survival (p = 0.046). Unlike most other malignancies, for which the staging systems are well codified and universally accepted, the staging systems proposed for HCC are not universally adopted and shared. One of the reasons it is difficult to obtain reliable results is that, in most cases, the tumor occurs in patients with liver cirrhosis; tumor stage, liver function, and clinical characteristics may therefore concur differently to define subgroups of HCC in different patients. 
In this perspective, the results of our analysis agree with the majority of studies in the literature.
Conclusion The clinical management of HCC is becoming increasingly complex as therapeutic options expand. The patient has, in most cases, two diseases, the cancer and the underlying liver disease, and the latter often heavily influences, through mechanisms not yet completely clear, the response to cancer therapy and the prognosis. It is therefore clear how crucial multi-specialist management of patients with HCC is. In this framework, loco-regional treatment still plays an important role and remains an essential point of comparison even, and perhaps especially, in the era of biological therapies.
More data about TACE and pTACE seem necessary to better define the global treatment strategy for HCC. The aim of our analysis was to evaluate the role of TACE, either with lipiodol (traditional TACE) or with drug-eluting microspheres (pTACE), in terms of response rate (RR), time to progression (TTP), overall survival (OS), and toxicity in HCC. Patients with HCC undergoing traditional TACE or pTACE (either alone or in combination with other treatment options) were eligible. One hundred and fifty patients were analyzed. In the global patient population, median OS was 46 months for lipiodol TACE and 19 months for pTACE (p < 0.0001); TTP was 30 months versus 16 months for patients receiving TACE or pTACE, respectively (p = 0.003). These results were confirmed among the patients who received exclusively TACE or pTACE. Neither RR nor toxicity differed between TACE and pTACE. At multivariate analysis, age, Okuda stage, type of TACE, and number of TACE treatments proved to be independent prognostic factors influencing overall survival. In our experience, lipiodol TACE showed better OS and TTP than pTACE, without differences in toxicity profile and RR. Among the staging systems analyzed, only the Okuda stage seemed able to reliably predict patient outcome.
Abbreviations (TACE): Transarterial chemoembolization; (traditional TACE): TACE with lipiodol; (precision TACE, pTACE): TACE with drug-eluting microspheres; (RR): response rate; (TTP): time to progression; (OS): overall survival; (HCC): hepatocellular carcinoma; (PEI): percutaneous alcohol injection; (RF): radiofrequency thermal ablation. Competing interests The authors declare that they have no competing interests. Authors' contributions MS: conception, design, analysis and interpretation of data, revising the manuscript. GSB: conception and design. LF: conception, design, acquisition analysis and interpretation of data, writing of the manuscript. MDPP: acquisition analysis and interpretation of data. CP: acquisition analysis and interpretation of data. RC: acquisition of data. RB: acquisition analysis and interpretation of data. SA: acquisition analysis and interpretation of data. CM: acquisition of data. AR, CM, EA, and AB: revised the study. SC: conception, design, analysis and interpretation of data, revising the study. All authors read and approved the final manuscript.
CC BY
no
2022-01-12 15:21:37
J Exp Clin Cancer Res. 2010 Dec 15; 29(1):164
oa_package/dc/0e/PMC3014898.tar.gz
PMC3014899
21126335
Background There is growing interest in the effects of physical disease on patients with schizophrenia. Physical disease can affect psychiatric signs and symptoms, response to psychoactive drugs, life expectancy, and use of healthcare services [ 1 - 5 ]. However, there is no consensus about how to treat or prevent physical disease in patients with schizophrenia [ 6 , 7 ], mainly because of the difficulties involved in selecting and analyzing representative samples of such patients [ 8 ]. In Spain, following psychiatric reform and the de-institutionalization process in the 1980s, most people with schizophrenia live in the community and receive public universal healthcare in the same centers used by the general population [ 9 - 11 ]. In this scenario, our hypothesis is that the study of physical disease in hospitalized people with schizophrenia may provide relevant information for clinical practice and healthcare planning. The objective of the present study was to examine physical disease in hospitalized people with schizophrenia, describe its epidemiological characteristics, and identify the most prevalent physical diseases as well as their impact on mortality, by analysis of a national administrative database.
Methods This study used the National Hospital Discharge Registry of Spain, the official database of the Ministry of Health [ 12 ]. This information is derived from discharge reports from all acute-care hospitals and is representative of the national population, as it includes data on over 90% of all annual hospital admissions nationwide. The registry, mandated by law, includes demographic data; clinical data, including diagnoses (one main or primary diagnosis and up to 12 additional diagnoses, all of which are considered secondary diagnoses) coded according to the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM); dates of admission and discharge; type of admission; and characteristics and disposition upon hospital discharge. We used population figures from the 2003-04 National Health Survey [ 13 ] to compare physical-disease prevalence between our study population and the general population. We also obtained national demographic data from the National Statistics Institute [ 13 ]. Period of analysis We used data from the national database covering the period January 1 to December 31, 2004. Case selection Cases were selected by identification of codes corresponding to schizophrenia (ICD-9 codes 295.xx) among hospitalized subjects aged ≥15 years. Afterward, to avoid overestimation of comorbidities and outcomes, we filtered and de-duplicated the database, examining the number of hospital discharges per patient in the analysis period. To this end, the following identification variables were used: birth date, gender, admission date, discharge date, postal code, and readmission (coded as a binary variable). In this refinement process, 1,776 cases were identified, within the analysis period, as having been readmitted to the same hospital with the same main diagnosis. 
Subsequently, among the admissions identified for each patient, the one with the most complete coding of the main and secondary diagnoses was chosen. Comorbidities For each case, physical disease was characterized by ICD-9 codes: infectious diseases (001-139); neoplasms (140-239); endocrine diseases (240-279); hematological diseases (280-289); neurological diseases (320-389); diseases of the circulatory system (390-459); respiratory diseases (460-519); diseases of the digestive system (520-579); diseases of the genitourinary tract (580-629); complications of pregnancy, childbirth, and the puerperium (630-677); diseases of the skin and subcutaneous tissue (680-709); diseases of the musculoskeletal system and connective tissue (713-739); and injury and poisoning (800-999). Within each category of physical illness, conditions of special clinical relevance, such as chronic obstructive pulmonary disease (COPD), ischemic heart disease (IHD), myocardial infarction (MI), and diabetes, were analyzed specifically. In addition, specific codes were used to identify abuse of or dependency on drugs (ICD-9 codes 304.8, 304.2, 305.9), alcohol (305.0, 303.9), and tobacco (305.1, 989.84, E869.4), given their known capacity for generating or complicating the course of physical disease in patients with schizophrenia [ 14 ]. Finally, to determine the extent of physical comorbidities of known prognostic value, we used a validated ICD-9 version [ 15 ] of the Charlson comorbidity index [ 16 ]. In accordance with the prior literature [ 17 ], four score groups (0, 1-2, 3-4, >4) were employed. Ethical issues The study was exempt from institutional review board approval because only de-identified administrative data were used. Data analysis This descriptive study analyzed the prevalence (and 95% CIs) of selected physical morbidities and their distribution according to gender and age quartiles. 
Data are summarized as frequencies and percentages for categorical variables. Continuous variables are presented as means and standard deviations (SD). For between-group comparisons, we used χ 2 tests for categorical data. Continuous variables were analyzed with Snedecor's F test or the Mann-Whitney U test. Odds ratios (ORs) with 95% confidence intervals were computed where appropriate. We used an exploratory logistic regression analysis to identify the impact of physical illness upon in-hospital mortality. The age- and gender-standardized rates were calculated by direct standardization based on the 2004 Spanish population aged ≥15 years [ 18 ]. We estimated expected values for mortality by 10-year age groups based on the 2004 Spanish population aged ≥15 years. Observed and expected numbers of deaths were used to calculate standardized mortality ratios (SMRs) versus the general population [ 2 ]. In order to explore the impact of physical disease on the risk of death, SMRs were calculated separately for the subgroups of cases with and without ICD-9 codes of physical disease. All analyses were performed with SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL). A p-value < 0.05 was considered significant.
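The odds ratios with 95% confidence intervals mentioned above follow from the standard 2x2-table arithmetic; a minimal sketch using the Woolf (log-normal) interval is shown below. The counts in the example are hypothetical, and the paper does not state which CI method SPSS applied, so this is illustrative only.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-normal) 95% CI.

        exposed:   a with outcome, b without
        unexposed: c with outcome, d without
    All four cells must be non-zero for this approximation.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical table: 10/100 deaths in one group vs. 5/100 in the other.
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```

An interval that includes 1.0, as in the paper's gender comparison (OR 1.22, 95% CI 0.99-1.50), indicates that the difference does not reach significance at the 5% level.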
Results Of the 3,951,214 hospital discharges registered nationwide in subjects ≥15 years of age in 2004, and after the database de-duplication process, 16,776 records with schizophrenia were eligible for analysis (incidence rate 46.23 cases per 100,000 population/year). Of these, 64% of hospitalizations (n = 10,745; 29.6 cases per 100,000 population/year) appeared to be directly associated with schizophrenia, inasmuch as schizophrenia was recorded as the principal diagnosis, whereas in the remaining 36% (n = 6,031; 16.63 cases per 100,000 population/year) the primary cause of hospitalization was some other disease. Stratification by age group is shown in Figure 1 . As shown in Table 1 , hospital admissions were mostly for acute conditions through emergency departments and were more frequent in men. The mean age of the cases was 43 years, and men were significantly younger than women (41.2 ± 14.7 years vs . 47.9 ± 17 years; p < 0.001). The mean number of physical ICD-9 codes was 1.22 ± 1.38 (range 0-8), and women had significantly more codes than men (1.26 ± 1.4 vs . 1.20 ± 1.3; p = 0.002). Overall, 61% of patients had at least one physical ICD-9 code, and 32% had more than one. Stratification by age showed that 50% of cases were younger subjects (15-31 years of age) and that the number of ICD-9 codes increased with age. Thus, among patients aged 53 years or more, 84% had at least one physical ICD-9 code (17% had one code; 24% two codes; and 43% three or more codes). This increase with age occurred in both men and women. Furthermore, 20% of the cases had Charlson indices greater than zero, although there was no statistically significant difference between the genders. The severity of this index rose significantly with age, independent of gender. Addiction to drugs, alcohol, or tobacco was the most significant problem, and about one-third of cases had a code indicating substance abuse or dependency (Table 2 ). 
With respect to the defined ICD-9 groups, the most frequent were endocrine problems, circulatory and respiratory diseases, and injury-poisoning. Within these categories, diabetes mellitus and chronic obstructive pulmonary disease (COPD) were pre-eminent, with overall prevalences of 8% and 5.5%, respectively. Ischemic heart disease was present in 338 cases; of these, 185 had myocardial infarction, corresponding to an overall prevalence of 1.1% (95% CI: 0.94-1.26). The number of cases in all ICD-9 groups increased with age; in the case of endocrine and circulatory diseases, almost 40% of the population over the age of 53 was affected. In addition, there were gender-related differences in the prevalence of several ICD-9 groups (Table 3 ). Approximately 13% of cases (n = 2,210) underwent surgical procedures during hospitalization, with digestive (n = 349) and musculoskeletal (n = 351) procedures being the most common. A comparison between our data and the official data for the Spanish population provided by the National Health Survey [ 13 ] indicates that diabetes rates are clearly higher, twice as high in the case of women, than those observed for the general Spanish population (5.02% overall, 5.29% in women, and 4.73% in men). The frequency of neoplasms observed in our study is also higher than that reported in the population data for Spain (2.37% overall, 1.92% in men, and 2.79% in women). Similarly, our cases register higher rates of tobacco and alcohol abuse/dependence than the general Spanish population, of which 12.8% and 2.55% (4.5% of men and 0.6% of women) are classified as heavy smokers and excessive drinkers, respectively, as well as a high rate of AIDS (1.61 per 1,000 population). On the other hand, we found no pronounced differences in the frequency of COPD (5.33% in the overall Spanish population) or ischemic heart disease (2.39% overall, 3.2% in men, and 1.62% in women). 
Regarding outcomes, 88% of the cases (n = 14,701) returned home; 6.2% (n = 1,036) were transferred to other hospitals; 2.1% (n = 356) were transferred to a socio-health care center; and 2.3% (n = 387) died in hospital. No significant differences were observed in hospital mortality rates between women and men (2.6% vs . 2.1%; OR: 1.22, 95% CI: 0.99-1.50). The mean age of patients who died was 63 years. In both men and women there were statistically significant differences in mortality according to age, with a rise in mortality after the age of 40. Mortality in cases without an associated physical illness was 0.2% (n = 12), while that in cases with one or more physical-disease codes was 3.7% (n = 375). Furthermore, cases with schizophrenia as the main discharge diagnosis presented a hospital mortality of 0.3% (n = 27 cases), whereas that in cases with a physical illness as the primary diagnosis was 6% (n = 360 cases). The distribution of the main discharge diagnoses by gender is shown in Table 4 . To investigate factors associated with in-hospital mortality, we performed an exploratory logistic regression analysis which included gender, age quartiles, ICD-9 main diagnostic blocks at discharge, and categorized Charlson index scores. The results indicated that the risk of in-hospital death was significantly correlated with age, Charlson index score, and several main ICD-9 codes of physical disease (Table 5 ). There were no significant interactions between these variables (p = 0.094). Moreover, there were no significant differences in mortality by gender after controlling for age, Charlson index, and ICD categories. Our study found 387 observed deaths, whereas the expected number of deaths based on general-population estimates for the same calendar time amounted to 167 [ 13 ]. The calculated average SMR was 2.32 (95% CI: 2.09-2.56). 
Furthermore, the excess over expected mortality relative to the general population was disproportionately higher in the subgroup of cases with physical disease (Figure 2 ). In fact, the calculated overall SMR in this group of cases was 3.68 (95% CI: 3.31-4.07).
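The SMR figures above follow directly from the observed and expected death counts (SMR = O/E). The paper does not state which confidence-interval method it used; as a minimal sketch, Byar's approximation to the exact Poisson limits on the observed count is one standard choice, and it reproduces the reported 2.09-2.56 interval:

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with a 95% CI from Byar's approximation
    for the Poisson-distributed observed count (method assumed, not stated
    in the paper)."""
    smr = observed / expected
    # Byar's approximate limits on the observed count, then scaled by E
    lo = observed * (1 - 1 / (9 * observed)
                     - z / (3 * math.sqrt(observed))) ** 3
    hi = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                           + z / (3 * math.sqrt(observed + 1))) ** 3
    return smr, lo / expected, hi / expected

smr, lo, hi = smr_with_ci(387, 167)
print(round(smr, 2), round(lo, 2), round(hi, 2))  # 2.32 2.09 2.56
```

The same function applied to the physical-disease subgroup's observed and expected counts would yield that group's SMR of 3.68.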
Discussion This study is, to the best of our knowledge, one of the first to provide nationally representative estimates of the prevalence and characteristics of physical disease in hospitalized patients with schizophrenia in Spain. Our results indicate that schizophrenia is associated with a substantial burden of physical comorbidities; that these comorbidities appear early in life; and that they have a severe impact on mortality. Also, in agreement with prior reports [ 3 , 4 , 19 , 20 ], our data indicate that hospitalized schizophrenic individuals often have numerous physical diseases, and that several of these diseases are of known clinical severity and prognostic relevance; however, the study design does not allow us to make causal inferences. As for the most prevalent ICD-9 groups, our data were in general agreement with international figures. Thus, the rates of endocrine, circulatory, respiratory, and digestive diseases in our population were comparable to those in other studies that analyzed whole populations or used administrative data [ 19 , 21 , 22 ]. Certain physical diseases (substance abuse, injury-poisoning, and infections such as AIDS) were much more frequent in young people [ 20 , 23 ]. In contrast, circulatory and endocrine diseases were more common in patients over the age of 40 years. Our data also identify gender-related differences in the physical diseases of patients with schizophrenia. Thus, compared with women, men had significantly higher rates of substance abuse (alcohol, drugs, and cigarette-smoking [ 24 ]), and a higher prevalence of certain diagnostic groups, such as chronic respiratory processes, digestive diseases, and infectious diseases [ 6 , 23 , 25 ]. Although men had a lower overall risk of circulatory disease, they suffered more from ischemic heart disease and myocardial infarction. 
On the other hand, women were more likely to have endocrine diseases, musculoskeletal and connective tissue diseases, neurological diseases, and neoplasms, possibly related to their older age [ 25 - 27 ]. The results of our study add to the existing controversy about differences in the rates of physical illnesses in patients with schizophrenia compared to the general population [ 1 , 7 ]. Thus, a comparison of our data and official figures for the Spanish general population provided by the National Health Survey [ 13 ] suggests that subjects with schizophrenia have higher rates of substance abuse/dependency, diabetes mellitus, digestive diseases, neoplasms, and AIDS. On the other hand, we found no pronounced differences in the frequency of COPD and ischemic heart disease. These findings, though noteworthy in view of the high prevalence of related risk factors, such as diabetes and smoking, are comparable to results of previous studies. Thus, Carney and colleagues [ 19 ] found that whereas a somewhat higher percentage of persons with schizophrenia had ischemic heart disease than did controls from the general population (2.3% vs . 1.9%), the adjusted odds ratio was not significant. Regarding COPD, our data also agree with previous reports [ 25 ]. However, it should be noted that diagnosis of early-stage COPD can be difficult [ 28 ] and the reported rate of COPD may be biased by a failure to perform diagnostic spirometry [ 29 ]. In addition, a tendency to ignore a diagnosis of COPD in patients with schizophrenia has been reported [ 30 ]. Concerning mortality, this study highlights the impact of physical disease on the risk of death in people with schizophrenia. Our analysis indicates that the Charlson index score and the presence of certain physical diseases ( e.g. , respiratory, circulatory, tumoral, infectious, digestive, and injury-poisoning) significantly increase the risk of death during hospitalization. 
In addition, our data underscore that physical disease in schizophrenia was associated with disproportionately high mortality risks relative to the general population. These results, which agree with prior reports [ 2 , 31 - 33 ], raise concerns about the consequences and causes of physical disorders in patients with schizophrenia and identify a compelling need for a specific approach aimed at detecting physical comorbidities, especially those that are most common and most closely related to mortality. In this regard, the high risk of mortality from respiratory diseases in our population suggests that a specific approach be used to monitor and control modifiable risk factors, such as smoking [ 34 ]. Furthermore, as the prevalence and the type of physical comorbidity differ significantly between the genders, the preventive and therapeutic measures intended to reduce this disease burden and the associated mortality must also be gender-specific. Our study was observational, and thus we cannot definitively identify the influence of factors linked to lifestyle, adverse drug effects, or socio-economic level in explaining the pattern of physical diseases in our population. Likewise, it was not possible for us to analyze the adequacy of the healthcare received by the patients. Several authors have suggested that people with schizophrenia may receive inadequate medical treatment and experience inequalities and difficulties in accessing various medical procedures, even when free and universal healthcare is available at the point of care, as in Spain [ 35 , 36 ]. The results of this study extend previous work by providing a comprehensive overview of medical disorders associated with schizophrenia. 
In particular, our study has several strengths compared to previous reports: (a) we characterized physical diseases in a large, population-based, representative sample of patients with schizophrenia; and (b) we included all diseases and objectively classified them according to the ICD-9 system. In addition, our data represent the clinical practice patterns of professionals nationwide, allowing us to generalize the results. Furthermore, the use of the Charlson comorbidity index, which is widely used to predict hospitalization outcome, increases the validity of our observations. We must also acknowledge possible limitations of our work. This study is subject to the limitations inherent in retrospective studies using administrative databases. Data from these databases lack many measures obtainable only from chart review or survey, with the attendant potential for omitting important prognostic factors. Furthermore, these data do not allow causal inferences to be made. However, the use of such databases is well established in psychiatric epidemiology and health services research and has been shown to furnish valuable information for assessing the need for preventive and therapeutic care and for service planning [ 37 - 39 ]. Additionally, the Spanish Ministry of Health [ 12 ] systematically performs quality-assurance audits of the National Hospital Discharge Registry to verify coding adequacy. Moreover, we followed the guidelines for reporting observational studies, as outlined by the STROBE Initiative [ 40 ]. We also recognize that the presence of a control group would have allowed more far-reaching results and a more precise determination of the risk and timing of development of physical comorbidities in patients with schizophrenia. However, it was not possible for us to assemble a sufficiently large control group representative of the patients studied. 
Nevertheless, given that, to our knowledge, this work constitutes the first study with this range of diagnoses and this population size carried out in our country, we hope that our results will serve as the basis for further studies with stronger methodological designs. Finally, it is inarguable that hospitalized subjects have a disease load and severity greater than those of the outpatient population. However, our results clearly show the distribution and prevalence of the different physical diseases that can contribute to the deterioration of health, leading patients to be admitted to hospital and increasing their risk of death. As with several recent studies [ 36 , 41 ], our results have implications for the design of preventive and therapeutic programs and services for people with schizophrenia that aim to reduce the prevalence and negative impact of physical diseases in this population.
Conclusions In summary, we analyzed a nationwide database to determine the prevalence and characteristics of physical diseases in hospitalized patients with schizophrenia. Our results indicate that physical illness is a major burden for such patients, that these comorbidities appear early in life and that they have a serious impact on mortality. This information raises concerns about the consequences and causes of physical disorders in patients with schizophrenia and may prove useful in the design and implementation of preventive and therapeutic programs and for health-care service planning.
Background Physical disease remains a challenge in patients with schizophrenia. Our objective was to determine the epidemiological characteristics and burden of physical disease in hospitalized patients with schizophrenia. Methods We analyzed the 2004 Spanish National Hospital Discharge Registry, identified records coded for schizophrenia (295.xx) and characterized the physical diseases using the ICD-9 system and the Charlson Index. We also calculated standardized mortality ratios (SMRs) versus the general population adjusted by age and calendar time. Results A total of 16,776 cases (mean age: 43 years, 65% males) were considered for analysis. Overall, 61% of cases had at least one ICD-9 physical code and 32% had more than one ICD-9 code. The Charlson index indicated that 20% of cases had a physical disease of known clinical impact and prognostic significance. Physical disease appeared early in life (50% of cases were 15-31 years of age) and increased rapidly in incidence with age. Thus, for patients aged 53 years or more, 84% had at least one physical ICD-9 code. Apart from substance abuse and addiction, the most prevalent diseases were endocrine (16%), circulatory (15%), respiratory (15%), injury-poisoning (11%), and digestive (10%). There were gender-related differences in disease burden and type of disease. In-hospital mortality significantly correlated with age, the Charlson Index and several ICD-9 groups of physical disease. Physical disease was associated with an overall 3.6-fold increase in SMRs compared with the general population. Conclusions This study provides the first nationally representative estimate of the prevalence and characteristics of physical disease in hospitalized patients with schizophrenia in Spain. Our results indicate that schizophrenia is associated with a substantial burden of physical comorbidities; that these comorbidities appear early in life; and that they have a substantial impact on mortality. 
This information raises concerns about the consequences and causes of physical disorders in patients with schizophrenia. Additionally, it will help to guide the design and implementation of preventive and therapeutic programs from the viewpoint of clinical care and in terms of health-care service planning.
Competing interests The authors declare that they have no competing interests. Authors' contributions Authors CB and JM Amate designed the study, wrote the protocol, and managed literature searches and analysis. Authors CB and TL performed the statistical analysis. Author CB wrote the first draft of the manuscript. All authors contributed to and have approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/10/745/prepub
Acknowledgements Funding for this study was provided by the Spanish R&D Grant no. PI06/90571. The funding body had no further role in study design, data collection, analysis, interpretation, writing of the report, or the decision to submit the paper for publication.
CC BY
no
2022-01-12 15:21:37
BMC Public Health. 2010 Dec 2; 10:745
oa_package/c3/6e/PMC3014899.tar.gz
PMC3014900
21126339
Background Obesity and type 2 diabetes have become major global health problems [ 1 , 2 ]. It is during the past few decades that obesity and type 2 diabetes have rapidly reached epidemic proportions, not only in adults but also in children and adolescents [ 3 , 4 ]. The fact that the human genome has not changed markedly in such a short time has led to the hypothesis that obesity and diabetes are the result of gene-environment/diet interactions [ 5 , 6 ]. Increasing evidence has indicated that dietary factors may play a crucial role in promoting obesity and type 2 diabetes [ 6 , 7 ]. Recently, considerable attention has been focused on dietary carbohydrates, because epidemiological studies from the United States (US) have shown that the rising prevalence of obesity and type 2 diabetes has been accompanied by a significant increase in carbohydrate consumption during the past three decades [ 8 - 10 ]. However, the nature of the risk factors in carbohydrate diets remains unclear [ 11 , 12 ]. Since around the mid-20th century, one of the major changes in diet has been the significant increase in the content of niacin (vitamin B 3 , either nicotinic acid or nicotinamide), thiamin and riboflavin in grain (including flour) products, because of the worldwide spread of B-vitamins fortification, i.e., the addition of the B vitamins to a food above the level normally present, in order to prevent deficiency of the vitamins [ 13 ]. One likely possibility is that long-term intake of B-vitamins-fortified foods may lead to a chronic overload of the B vitamins. For example, daily niacin consumption per capita in the US has increased from 16 mg in the late 1930s (just before the implementation of niacin fortification) to 33 mg in the early 2000s [ 14 ], a level more than 2 times higher than the recommended dietary allowance (RDA) of the US Food and Nutrition Board (RDA: 14 and 16 mg/d for adult women and men, respectively) [ 15 ]. 
Obviously, a largely ignored fact is that the rapid increase in the global prevalence of obesity and type 2 diabetes has occurred following the worldwide spread of the B-vitamins fortification of foods. The overall trend is that the global increase in the prevalence of obesity and type 2 diabetes occurred first in the earliest-fortified countries, and then spread to the later-fortified countries. Whether the increased prevalence of obesity and type 2 diabetes in the past few decades involves excess consumption of the B vitamins is not known. Moreover, with the implementation of B-vitamins fortification of grains, the major source of carbohydrates, the effect of high-carbohydrate diets on obesity and diabetes has changed significantly over the past three decades. Before the introduction of B-vitamins fortification of grains, a high-carbohydrate dietary pattern had been known to be associated with a low prevalence of obesity and type 2 diabetes, and a low-fat, high-carbohydrate diet is the traditional recommendation for treating type 2 diabetes [ 16 - 18 ]. However, this traditional dietary recommendation has been challenged by the epidemiological evidence from the US that the adoption of high-carbohydrate, low-fat diets over the past three decades has been unexpectedly followed by a sudden sharp increase in obesity prevalence starting from the early 1980s. Although it has been argued that high carbohydrate intake may increase the risk for obesity and type 2 diabetes [ 8 - 10 ], what underlies the change in the effect of carbohydrate diets remains unanswered. It should be noted that the sudden sharp increase in the nationwide prevalence of obesity in the US occurred soon after the update of the B-vitamins fortification standards in 1974, which has led to a significant increase in the B-vitamins content of grain products since then. However, the relationship between these two events remains to be investigated. 
Among the three fortified B vitamins, niacin is well known to induce severe adverse effects, including glucose intolerance, insulin resistance and liver injury [ 15 , 19 , 20 ], all of which are major hallmarks of obesity and type 2 diabetes [ 2 , 4 ]. Our previous studies suggested that type 2 diabetes and childhood obesity may involve excess niacin intake [ 21 , 22 ]. Thus, a high prevalence of impaired glucose tolerance, insulin resistance, and subsequent obesity and type 2 diabetes would be expected to occur in a population exposed to long-term high intake of the B vitamins. To test this possibility, this ecological study examined the associations between the prevalence of adult obesity and diabetes in the US and the per capita consumption of niacin, thiamin and riboflavin, as well as of the main macronutrients (carbohydrate, protein, saturated fat, dietary fiber).
Methods Data sources This ecological study investigated the association between an exposure (B-vitamins consumption) and outcomes (the prevalence of obesity and diabetes) using aggregated data on a population level (the US population), and was conducted by analyzing obesity and diabetes prevalence data from all of the participants in the US National Health Interview Survey (NHIS), National Health Examination Survey (NHES) and National Health and Nutrition Examination Surveys (NHANES). The data on per capita nutrient and energy consumption in 1909-2004 [ 23 ], per capita grain consumption in 1909-2007 [ 24 ], and the grain contribution to niacin consumption and the energy consumption from major food groups in 1909-2000 [ 14 ] were derived from the databases of the Economic Research Service (ERS) of the US Department of Agriculture. ERS annually calculates the amounts of several hundred foods available for human consumption in the US and provides estimates of per capita availability. In brief, food consumption (or food disappearance) is calculated at the national level by adding the total annual production, imports, and beginning stocks of a particular commodity and then subtracting exports, ending stocks, and nonfood uses. Per capita estimates are calculated using population estimates for that particular year. To estimate the grain contribution to total niacin, the total niacin contributed by daily per capita consumption of a variety of food items, mainly including meat, poultry, fish, grain products, milk, cheese, legumes, fruits, vegetables, etc., is calculated according to the niacin content of each food, and the amount of niacin from grains is then divided by the total niacin amount (see Table 19 in Ref. [ 14 ]). ERS's food availability (per capita) data serve as indirect measures of trends in food use. 
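The ERS balance-sheet calculation described above is simple arithmetic; a minimal sketch (the function and parameter names are illustrative, not ERS's own, and the numbers in the example are invented):

```python
def per_capita_availability(production, imports, beginning_stocks,
                            exports, ending_stocks, nonfood_uses,
                            population):
    """ERS food-disappearance method: total supply of a commodity minus
    its non-consumption uses, divided by the population estimate for the
    same year."""
    disappearance = (production + imports + beginning_stocks
                     - exports - ending_stocks - nonfood_uses)
    return disappearance / population

# Illustrative numbers only: (100 + 20 + 10 - 15 - 5 - 10) / 50
print(per_capita_availability(100.0, 20.0, 10.0, 15.0, 5.0, 10.0, 50.0))  # 2.0
```

Summing such per-commodity availabilities, weighted by each food's nutrient content, yields the per capita nutrient series the study analyzes.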
The Food Availability (Per Capita) Data System provides an indication of whether Americans, on average, are consuming more or less of various foods over time. Also, the estimates of nutrients in the food supply reflect Federal enrichment and fortification standards and technological advances in the food industry [ 14 ]. The prevalence of diagnosed diabetes in the US population in 1958-2008, and in the age groups of 0-44, 45-64, and 65-74 years of both sexes in the US population in 1980-2006, were derived from the NHIS of the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention (CDC) [ 25 - 27 ]. Conducted continuously since 1957, the NHIS is a health survey of the civilian, noninstitutionalized population of the US. The survey provides information on the health of the US population, including information on the prevalence and incidence of disease. The multistage probability design of the survey has been described elsewhere [ 28 , 29 ]. Each year during the 1980-1996 NHIS, a one-sixth sub-sample of NHIS respondents was asked whether, in the past 12 months, they or any family member had diabetes. Three-year averages were used to improve the precision of the annual estimates. The NHIS was redesigned in 1997. In the redesigned survey, all sampled adults are asked whether a health professional had ever told them they had diabetes. To exclude gestational diabetes, women were asked whether they had been told they had diabetes other than during pregnancy. Also, parents of sampled children were asked whether their child had diabetes. Diabetes prevalence estimates are presented by age, race, ethnicity, and sex. Prevalence estimates were age-adjusted using NCHS estimates of the 2000 US population as the standard. Detailed descriptions of the survey methods are available on-line [ 26 , 27 ]. 
The data on the prevalence of obesity (body mass index ≥30.0) in adults aged 20-74 years in the US of both sexes were derived from the CDC's NHES (1960-1962), NHANES I (1971-1974), NHANES II (1976-1980), NHANES III (1988-1994), and the continuous NHANES 1999-2000, 2001-2002, 2003-2004 [ 30 ]. NHANES includes a series of cross-sectional nationally representative health examination surveys beginning in 1960. Beginning in 1999, NHANES became a continuous survey without a break between cycles. Each cross-sectional survey provides a national estimate for the US population at the time of the survey, enabling examination of trends over time. The survey examines a nationally representative sample of about 5,000 persons each year. The participants are located in counties across the country, 15 of which are visited each year. All participants visit the physician. Dietary interviews and body measurements are included for everyone. Health interviews are conducted in respondents' homes. Health measurements are performed in specially-designed and equipped mobile centers, which travel to locations throughout the country. The study team consists of a physician, medical and health technicians, as well as dietary and health interviewers. Detailed descriptions of the survey methods are available elsewhere [ 31 , 32 ], and on-line http://www.cdc.gov/nchs/nhanes/about_nhanes.htm . Statistical analyses A time-lag regression analysis for the prevalence of diabetes and adult obesity as a function of per capita nutrient intake was carried out to determine if per capita nutrient intake had a time-delayed effect on the prevalence of diabetes and obesity. Using SPSS software (SPSS Inc., Chicago, USA), each lag regression analysis was performed with an initial lag value of zero, and then the lag value for changes in the prevalence of diabetes and adult obesity was increased by a step of one year until the maximum coefficient of determination ( R 2 ) was obtained. 
Thus the lag time between a given nutrient intake and the prevalence of diabetes or obesity was determined. Then, a graph of the prevalence of diabetes and adult obesity against the given nutrient intake was plotted according to the lag time. A similar regression analysis was used to examine the possible relationships between intake changes of different nutrients. Statistical significance was set at P < 0.05. The data used for this study are available upon request.
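The lag search described above can be sketched as follows. This is a hypothetical re-implementation in Python (the authors used SPSS): for each candidate lag, the outcome in year t is paired with the exposure in year t − lag, and the lag giving the largest R² is kept.

```python
import numpy as np

def best_lag(exposure, outcome, max_lag=40):
    """Return (lag, R2) for the lag that maximizes R^2 of a simple linear
    regression of outcome[t] on exposure[t - lag].  Both series are annual
    and indexed by the same calendar years.  A sketch of the paper's
    lag-search procedure, not the authors' code."""
    exposure = np.asarray(exposure, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    best = (0, -np.inf)
    for lag in range(max_lag + 1):
        x = exposure[:len(exposure) - lag]  # exposure shifted back `lag` years
        y = outcome[lag:]
        n = min(len(x), len(y))
        if n < 3:
            break  # too few overlapping years to regress
        r = np.corrcoef(x[:n], y[:n])[0, 1]  # Pearson r; R^2 = r**2
        if r * r > best[1]:
            best = (lag, r * r)
    return best
```

For a simple linear regression, maximizing R² is equivalent to maximizing the squared Pearson correlation, which is why `np.corrcoef` suffices here; a synthetic outcome built as an exact 26-year-lagged function of the exposure recovers lag = 26 with R² ≈ 1.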
Results The per capita niacin consumption and the prevalence of diabetes in the US As shown in Figure 1 (open circles), there were two periods of sharp increase in per capita niacin consumption in the US: one started in 1938 and continued into the early 1940s, raising daily per capita niacin consumption from 16 mg in 1939 to 20 mg in 1949; the other began in 1974 and further increased daily per capita niacin consumption from 22 mg in the early 1970s to 33 mg in the early 2000s [ 23 ]. Following the two sharp increases in per capita niacin consumption, there were also two periods of rapid increase in the prevalence of diabetes in the US in the latter half of the 20th century: the first lasted from the early 1960s to the mid-1970s, during which the prevalence abruptly increased from 0.87% in 1959 to 2.49% in 1979; the second began in the mid-1990s, and by 2008 the prevalence had increased from 2.52% in 1990 to 6.29%. Between the two periods, the prevalence of diabetes was relatively constant. Most evidently, the prevalence of diabetes in the US population during 1958-2008 increased in striking parallel with the per capita niacin consumption in 1932-1982 (Figure 1A ). Lag-regression analysis revealed that the prevalence of diabetes was determined by the per capita niacin consumption with a lag of 26 years (Figure 1B ). The associations between the prevalence of diabetes in the US in 1980-2006 [ 26 ] and the per capita niacin consumption in 1954-1980 were similar in both sexes ( R 2 = 0.958 and 0.935 for males and females respectively, both P < 0.001). Such a significant correlation was also found in each adult age group, with a distinct lag time (24, 26 and 26 years for the age groups of 65-74, 45-64 and 0-44 years, respectively) (Figure 1C-H ). 
The per capita grain consumption and the prevalence of diabetes in the US Grains, the vehicle for niacin fortification, have become the major contributor to dietary niacin since the implementation of niacin fortification in the US. Niacin fortification increased the grain contribution to dietary niacin from 22.5% in the 1930s (i.e., before niacin fortification) to 44.8% in 2000 [ 14 ]. As shown in Figure 2A , each of the two sharp increases in the grain contribution, which occurred in the 1940s and the mid-1970s respectively, was followed by a subsequent rapid increase in the prevalence of diabetes with a lag of 26 years. The prevalence of diabetes in the US in 1958-2008 was significantly correlated with the grain contribution to niacin in 1932-1982 (Figure 2B ). The associations were similar not only in both sexes (Figure 2C and 2D , R 2 = 0.833 and 0.791 respectively for the male and female populations, both P < 0.001), but also in the different age groups ( R 2 = 0.801, 0.835 and 0.875 for the groups aged 0-44, 45-64 and 65-74 years, respectively, all P < 0.001). Figure 2E shows that, in the early 20th century, a dietary pattern of high grain consumption was associated with a low prevalence of diabetes in the US. However, the re-increase in the consumption of niacin-fortified grains has been followed by a more rapid increase, rather than a decrease, in the prevalence of diabetes in the US. The prevalence of diabetes in the US in 1997-2008 was significantly correlated with grain consumption in 1971-1982 (Figure 2F ). The per capita niacin and grain consumption and the prevalence of obesity in US adults Obesity is a known risk factor for type 2 diabetes. The present results showed that the prevalence of diabetes in the US is significantly correlated with the prevalence of obesity with a time lag of 16 years (Figure 3A ). The prevalence of obesity was significantly positively correlated with the daily per capita niacin consumption with a 10-year lag (Figure 3B ). 
The correlations were similar in both sexes (Figure 3C and 3D ), in the female age groups of 20-39 and 40-59 years ( R 2 = 0.971 and 0.977, respectively, both P < 0.001), and in the male age groups of 20-39 and 40-59 years ( R 2 = 0.842 and 0.98, respectively, both P < 0.001). Figure 4 (A and B) shows that the prevalence of obesity among the US adult population in 1960-2004 increased in parallel with the increase in the grain contribution to niacin in 1950-1994. Similar relationships were observed in both sexes (Figure 4C and 4D ). Moreover, as shown in Figure 4E , the re-increase in the consumption of niacin-fortified grains since the early 1970s has been followed by a more rapid increase in the prevalence of obesity in the US. The prevalence of obesity in the US adult population in 1988-2004 was significantly correlated with the per capita grain consumption in 1978-1994 (Figure 4F ). The per capita thiamin and riboflavin consumption and the prevalence of obesity and diabetes in the US Both thiamin and riboflavin, two other B vitamins, have also been used to fortify grains in the US since the initiation of grain fortification in the early 1940s [ 14 ]. As shown in Figure 5 , the implementation of mandatory fortification of grains also led to a rapid increase in per capita consumption of thiamin and riboflavin from the early 1940s, and the update of the fortification standards in 1974 led to a further sudden increase in the consumption of these two B vitamins (Figure 5A and 5C , open circles). The present analysis revealed that the prevalence of obesity and diabetes increased in parallel with the increase in the consumption of thiamin and riboflavin, with time lags of 10 and 26 years, respectively (Figure 5 ). The per capita macronutrient consumption and the prevalence of obesity and diabetes in the US Figure 6A shows the trends in per capita carbohydrate consumption and obesity prevalence in the 20th century. 
The re-increase in carbohydrate consumption since the late 1960s was followed by a significantly increasing prevalence of obesity. Grain and sugar (including sweeteners) are the two major contributors to dietary carbohydrate. Since the early 1970s, per capita carbohydrate consumption has shown a trend of increasing fortified-grain contribution and decreasing sugar contribution (Figure 6B ). However, as shown in Figure 6C and 6D , this regimen did not prevent the increasing trend, but rather was followed by a steep increase in the prevalence of obesity. Protein is another important macronutrient. The per capita consumption of protein and energy has shown a significantly increasing trend since the late 1960s, and increased in parallel with the obesity prevalence with a one-year lag (Figure 6E and 6F , respectively). Moreover, there was a decreasing trend in US per capita consumption of dietary saturated fats in the 1970s-1990s (Figure 7A , open circles) and of cholesterol from the mid-1940s to the late 1990s (Figure 7C , open circles), with an increasing trend in the consumption of dietary fiber from the mid-1960s to the early 2000s (Figure 7E , open circles). Unexpectedly, all of these regimens failed to prevent the increasing trends in the prevalence of obesity and diabetes in the last three decades of the 20th century (Figure 7 ). Relationships between per capita energy consumption contributed from major food groups and the prevalence of adult obesity in the US As shown in Figure 8A , most of the energy consumed by the US population is obtained from grains, sugars, meat and fats/oils. Among these, the biggest contribution is that of grains, which has undergone a dramatic change over the last century. The data showed that a high grain contribution to energy consumption was associated with a very low obesity prevalence in the US population in the early 20th century. 
However, the re-increase in the contribution of grains, fortified with more B vitamins since the early 1970s, was followed by a sharp increase in the obesity prevalence. There is a significant correlation between the grain contribution to energy consumption in 1969-1994 and the obesity prevalence in 1979-2004 (Figure 8B). In contrast, there were no significant positive correlations between the increased obesity prevalence and the contributions from the other main energy contributors, including known risk factors such as animal fats (Figure 8A), sugars (Figure 8C) and meat (Figure 8D).

Relationships between per capita B-vitamins consumption and the consumption of energy and protein in the US

As shown in Figure 9, there was a sudden increase in the consumption of both energy and protein in the US around the mid-1980s, about 10 years after the update of the fortification standards, and since then the per capita consumption of energy and protein has shown an upward trend. Lag-regression analysis revealed significant correlations between the consumption of energy and protein and the consumption of niacin, thiamin or riboflavin with a time lag of 11 years, one year longer than that between the adult obesity prevalence and the per capita consumption of the B vitamins (see Figure 3B, 5A and 5C for comparison).
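The lag-regression scan used throughout these results — correlating an exposure series with an outcome series shifted by a fixed number of years and picking the lag that maximizes R² — can be sketched as follows. The series below are hypothetical annual values, not the actual USDA/CDC data; only numpy is assumed.

```python
import numpy as np

def lagged_r2(exposure, outcome, lag):
    """R² between an exposure series and an outcome series shifted by `lag` years.

    exposure[i] is paired with outcome[i + lag]; both series are aligned
    annual values (index 0 = same start year).
    """
    x = np.asarray(exposure, dtype=float)
    y = np.asarray(outcome, dtype=float)
    n = min(len(x), len(y) - lag)
    x, y = x[:n], y[lag:lag + n]
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

def best_lag(exposure, outcome, max_lag):
    """Return the lag in 0..max_lag that maximizes R², as in a lag-regression scan."""
    return max(range(max_lag + 1), key=lambda k: lagged_r2(exposure, outcome, k))

# Hypothetical illustration: an outcome series built to track the exposure
# with a 10-year delay; the scan should recover that lag.
years = np.arange(1950, 2005)
exposure = np.linspace(10, 33, len(years)) + np.random.default_rng(0).normal(0, 0.3, len(years))
outcome = np.concatenate([np.full(10, exposure[0]), exposure[:-10]]) * 0.4
print(best_lag(exposure, outcome, 20))  # prints 10: the built-in delay is recovered
```

The same scan, applied to the per capita B-vitamins series against the obesity and diabetes series, is what yields the 10- and 26-year lags reported above.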
Discussion

Obesity and type 2 diabetes are closely linked to diet. The increasing global prevalence of obesity and type 2 diabetes implies that there might have been some common, worldwide changes in diet. Indeed, one significant change that has occurred since around the mid-20th century is food fortification with B vitamins. The present ecological study found that the nationwide prevalence of obesity and diabetes in the US in the past 50 years increased in close parallel with the per capita consumption of niacin, thiamin or riboflavin, with 10- and 26-year lags, respectively. It is obvious that the B-vitamins fortification was followed first by an increase in the prevalence of obesity, and then by an increase in the prevalence of diabetes. Thus, it seems that high-level consumption of the B vitamins, primarily due to the mandatory grain fortification with the vitamins, may be an attractive candidate for the dietary change responsible for the increased prevalence of obesity and diabetes.

B-vitamins fortification of grains and the change in the effect of high-carbohydrate diets

The prevalence of diabetes began to increase rapidly in the US in the 1960s, which was thought to be possibly due to a shift towards a dietary pattern characterized by low fiber and high saturated fats and sugar. Since around the early 1970s, a series of preventive measures have been taken, including reducing the consumption of saturated fats and sugar and increasing the intake of grains and dietary fiber. Moreover, the standards of B-vitamins fortification were updated in 1974, which led to a further significant increase in the B-vitamins contents of grain products [14]. Unexpectedly, all of these preventive measures have been followed by a sharp nationwide increase in the prevalence of obesity starting in the mid-1980s and a second rapid increase in the prevalence of diabetes in the late 1990s.
Because one of the most significant changes during this period was the significant increase in the per capita grain consumption, it is suspected that something must have happened with dietary carbohydrates [11]. Carbohydrates are the main energy source for the body. The role of carbohydrate in the diabetes meal plan remains controversial. Traditionally, high-carbohydrate, low-fat diets were associated with a lower prevalence of obesity and type 2 diabetes in the US in the early 20th century (i.e., before the mandatory B-vitamins fortification), and high-carbohydrate, low-fat diets were used for treating type 2 diabetes [16,17]. High-carbohydrate diets, although inducing hyperlipidemia [33,34], were still found to improve glucose tolerance in the late 1960s and the early 1970s [35] (i.e., more than 20 years after the implementation of mandatory fortification). In the past decade, however, a growing number of studies have shown that high-carbohydrate diets increase the risk for obesity and type 2 diabetes, and that low-carbohydrate diets may be beneficial for preventing obesity and type 2 diabetes [8-10,36] (i.e., about 20 years after the update of the fortification standards). Low-carbohydrate diets became a major weight loss and health maintenance trend in the US during the late 1990s and early 2000s [9,37]. The present study also revealed that the abrupt increase in the prevalence of adult obesity paralleled, with a 10-year lag, the re-increase in the per capita carbohydrate consumption that started in the early 1970s. It is assumed that the change in the effect of carbohydrate diets may involve a change in the type of carbohydrate consumed [11]. Gross et al. suggested that the increased prevalence of diabetes in the US may be due to a high consumption of refined carbohydrates and a lack of fiber [8].
However, the fact is that there was a significantly increasing trend in the per capita grain and fiber consumption (Figure 4E and Figure 7E, open circles) and a decreasing trend in the sugar contribution to per capita carbohydrate consumption (Figure 6D, open circles) from about the early 1970s, which was followed by a sharp increase, rather than a decrease, in the prevalence of obesity and diabetes over the following three decades. Moreover, although it is suspected that increasing consumption of fructose (mainly from beet or cane sugar, high-fructose corn syrup, fruits, and honey) may play a role in the prevalence of obesity and type 2 diabetes, there is no unequivocal evidence that fructose intake at moderate doses is directly related to adverse metabolic effects [12]. Thus, it seems unlikely that the change in sugar consumption is responsible for the sharply increasing nationwide prevalence of obesity in the US starting in the early 1980s. Grain products, a major source of carbohydrate, are used as vehicles for the mandatory B-vitamins fortification. The mandatory grain fortification has led to a nationwide increase in B-vitamins intake [14]. Therefore, the adverse effects of B-vitamins fortification, if there are any, should be nationwide. Indeed, the present study found that the increased prevalence of adult obesity and diabetes in the US is highly correlated with the consumption of B-vitamin-fortified grains. Each of the two sharp increases in the vitamin contents, induced respectively by the initiation of the fortification and the update of the fortification standards, was followed by a nationwide increase in the prevalence of diabetes with a 26-year lag. More significantly, the update of grain fortification in 1974 and the subsequent increased use of fortified-grain products was followed by an abrupt increase in the prevalence of obesity among both the adults (Figures 3 and 5) and the children in the US with a 10-year lag [22].
Obesity is known to be associated with excessive energy intake. Indeed, the present population-based study also revealed a high correlation between the obesity prevalence and the per capita energy consumption (Figure 6F). Most of the energy consumed by the US population is derived from grains, sugars, meat and fats/oils. The present data clearly showed that the contributions to per capita energy consumption from the known dietary risk factors for obesity and type 2 diabetes, such as meat and animal fats, have not increased, or have even decreased, since the early 1970s. Therefore, it seems unlikely that these known dietary risk factors alone are responsible for the nationwide sharp increase in the prevalence of obesity since the late 1970s. An interesting finding from this analysis was the strong lag-correlation between high obesity prevalence and high fortified-grain contribution to the per capita energy consumption since the early 1970s, which is totally different from the association of a high unfortified-grain contribution to energy consumption with a very low obesity prevalence in the early 20th century. An increase in the fortified-grain contribution to total energy consumption means an increase in the intake of fortified grains and B vitamins, which may lead to an excessive B-vitamin intake. Because B vitamins can stimulate appetite [15], chronic excess of B vitamins may trigger excessive energy intake, which may contribute to the different outcomes of unfortified and fortified grains. This interpretation was further supported by the finding that the per capita B-vitamin consumption was lag-correlated not only with the per capita energy consumption but also with the prevalence of obesity and diabetes.
Taken together, it seems quite possible that the nationwide increase in the prevalence of obesity and type 2 diabetes in the US in the latter half of the 20th century may involve an increase in B-vitamin consumption, primarily due to the implementation of mandatory grain fortification with B vitamins. It should be noted that the standards of B-vitamins fortification vary from country to country. For example, the level of wheat flour fortification with niacin is 52.9 mg/kg in the US and 16 mg/kg in the UK [13]. Moreover, unlike in the US, the fortification in the UK is voluntary [13]. These differences may underlie regional differences in studies of carbohydrate effects. For example, even in the early 2000s, a study from the UK still found that a low-fat, high-carbohydrate diet in overweight individuals with abnormal intermediary metabolism led to moderate weight loss and some improvement in serum cholesterol [38]. Thus, it seems necessary that the content of vitamins be taken into consideration in studies of the relationship between carbohydrates and the development of obesity and diabetes.

Excess niacin consumption and the obesity and diabetes prevalence

Niacin, one of the most stable B vitamins, is resistant to heat, light, air, acid, and alkali [39], which means that, once added to grains, little is lost during food processing and cooking [40]. The well-known common adverse effects of niacin are metabolic disturbances, such as insulin resistance and glucose intolerance, and liver injury [15,19-22], all of which are hallmarks of obesity and type 2 diabetes [2,4]. Our previous studies suggested that type 2 diabetes and obesity may involve excess niacin intake [21,22].
Although the prevalence of obesity and diabetes is also highly correlated with thiamin and riboflavin consumption, so far as we know there is no evidence yet indicating that either thiamin or riboflavin may induce glucose intolerance or insulin resistance [15]. Thus, it seems that the high prevalence of obesity and diabetes may chiefly involve niacin consumption. Human dietary niacin comes mainly from two major sources, animal flesh (meat, poultry and fish) and grains, which together accounted for about 70% of dietary niacin consumption in the US in the early 20th century [14]. The daily per capita niacin consumption from grains and animal flesh in the US was estimated to be 3.7 and 6.8 mg, respectively, in the 1930s (just before the introduction of mandatory niacin fortification), and had increased to 14.8 and 11.8 mg, respectively, by 2000, according to the contributions of meat and grain to total niacin given in the literature [14,22]. The per capita niacin consumption from grains has thus increased four-fold since the implementation of niacin fortification. By the early 2000s, the US per capita daily niacin consumption had reached 33 mg [23], which is much higher than the RDA (see Introduction) [15]. Thus, long-term excess niacin intake may be very common in the US population since the implementation of mandatory niacin fortification, and may be mainly responsible for the rapid increase in the prevalence of obesity and diabetes. According to the regression equations given in Figure 1B and Figure 3B, if the per capita niacin consumption remains at the current level (33 mg/day per capita), the prevalence of diabetes in the US would increase from about 6% at present to 7.6% by 2025, whereas the prevalence of obesity in adults has already reached its peak level. In agreement with this prediction, the recent NHANES data have shown that there was no significant change in the prevalence of obesity between 2003-2004 and 2005-2006 for either men or women [41].
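A projection of this kind follows from evaluating the fitted lag-regression line at a fixed consumption level. A minimal sketch, using hypothetical stand-in series (the paper's actual regression coefficients appear only in Figures 1B and 3B and are not reproduced here):

```python
import numpy as np

# Hypothetical paired series: per capita niacin (mg/day) and the diabetes
# prevalence (%) observed 26 years later. These numbers are illustrative only.
niacin = np.array([20.0, 22.0, 24.5, 27.0, 29.5, 31.0, 33.0])   # e.g. exposure years
diabetes = np.array([1.2, 1.6, 2.1, 2.7, 3.3, 3.7, 4.2])        # e.g. 26 years later

# Fit prevalence(t) = a * niacin(t - 26) + b by ordinary least squares.
a, b = np.polyfit(niacin, diabetes, deg=1)

# If consumption stays at 33 mg/day, the fitted line gives the prevalence
# projected 26 years ahead (the paper's 2025-style projection).
projected = a * 33.0 + b
print(round(projected, 2))
```

With the real coefficients from Figure 1B in place of this toy fit, the same two lines of arithmetic yield the 7.6%-by-2025 figure quoted above.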
B-vitamins fortification and the global increase in obesity and diabetes prevalence

Grain fortification with B vitamins, a strategy for preventing B-vitamins deficiency, was first mandatorily implemented in the US in the early 1940s [42]. Soon after, many other industrialized countries, following the US model, set up their own B-vitamins fortification programs [13,39]. During the last few decades, B-vitamins fortification of grains has also been introduced to developing countries [13]. Nowadays, vitamin fortification has become so widespread that, besides grain fortification, other foods, such as most powdered milk and infant milk [43], have already been fortified with niacin and other B vitamins. Moreover, niacin has also been widely used in meat processing to maintain the bright red color of meat [44]. Although there are no data available concerning the relationships between B-vitamins fortification of a variety of foods and the globally increasing prevalence of obesity and type 2 diabetes, the notable facts are that: (1) compared with formula feeding, breastfeeding is associated with a reduced risk of later overweight and obesity [45-47]; (2) high consumption of processed meat is a strong risk factor for type 2 diabetes [48]; and (3) the global prevalence of obesity and type 2 diabetes showed a trend of spreading from the earliest-fortified countries to the later-fortified countries, whereas non-vitamin-fortified countries, including western developed countries such as Norway, have a low prevalence of obesity and diabetes compared with the earliest-fortified countries, such as the US and Canada [49,50]. The relevant evidence shows that, when the US was experiencing a rapid increase in the prevalence of diabetes in the 1960s (i.e.
more than 20 years after the introduction of B-vitamins fortification), the trend in the incidence of diabetes in Norwegian adults was fairly constant, and even significantly decreased in Norwegian women aged 40-59 years [51]. It is well recognized that physical inactivity also contributes substantially to the global epidemic of obesity and diabetes. A large-scale investigation among US physicians found that sweat-inducing exercise once a week may effectively reduce the risk of diabetes [52]. Although the exact mechanism by which sweat-inducing physical activity protects is unclear, a well-known fact is that water-soluble B vitamins can be eliminated through sweat [53,54]. Our recent study also demonstrated that sauna-induced sweating may effectively eliminate excess nicotinamide from the body and thus reduce the generation of toxic metabolites of nicotinamide [21]. In contrast, excess nicotinamide cannot be effectively eliminated through urine because of its reabsorption by the renal tubules [55]. Therefore, sweating is expected to play an important role in eliminating excess B vitamins from the body. Unfortunately, the modern lifestyle makes the sweat glands less active owing to low physical activity and an increase in time spent in air-conditioned environments. The combination of high B-vitamins intake and low sweat elimination of excess B vitamins may lead to chronic B-vitamins overload. Thus, there is a strong possibility that the worldwide spread of B-vitamins fortification of foods may play a role in the global prevalence of obesity and type 2 diabetes.

The limitations of the current study

Diagnostic criteria are important for estimating the prevalence of diabetes. Because the NHIS survey was redesigned, two changes may have affected the trends. First, the diabetes question was changed. Second, proxy respondents (i.e., household members responding for absent adult members), who tend to under-report disease, were no longer used in the survey [26,27].
The NHIS survey was redesigned in 1997, and since then gestational diabetes has been excluded. All diagnosed cases of diabetes in the US consist of type 2 diabetes (90% to 95%), type 1 diabetes (5% to 10%), gestational diabetes (2% to 5%) and other specific types of diabetes (1% to 2%) [56]. In this case, the estimated prevalence of diabetes should be lower after 1997 than before 1997 because of the exclusion of gestational diabetes. However, the fact is that the prevalence of diabetes has shown a steadily increasing trend since the late 1990s, and is significantly correlated with the per capita niacin consumption both before and after 1997 (Figure 1A). Moreover, before the significant increase in diabetes prevalence that started in the mid-1990s, there had been a sudden increase in the prevalence of obesity in the mid-1980s (Figure 3A). The diabetes prevalence is strongly correlated with the obesity prevalence with a time lag of 16 years. Considering that obesity is a major risk factor for type 2 diabetes, it is unlikely that the correlation between the B-vitamins consumption and the prevalence of diabetes is due to a sampling bias. Ecological studies investigate relationships at the level of the group, rather than at the level of the individual. The inherent limitation of ecological studies is the ecological fallacy, i.e., the data on exposure and disease obtained from populations cannot be linked to individuals. To mitigate this shortcoming, we used country-level food and nutrient disappearance data only from within the US; thus the bias, if any, would at least be uniform for the same population. Also, we addressed the issue from different angles, thereby excluding potential biases due to age or sex. More notably, although this study is an ecological one, the exposure factor analysed here (the B-vitamins consumption) involves every adult inhabitant of the US owing to the mandatory grain fortification.
Moreover, the US standard of flour fortification with niacin is 5.29 mg/100 g, similar to the niacin content of meat, one of the richest sources of niacin (around 3.6 to 8.2 mg/100 g, depending on the particular type of meat product) [57]. In this case, no matter what dietary pattern is chosen, whether a high-grain diet or a high-meat diet, the niacin exposure is essentially similar after the implementation of mandatory fortification. It is important to examine evidence from a variety of sources and to look for congruence between epidemiologic, clinical and laboratory research findings before establishing causality between a dietary factor and a human disease [58]. Although the present study provided only correlative evidence linking the B-vitamins consumption to the prevalence of obesity and type 2 diabetes, the following clinical and laboratory research findings support a likely causal relationship between high-level consumption of the B vitamins and the development of obesity and diabetes: (1) The increase in the risk of obesity and type 2 diabetes associated with high grain intake in the US occurred only after the implementation of B-vitamins fortification, especially after the update of the fortification standards in 1974, whereas traditionally low-fat, high-carbohydrate diets were beneficial for the treatment of diabetes. (2) High intake of meat, a niacin-rich food, increases the risk for diabetes [7,48]. (3) Obesity is a well-known risk factor for type 2 diabetes, and the prevalence of obesity precedes the prevalence of diabetes; the present results showed that the time lag for the prevalence of obesity was much shorter than that for the prevalence of diabetes under the same exposure to the B vitamins. (4) Niacin is well known to induce glucose intolerance and insulin resistance, which are key features of obesity and type 2 diabetes.
(5) The prevalence of obesity and type 2 diabetes has spread in a way similar to the spread of B-vitamins fortification in the world, i.e., from developed countries to developing countries, while the non-vitamin-fortified countries, including developed countries, have been less affected. (6) High-niacin feeding has been demonstrated to induce fatty liver in rats [59], and excess niacin intake may be involved in the development of nonalcoholic fatty liver disease, a disease closely associated with obesity and type 2 diabetes [60]. (7) Niacin is a potent stimulator of appetite and may play a role in the development of obesity [22]. From the findings of this study, it seems that prospective studies are needed to evaluate the possible role of high-level consumption of niacin, thiamin and riboflavin in the prevalence of obesity and type 2 diabetes.
Conclusions

The present study revealed that the increased prevalence of obesity and diabetes in the US in the past 50 years was closely correlated, with distinct time lags, with the increased daily per capita consumption of niacin, thiamin and riboflavin, suggesting that long-term exposure to high levels of these B vitamins may be involved in the increasing prevalence of obesity and diabetes. The present findings, together with the evidence that niacin may induce glucose intolerance, insulin resistance and liver injury, imply the possibility that, among the fortified B vitamins, excess niacin consumption may play a major role in the development of obesity and type 2 diabetes. Since the high-level consumption of niacin in the US is mainly due to the implementation of mandatory grain fortification, it may be of significance to carefully evaluate the long-term safety of food fortification.
Background

The global increase in the prevalence of obesity and diabetes occurred after the worldwide spread of B-vitamins fortification, but whether long-term exposure to high levels of B vitamins plays a role in this is unknown. Our aim was to examine the relationships between B-vitamins consumption and the obesity and diabetes prevalence.

Methods

This population-based ecological study was conducted to examine possible associations between the consumption of the B vitamins and macronutrients and the obesity and diabetes prevalence in the US population, using the per capita consumption data from the US Economic Research Service and the prevalence data from the US Centers for Disease Control and Prevention.

Results

The prevalences of diabetes and adult obesity were highly correlated with the per capita consumption of niacin, thiamin and riboflavin with 26- and 10-year lags, respectively (R² = 0.952, 0.917 and 0.83 for diabetes, respectively, and R² = 0.964, 0.975 and 0.935 for obesity, respectively). The diabetes prevalence increased with the obesity prevalence with a 16-year lag (R² = 0.975). The relationships between the diabetes or obesity prevalence and per capita niacin consumption were similar both in different age groups and in the male and female populations. The prevalence of adult obesity and diabetes was highly correlated with the grain contribution to niacin (R² = 0.925 and 0.901, respectively), with 10- and 26-year lags, respectively. The prevalence of obesity in US adults during 1971-2004 increased in parallel with the increase in carbohydrate consumption with a 10-year lag. The per capita energy and protein consumption positively correlated with the obesity prevalence with a one-year lag. Moreover, there was an 11-year lag relationship between per capita energy and protein consumption and the consumption of niacin, thiamin and riboflavin (R² = 0.932, 0.923 and 0.849 for energy, respectively, and R² = 0.922, 0.878 and 0.787 for protein, respectively).
Conclusions

Long-term exposure to high levels of the B vitamins may be involved in the increased prevalence of obesity and diabetes in the US in the past 50 years. The possible roles of B-vitamins fortification and excess niacin consumption in the increased prevalence of obesity and diabetes are discussed.
Competing interests

The authors declare that they have no competing interests.

Authors' contributions

SSZ contributed to the study conception and design and the data analysis and interpretation. DL, YMZ and WPS contributed to the collection and analysis of the data and drafting the manuscript. QGL participated in the design of the study and performed the statistical analysis. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/10/746/prepub
Acknowledgements

This study was funded by the National Natural Science Foundation of China (No. 30570665), the Foundation of Dalian Technology Bureau (No. 2008E13SF182) and the Foundation of the Key Laboratory of the Education Department of Liaoning Province (No. 2009S005).
BMC Public Health. 2010 Dec 2; 10:746
Introduction

Smoke inhalation injury is a serious threat to victims of house fires, explosions, and other disasters involving fire and smoke. This type of injury alone can be lethal, as shown in the Cocoanut Grove fire, in which 492 people died, most without burns [1]. In the Rhode Island nightclub fire, 95 people died (out of 350 victims and survivors of this tragedy), and 187 people were treated for smoke inhalation lung injury and burns [2]. Autopsy series from fire victims show sloughed mucosal cells and a collection of proteinaceous debris obstructing the airways [3]. There are multiple case reports in adults and children of airway obstruction due to these tracheobronchial casts [3]. The airway microenvironment is significantly altered by smoke inhalation, with lung parenchymal damage occurring because of surfactant denaturation, loss of endothelial and epithelial barrier functions, and influx of inflammatory cells [4-7]. Previously we demonstrated smoke-induced mucus overproduction in a small-animal model [8]. In the healthy lung, MUC1 and MUC4 are expressed on the apical surface of the respiratory epithelium. MUC5AC and MUC2 are expressed in the goblet cells of the superficial airway epithelium, whereas MUC5B is expressed in the mucous cells of the submucosal glands [9]. Among them, MUC5AC is considered to be the predominant mucin in airway mucus [10]. Although mucus overproduction is one of the characteristics of the response to smoke inhalation airway injury, there is only limited information available on the regulation of mucus secretion in such injuries. c-Jun N-terminal kinase (JNK) activation is required for the in vitro transcriptional up-regulation of MUC5AC in response to tobacco smoke [11]. However, the in vivo activation of JNK in the case of smoke inhalation has not yet been studied.
In the present study, we used our previously established small-animal model of smoke inhalation injury [7] to determine whether the mucin genes are regulated by cotton smoke inhalation, and to test the hypothesis that smoke inhalation induces airway mucus overproduction through activation of the JNK pathway and that treatment with a JNK inhibitor could diminish this overproduction.
Materials and methods

Animal preparation

This study was approved by the Massachusetts General Hospital Subcommittee on Research Animal Care and conducted in compliance with the guidelines of the United States Department of Agriculture Animal Welfare Act and the Public Health Service Policy on Humane Care and Use of Laboratory Animals.

Materials

JNK inhibitor II (SP600125) was purchased from Calbiochem (San Diego, CA). The dose was chosen on the basis of previous in vivo studies showing that 30 mg/kg inhibited JNK activity [12,13]. The mice were treated with SP600125 in dimethyl sulfoxide (Sigma Chemical, St. Louis, MO) or an equivalent amount of dimethyl sulfoxide without inhibitor 1 h after injury.

Experimental animals

We used a modification of the established rodent model of smoke inhalation injury as described previously [8]. Male C57BL/6 mice, either wild-type (JNK1+/+) or JNK1-/- mice that had been backcrossed for five generations on a C57BL/6 background, weighing between 20 and 25 g, were obtained from Jackson Laboratories (Bar Harbor, ME). The pJNK1-/- construct had been transfected into W9.5 embryonic stem (ES) cells; chimeras were generated by injecting these ES cells into C57BL/6 (B6) blastocysts, and heterozygotes (+/-) were intercrossed to generate homozygous mutant mice (-/-) [14]. Animals were orally intubated with a polyethylene catheter under general anesthesia with intraperitoneal ketamine (50 mg/kg) and diazepam (5 mg/kg) while spontaneously breathing room air, and were then placed in the smoke chamber for 15 min. Following 15 min of smoke inhalation, animals were allowed to recover and were extubated 10 min after smoke exposure; intubation lasted 30 min in total. One hour after smoke exposure, some animals received a subcutaneous injection of the JNK inhibitor or dimethyl sulfoxide (DMSO) as a vehicle.
Experimental design

Wild-type mice, JNK1-/- mice, and wild-type mice injected with the JNK inhibitor were each assigned to one of three groups: the first was the control group; mice in the second group were subjected to cotton smoke inhalation for 15 min followed by a 4-h recovery period; and mice in the third group were subjected to cotton smoke inhalation for 15 min followed by a 24-h recovery period. A JNK inhibitor dose of 30 mg/kg was selected on the basis of previous in vivo studies showing that this dose inhibits JNK activity [8,15,16]. Four and twenty-four hours after exposure, the animals were anesthetized and killed by exsanguination. The mice in the control group were killed 4 h after extubation, and their lungs were removed en bloc. The control group mice were divided into 3 subgroups: wild-type, WC; JNK1-/-, JKOC; and wild-type administered the JNK inhibitor, JIC. Similarly, the mice subjected to 15 min of smoke inhalation followed by a 4-h recovery period were divided into 3 subgroups: wild-type, WS4; JNK1-/-, JKOC4; and wild-type administered the JNK inhibitor, JIS4. The mice subjected to 15 min of smoke inhalation followed by a 24-h recovery period were also divided into 3 subgroups: wild-type, WS24; JNK1-/-, JKOC24; and wild-type administered the JNK inhibitor, JIS24. Seven mice were assigned to each group, for a total of 63 mice.

Western blot analysis

For determination of MUC1, MUC4, MUC5AC, and JNK protein expression, Western blot analysis was performed with MUC1 (Abcam, Cambridge, UK), MUC4 (Invitrogen, Carlsbad, CA), MUC5AC and JNK antibodies (Santa Cruz Biotechnology, Santa Cruz, CA, and Cell Signaling Technology, Beverly, MA). Blots were developed by enhanced chemiluminescence (NEN Life Science Products, Boston, MA).
Assessment of mucus

Paraffin-embedded samples were sectioned at 5 μm and stained with Alcian blue (AB) at pH 2.5 and periodic acid-Schiff (PAS) for the localization of acidic and neutral mucin distribution in the airway epithelium of control mice (anesthetized and intubated for 30 min while spontaneously breathing room air without smoke exposure) and of mice with smoke injury (anesthetized, intubated, and exposed to smoke for 15 min). Both wild-type and JNK1-/- mice were allowed to recover from smoke inhalation and were killed 4 h or 24 h after exposure; intubation lasted 30 min in both groups. For quantitative analysis of airway mucus secretion, all histological slides of the left lung were randomly sorted and masked before observation. The quantity of mucin production in the airway was assessed by measuring the percentage of PAS-positive cells in the airway epithelium. The numbers of PAS-positive cells were counted on longitudinal lung sections of the proximal to distal airways. Four randomly selected regions were evaluated on each section, two segments of the proximal airway and two segments of the distal airway. A minimum of 100 sequential airway epithelial cells were counted in each region, and the number of PAS-positive cells per total epithelial cells was determined for each region. These regional values were then averaged to give a final PAS score per animal. For quantitation of airway obstruction, each slide was systematically scanned at ×4 objective magnification, and each cross-sectioned airway present was given a score of 0-100% as an estimate of the degree of luminal obstruction. A mean obstruction score was determined for each animal and then for each group [6].

Pathology scoring

The pathological changes were compared using a modification of a previously described scoring system for pathological changes after smoke inhalation [8].
Briefly, we examined four fields (2 peripheral and 2 central) on each slide for five injurious variables: 1) airway epithelial shedding, 2) airway epithelial edema, 3) increased cellularity in the airway and parenchymal tissues, 4) increased peribronchial and perivascular cuff area, and 5) alveolar atelectasis. The total lung injury score was calculated as the sum of each variable (0 for none or normal to 3 for severe). Lung immunohistochemistry The paraffin sections were cut to 5 μm in thickness, mounted on silane-coated glass slides, and stored for 1 h at 60°C. The slides were deparaffinized with xylene, three times for 5 min each, and were rehydrated with graded alcohols (100, 95, 70 and 50%) for 5 min each. After washing with 0.01 M phosphate buffered saline (PBS) for 5 min, the sections were digested with Proteinase K (20 μg/ml) at room temperature for 20 min and washed twice with distilled water for 2 min each. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide (H2O2) in PBS for 5 min; the slides were then rinsed twice with PBS for 5 min. Sections for positive controls were treated with 3% H2O2 and then washed twice with PBS. For negative controls, sections were covered with reaction buffer alone and incubated under the same conditions. The sections were incubated for 1.5 h with a monoclonal antibody against MUC5AC (Santa Cruz Biotechnology, Santa Cruz, CA) at a concentration of 10 μg/ml. The sections were then incubated with biotinylated goat anti-mouse Ig antibody as the secondary antibody, and the antibody reactions were visualized using diaminobenzidine as chromogen (DAKO, Carpinteria, CA). For microscopic observation, the sections were counterstained lightly with hematoxylin for 1 min. The quantity of MUC5AC protein production in the airway was assessed by measuring the percentage of MUC5AC-positive cells in the airway epithelium.
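The two quantification rules described above (a per-animal PAS score averaged over four regions of at least 100 cells each, and a five-variable lung injury score with grades 0-3) can be sketched as follows. Function names and the input layout are illustrative assumptions, not the original analysis code.

```python
def pas_score(regions):
    """Average percentage of PAS-positive cells over the sampled regions.

    regions: list of (positive_count, total_count) pairs, one per region;
    the protocol requires at least 100 sequential epithelial cells per region.
    """
    for positive, total in regions:
        if total < 100 or positive > total:
            raise ValueError("need >= 100 cells per region and positive <= total")
    return sum(100.0 * p / t for p, t in regions) / len(regions)

def lung_injury_score(grades):
    """Sum the five injurious variables, each graded 0 (normal) to 3 (severe)."""
    if len(grades) != 5 or any(g not in (0, 1, 2, 3) for g in grades):
        raise ValueError("expect exactly five grades in the range 0-3")
    return sum(grades)
```

For example, four regions with 30, 10, 20 and 40 positive cells out of 100 each give a final PAS score of 25%.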
The method for evaluating the numbers of MUC5AC-positive cells was the same as that used for PAS-positive cell counting. Quantitative real-time PCR Total RNA was isolated by the phenol and guanidine isothiocyanate method using Trizol ® (Invitrogen, Carlsbad, CA). Genomic DNA was removed from the extracted total RNA using the RNeasy kit (Qiagen, Austin, TX). cDNA was synthesized from equal amounts of mRNA (2 μg), using Superscript III reverse transcriptase (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. The primer sequences for the mucin genes were as follows: MUC5AC , 5'-ACTGTTACTATGCGATGTGTAGCCA-3' (sense) and 5'-GAGGAAACACATTGCACCGA-3' (antisense) (GenBank accession no. NM_010844 ); MUC5B , 5'-GAACGCCATATTCCCGACACT-3' (sense) and 5'-GCCCCAGGTGGAGGGACATAA-3' (antisense) (GenBank accession no. NM_028801 ); MUC2 , 5'-ACGATGCCTACACCAAGGTC-3' (sense) and 5'-CCATGTTATTGGGGCATTTC-3' (antisense) (GenBank accession no. NM_023566 ); MUC6 , 5'-CACACAACCAACACCAATTC-3' (sense) and 5'-TGAGAAAGGTAGGAAGTAGAGG-3' (antisense) (GenBank accession no. NM_181729 ); GAPDH , 5'-CAACTACATGGTCTACATGTTC-3' (sense) and 5'-CGCCAGTAGACTCCACGAC-3' (antisense) (GenBank accession no. NC_000072 ). Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) was performed on the samples by using Applied Biosystems Assays-On-Demand primer/probe sets and TaqMan Universal PCR Mix (PE Applied Biosystems, Foster City, CA). The samples were analyzed on the Stratagene MX3000P sequence detection system under the following conditions: 94°C for 3 min, 45 cycles at 94°C for 30 s, 50°C. The fold change was determined as described in the Applied Biosystems manufacturer's instructions (4371095 Rev A, PE Applied Biosystems, Foster City, CA). Briefly, the average crossing threshold (CT) of the target genes for each group minus the average housekeeping gene (GAPDH) CT was used to determine the relative expression (ΔCT).
The average ΔCT of the experimental animals (smoke inhalation) was subtracted from the average control (intubation only) ΔCT to determine the ΔΔCT. The ΔΔCT was then used in the formula 2^ΔΔCT to determine the fold change in mRNA expression. The upper and lower limits of fold change were determined by taking the averaged standard deviations of each experimental group through the above calculations [ 17 , 18 ]. Immunofluorescence Paraffin-embedded lung tissue samples were de-waxed in xylene twice for 5 min each time, rehydrated in an ethanol series (100-70%) for 3 min each, and then rehydrated in phosphate-buffered saline (PBS) for 30 min. The slide rack was transferred into 200 ml of pre-warmed (94-96°C) target retrieval solution (DAKO, Carpinteria, CA). Following antigen retrieval, the sections were washed three times with PBS, blocked in 4% skimmed milk for 1 h, and then stained according to the manufacturer's recommendations, with the following modifications. Sections were incubated with the primary antibody against pJNK (1:400, Cell Signaling Technology, Beverly, MA) at 4°C overnight and with the secondary antibody, Alexa488-conjugated goat anti-mouse IgG1 (1:2000, Invitrogen, Carlsbad, CA), for 60 min prior to viewing with a Nikon Eclipse E600 microscope using an NCF Fluor 40 objective lens. Nuclei were visualized by 4',6-diamidine-2'-phenylindole, dihydrochloride (DAPI) staining. Statistical analysis Analyses were performed using SPSS software (version 13.0). For comparisons between groups, analysis of variance (ANOVA) was used, followed by multiple comparisons with Scheffé's test and Bonferroni post hoc analysis. Significance was set at P < 0.05. All values are expressed as means ± SE.
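The comparative-CT calculation described above can be written out explicitly. This is a minimal sketch following the text's sign convention (ΔΔCT = control ΔCT minus experimental ΔCT), so an up-regulated gene, whose CT falls, yields a fold change greater than 1.

```python
def delta_ct(target_cts, gapdh_cts):
    """ΔCT = mean crossing threshold of the target gene minus mean GAPDH CT."""
    return sum(target_cts) / len(target_cts) - sum(gapdh_cts) / len(gapdh_cts)

def fold_change(control_dct, experimental_dct):
    """ΔΔCT = control ΔCT - experimental ΔCT; fold change = 2**ΔΔCT."""
    return 2.0 ** (control_dct - experimental_dct)
```

For example, an experimental ΔCT two cycles below the control ΔCT (ΔΔCT = 2) corresponds to a 4-fold increase in mRNA.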
Results Pathologic score and airway obstruction Fifteen minutes of smoke inhalation increased the pathologic score in wild-type mice at both 4 h and 24 h of recovery compared with control. The pathological scores 4 h and 24 h after smoke inhalation were significantly decreased by the JNK inhibitor or by JNK1 deletion. Although the score in wild-type mice was lower after 24 h of recovery than after 4 h, the difference did not reach statistical significance (Table 1 ). Mucous plugging was assessed by periodic acid-Schiff (PAS) staining. The average percentage of airway obstruction with mucous plugging was decreased in JNK inhibitor-treated and JNK1 -/- mice. Although there was a trend toward less obstruction in JNK1 -/- mice than in JNK inhibitor-treated mice, the difference did not reach statistical significance (Table 1 ). Smoke-induced mucus production in the airway of mice through JNK activation Since smoke inhalation during fires is associated with mucus hypersecretion, we evaluated mucin secretion in the airways of mice by using the PAS stain. The PAS stain is mainly used for staining structures containing a high proportion of carbohydrate macromolecules (glycogen, glycoprotein, and proteoglycans), typically found in mucus. Four and twenty-four hours after smoke inhalation, the wild-type mice clearly showed increased numbers of PAS-stained cells in their airways (Figure 1 ). We observed minimal or no PAS staining in the mice in the control group, JNK1 KO group, and JNK inhibitor group. Semi-quantitative scale values for the percentage of PAS-positive cells were significantly increased in the WS4 and WS24 mice compared with the WC, JIC, and JKOC mice (Table 1 ). Mucin gene and protein expression MUC1 and MUC4 are important membrane-bound mucins. These mucins generate the sol layer of mucus. In the present smoke inhalation mouse model, we observed no difference in MUC1 and MUC4 protein expression between mice in the control and smoke inhalation groups (Figure 2 ).
Gel-forming mucin genes such as MUC2, MUC5AC, MUC5B, and MUC6 were evaluated by quantitative PCR. Only MUC5AC gene expression, which was also evaluated by immunoblotting (Figure 3 ) and immunohistochemistry (Figure 4 ), was found to be increased in the wild-type mice subjected to smoke inhalation. Semi-quantitative scale values for the percentage of MUC5AC-positive cells were significantly increased in the WS4 and WS24 mice compared with the WC, JIC, and JKOC mice (Table 1 ). Smoke-induced activation of JNK Immunoblotting data indicated that pJNK was activated in the mice 4 and 24 h after smoke exposure (Figure 5 ). Immunofluorescence imaging further supported these results by showing that smoke induced the phosphorylation of JNK, especially in the small airway epithelium. Smoke-induced phosphorylation of JNK suggested that this kinase might participate in the induction of MUC5AC gene expression in lung cells. To investigate this possibility, we manipulated JNK activity and assessed the effects of this manipulation on the responsiveness of MUC5AC to smoke. In JNK1 -/- mice and in mice injected with the JNK inhibitor SP600125, both MUC5AC protein expression and JNK activity were attenuated (Figure 5 ).
Discussion Airway mucus production is observed in burn trauma victims [ 19 ] and also in a combined burn and smoke inhalation injury model [ 6 ], but the mechanism by which smoke damages the airway remains unclear. In our mouse model of smoke inhalation injury, we found that smoke-induced mucus overproduction was associated with an increase in epithelial MUC5AC protein expression and was dependent on activation of the JNK pathway. Four and twenty-four hours after exposure to smoke from burning cotton, we observed that MUC5AC mRNA levels were elevated in the mouse lungs and that MUC5AC protein was expressed predominantly in the surface cells of the mouse airway. This elevated expression was abrogated by JNK1 mutation and by the JNK inhibitor, indicating the dependence of MUC5AC expression on JNK activity. JNK activation was prominent in the airway epithelial cells (Figure 5 ). Although the JNK inhibitor was introduced 1 h after smoke inhalation injury, we still observed a decrease in mucus production. These results suggest that the JNK pathway may be a target for regulating mucus overproduction in smoke inhalation injury. In the present study, MUC5AC protein expression was increased within 4 h after 15 min of smoke inhalation, and the expression was sustained after 24 h of recovery. Consistent with the present study, MUC5AC can be induced within 24 h of inflammatory or bacterial stimulation. Intratracheal instillation of IL-13 elicited a marked induction of MUC5AC mRNA within 24 h in wild-type mouse lung [ 20 ]. Up-regulation of MUC5AC mucin transcription was induced by 7 h of incubation with Streptococcus pneumoniae [ 21 ]. Twelve hours of incubation with human neutrophil peptide-1 or lipopolysaccharide caused an increase in MUC5AC mRNA levels [ 22 ]. However, the time course of MUC5AC up-regulation can differ depending on the stimulus.
In a murine asthma model, the airway MUC5AC gene was over-expressed 24 h after ovalbumin sensitization [ 23 ]. In the present mouse model of smoke inhalation, MUC5AC was the predominant gel-forming mucin gene expressed. We observed no differences in MUC5B, MUC2, or MUC6 mRNA expression between mice in the control and the smoke injury groups (data not shown). The membrane-associated mucins, MUC1 and MUC4, were highly expressed in both the control and smoke inhalation group mice. MUC5AC gene expression was increased 4 h after smoke exposure and remained elevated throughout the 24-h recovery period. This suggests that even a short smoke inhalation exposure may cause mucus overproduction that persists for more than 24 h after the initial exposure. Hence, we concluded that MUC5AC can be a potential target for reducing mucus overproduction after smoke inhalation injuries.
Conclusions In this study, we showed that MUC5AC protein overexpression in response to cotton smoke inhalation is tightly regulated via the JNK signaling pathway. These findings suggest that smoke inhalation can cause the overall up-regulation of MUC5AC production by JNK activation in bronchial mucosal cells. These findings can contribute to the development of new therapeutic strategies to treat smoke inhalation injuries.
Background Increased mucus secretion is one of the important characteristics of the response to smoke inhalation injuries. We hypothesized that gel-forming mucins may contribute to the increased mucus production in a smoke inhalation injury. We investigated the role of c-Jun N-terminal kinase (JNK) in modulating smoke-induced mucus secretion. Methods We intubated mice and exposed them to smoke from burning cotton for 15 min. Their lungs were then isolated 4 and 24 h after inhalation injury. Three groups of mice were subjected to the smoke inhalation injury: (1) wild-type (WT) mice, (2) mice lacking JNK1 (JNK1-/- mice), and (3) WT mice administered a JNK inhibitor. The JNK inhibitor (SP-600125) was injected into the mice 1 h after injury. Results Smoke exposure caused an increase in the production of mucus in the airway epithelium of the mice along with an increase in MUC5AC gene and protein expression, while the expression of MUC5B was not increased compared with control. We found increased MUC5AC protein expression in the airway epithelium of the WT mice both 4 and 24 h after smoke inhalation injury. However, the overproduction of mucus and the increased MUC5AC protein expression induced by smoke inhalation were suppressed in the JNK inhibitor-treated mice and the JNK1 knockout mice. Smoke exposure did not alter the expression of MUC1 and MUC4 proteins in any of the 3 groups compared with control. Conclusion An increase in epithelial MUC5AC protein expression is associated with the overproduction of mucus in smoke inhalation injury, and this expression depends on JNK1 signaling.
Abbreviations JNK: c-Jun N-terminal kinase; DMSO: Dimethyl sulfoxide; WT: wild-type; AB: Alcian blue; PAS: periodic acid-Schiff; QRT-PCR: Quantitative real-time reverse transcription polymerase chain reaction; CT: crossing threshold; GAPDH: glyceraldehyde-3-phosphate dehydrogenase; PBS: phosphate buffered saline; DAPI: 4',6-diamidine-2'-phenylindole, dihydrochloride; ANOVA: Analysis of variance. Competing interests The authors declare that they have no competing interests. Authors' contributions WIC was responsible for carrying out the experiments, for data analysis, and for drafting this manuscript; KYK was responsible for the analysis and design of the histologic study; OS oversaw the animal experiments and instructed WIC in their implementation; DAQ and CAH are experts in sepsis experiments and assisted in the experimental design and in data analysis and interpretation. All authors contributed to the drafting and revision of the manuscript.
Acknowledgements This study was supported by funds from Shriners Hospital, Boston (#8620) and the Susannah Wood Foundation (CAH).
CC BY
Respir Res. 2010 Dec 7; 11(1):172
PMC3014902
21143860
Background Primary ciliary dyskinesia (PCD; MIM #242650) is a multisystem disease characterized by recurrent respiratory tract infections, sinusitis, bronchiectasis and male sub-fertility; in about half of patients it is associated with situs inversus (Kartagener syndrome, KS; MIM #244400), resulting from the randomization of body symmetry (for clarity, we will refer to PCD families without s.i. as CDO, ciliary dyskinesia only). The complex PCD phenotype is caused by the impaired motility of respiratory cilia, embryonic node cilia and sperm tails, due to ultrastructural defects of these structures [ 1 ]. Transmission electron microscopy detects various structural aberrations of the axonemal ultrastructure in over 80% of the patients [ 2 ]. The most commonly reported defects involve absence or shortening of the outer (ODA) or inner (IDA) dynein arms, molecular motor complexes composed of several heavy, intermediate and light dynein chains encoded by a number of genes dispersed throughout the genome. The prevalence of PCD is estimated at 1 in 20,000 live births (1/12,500 to 1/30,000), with the prevalence of KS being approximately two times lower [ 1 ]. PCD is usually inherited as an autosomal recessive trait, although pedigrees showing autosomal dominant or X-linked modes of inheritance have also been reported [ 3 - 6 ]. The complexity of the ciliary ultrastructure and the broad variety of cilia defects suggest genetic heterogeneity of the disease. Indeed, the genetics of PCD is very complex, as witnessed by numerous linkage studies, which have indicated several genomic regions potentially involved in PCD pathogenesis [e.g. [ 7 - 10 ]]; for reviews see [ 11 , 12 ]. Among the several genes confirmed to be directly involved in PCD pathogenesis, the majority of mutations have been found in just two: DNAI1 (9p13.3) and DNAH5 (5p15.2), encoding intermediate and heavy chains of the axonemal dynein, respectively [ 13 - 21 ].
Mutations in other genes, coding for proteins involved in the axonemal ultrastructure ( DNAH11 , DNAI2 , TXNDC3 , RSPH9 , RSPH4A ) or assembly ( KTU , LRRC50 ), have been reported in single PCD families only, and mutations in the RPGR gene have been reported in rare cases of PCD associated with X-linked retinitis pigmentosa (reviewed in [ 12 ]; see also [ 4 , 6 , 22 - 25 ]). Mutations in DNAI1 and DNAH5 , both associated with the ODA defect phenotype, were collectively estimated to account for almost 40% (~28% and 10% for DNAH5 and DNAI1 , respectively) of PCD cases [ 2 ]. Recently, other authors [ 20 ] reported a much lower involvement of DNAI1 (4%). Here we report the results of DNAI1 screening performed in a large group of predominantly Polish PCD patients, the first large cohort of PCD patients of Slavic origin; the possibility that large exonic deletions account for monoallelic mutations was also explored. Population specificity of the DNAI1 mutation spectrum is discussed in light of the SNP haplotype background of the mutations.
Materials and methods Patients A group of 157 PCD families included 185 affected individuals; parents and/or non-affected siblings were available in 115 families. Seventy-four of the families were classified as KS (if at least one affected member displayed s.i.); the remaining 83 were classified as CDO. At least one of the criteria listed in Table 1 had to be fulfilled to include a patient in the PCD cohort. All but six families (Czech/Slovakian) were of Polish origin. No known parental consanguinity was reported in the families (but such a possibility was not formally excluded). PCR amplification, SSCP/heteroduplex analysis and allele-specific hybridization Genomic DNA was isolated from peripheral blood lymphocytes using a standard salting-out extraction procedure. A specific primer pair was designed for each of the 20 DNAI1 exons, the 5' and 3' UTR regions, and for five intronic SNPs; the length of each amplicon was < 300 bp. For the SSCP analysis, PCR-amplified segments were denatured and separated in 7 or 8% polyacrylamide (29:1) in 0.5x or 1x TBE; gels (optionally with ~2 M urea and 10% glycerol) were run at 8-10 W for 20-40 h at room temperature or 4°C. Primer sequences, PCR conditions and detailed conditions used to separate each of the analyzed fragments are available from the authors upon request. The genotyping of SNPs and of newly found mutations was performed using dynamic ASO (allele-specific oligonucleotide) hybridization [ 26 ]. Sequence analysis The nucleotide changes underlying all the detected SSCP migration variants were resolved by direct sequencing of the PCR products (BigDyeTerminator v3.1 on an ABI Prism 3130XL Analyzer, Applied Biosystems); trace files were checked and edited using FinchTV 1.3.1 (Geospiza Inc.). Sequences were evaluated manually using Chromas 1.45 software and the FASTA sequence comparison algorithm ( http://fasta.bioch.virginia.edu/fasta_www2 ).
The reference genomic sequence was ENSG00000122735 ( http://www.ensembl.org ) or NG_008127.1 ( http://www.ncbi.nlm.nih.gov ); the exon boundaries were those of the 699-amino-acid DNAI1-101 transcript ENST00000242317 ( http://www.ensembl.org ); the numbering of mutated nucleotide positions used throughout the text is that of the cDNA. SNP-haplotype analysis and genetic stratification of the families Seven intragenic SNPs (rs11547035, rs4879792, rs2274591, rs3793472, rs11793196, rs9657620, rs11999046) were genotyped, and the parental origin of the two alleles of each SNP was determined assuming, wherever possible, no recombination among the sites. The family-based information on SNP haplotypes was used to assess the haplotype variability in all the patients. The consistency of the disease cosegregation with the haplotype variants was examined in 79 families where DNAs from the proband's siblings and parents were available. MLPA analysis of the DNAI1 gene A subset of PCD patients (~80 families) was analyzed for the potential presence of large exonic mutation(s), using a commercially available kit for multiplex ligation-dependent probe amplification (MLPA) in the DNAI1 gene (P237-DNAI1; MRC Holland). The procedure was performed according to the manufacturer's instructions (MRC Holland); briefly, hybridization of the multiple SALSA-MLPA probes (20 specific probe pairs targeting all DNAI1 exons) to the total genomic DNA sample (50 ng per reaction) was performed at 60°C, followed by ligation at 54°C and PCR with universal, FAM-labeled MLPA primers. The resulting amplicons were separated on an ABI-Prism-3130XL Analyzer; peaks were analyzed using PeakScanner v1.0 software (Applied Biosystems).
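Conceptually, the MLPA read-out is a dosage-ratio test: each exon's peak height, normalized within the sample, is compared with the corresponding quantity in control DNA, and a deviation beyond ~20% suggests a copy-number change (a heterozygous deletion is expected near a ratio of 0.5). The sketch below illustrates this logic only; the normalization scheme and function names are assumptions, not the PeakScanner workflow.

```python
def dosage_ratios(sample_peaks, control_peaks):
    """Per-exon dosage: within-sample normalized peak height divided by
    the corresponding normalized peak height in the control DNA."""
    s_mean = sum(sample_peaks.values()) / len(sample_peaks)
    c_mean = sum(control_peaks.values()) / len(control_peaks)
    return {exon: (sample_peaks[exon] / s_mean) / (control_peaks[exon] / c_mean)
            for exon in sample_peaks}

def flag_deletions(ratios, tolerance=0.20):
    """Return exons whose dosage deviates from 1.0 by more than the tolerance."""
    return sorted(exon for exon, ratio in ratios.items()
                  if abs(ratio - 1.0) > tolerance)
```

With five probes of equal control heights and one sample exon at half height, only that exon is flagged; the other ratios stay within the 20% band used in the study.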
Results Characteristics of the detected variants SSCP screening of the entire coding region of DNAI1 was performed in patients from 108 PCD families; a systematic search for mutations was not performed in twenty-one families where the segregation of the SNP haplotype was inconsistent with that of the disease, or in twenty-eight families where mutations were identified in other PCD-related genes [EZ, unpublished data]. SSCP analysis revealed eight sequence variants. Two of them, in exons 1 and 11 (22G > T; A8S and 1003G > A; V335I, respectively), were frequent SNPs (rs11547035 and rs11793196), present at high frequencies in the general population. The remaining six SSCP variants represented three previously described PCD mutations and three changes never reported before (Figure 1 ). The T insertion at position +2 of intron 1 (IVS1+2-3insT), the most frequent mutation described until now in PCD patients, is known to affect a donor splice site [ 13 ]. It results in the retention of 132 bp of intron 1 in the mRNA and the premature termination of translation at amino acid position 25. In our study, the IVS1+2-3insT mutation was found on eleven independent chromosomes. It was homozygous in three families, accompanied by another mutation in three families, and was the only mutation found in two families. Another previously reported mutation, 1612G > A in exon 17, resulting in the missense substitution A538T [ 18 ], was found on eight chromosomes; it was homozygous in three families, and in two others it was accompanied by IVS1+2-3insT. The third of the previously reported mutations, 1543G > A in exon 16 (G515S) [ 15 ], was found on one chromosome in a single patient. The second chromosome of that patient carried a 1538T > C transition in exon 16 (L513P), a change never reported before. Another new mutation, a 1163G > A transition in exon 13 (C388Y), was found in a single patient (a compound heterozygote, with IVS1+2-3insT).
The third new mutation, a G > A transition 245 bp downstream from the STOP codon, was found on one PCD chromosome; no change on the second chromosome was identified in this patient. Transmission electron microscopy data were available for two PCD families with the homozygous mutation A538T/A538T and for two with the compound mutations IVS1+2-3insT/A538T. In all these cases, the absence of ODA and/or IDA was noted; in three of the families the absence of dynein arms was accompanied by different, non-specific defects of microtubular organization (see Table 2 ). Phenotype penetrance in the families with homozygous or compound mutations was consistent with the recessive mode of PCD inheritance (family members who carried only one mutated chromosome did not exhibit any clinical symptoms); the pedigrees of the families harboring the newly described mutations are presented in Figure 1 . Sequence changes that resulted in a STOP or indel mutation, or affected the two most conserved donor or acceptor consensus splice site positions, were directly assumed to represent causative PCD mutations. In the case of the new missense variants (L513P and C388Y), the possibility that the change represented a non-pathological polymorphism was dismissed following a number of analyses. Interrogation of the NCBI database for human single nucleotide polymorphisms (build 131; http://www.ncbi.nlm.nih.gov/SNP ) indicated that no SNPs were reported at the respective gene positions (1163G in exon 13, and 1538T in exon 16). ASO screening of the control population (~200 unrelated chromosomes from healthy Polish individuals) also did not reveal the mutated alleles. Comparison with the DNAI1 homologues from 9 Eutherian mammals ( P. troglodytes, P. pygmaeus, G. gorilla, M. mulatta, M. musculus, R. norvegicus, B. taurus, C. familiaris, E. caballus ; http://www.ensembl.org ) indicated 100% conservation of the DNA and amino acid sequence at these two positions.
This is consistent with the location of the respective amino acids within the DNAI1 protein: the 1163G > A substitution alters the C388 codon within the second of five highly conserved WD repeats (WD2) [ 13 ], and the 1538T > C in exon 16 changes the L513 codon in the highly conserved inter-repeat region, between WD3 and WD4 (Figure 2 ). The effect of the amino acid changes on protein stability was examined using the SNPs3D online software ( http://www.snps3d.org ); an SVM (Support Vector Machine) value smaller than -1.0 was assumed to indicate a deleterious effect of the amino acid substitution on protein stability [ 27 ]. SVM scores obtained for C388Y and L513P were -2.74 and -2.17, respectively; of note, negative SVM scores (-2.50 and -1.35) were also obtained for the two previously reported missense mutations, G515S and A538T. Based on all the above observations we tentatively assumed that the newly found missense changes in exons 13 and 16 represented disease-causing mutations. The causative role of the +245G > A transition in the 3'UTR region of DNAI1 was less evident. This substitution was not found in the SNP database or in the analyzed control group, but comparison of the 3'UTR region in ten different species indicated low conservation of the sequence position in question. The variant was therefore analyzed in the context of sequence conservation in this region among different protein-coding genes. The 3' regulatory regions are rich in regulatory elements important for the process of mRNA 3'-end maturation. Although the sequence conservation and length of these motifs are not very high, some common features have been described [ 28 ]. The most important is the highly conserved polyadenylation signal, AAUAAA, a part of the UCPAS (upstream core polyadenylation signal). Downstream from it is the cutting site (CS), where the pre-mRNA is cut and the poly(A) tail added; it is often preceded by a CA dinucleotide.
The distance of the CS (10-30 bp) from its two flanking segments, the UCPAS and the U/GU-rich downstream core polyadenylation signal (DCPAS), is the most conserved feature of this part of the 3' regulatory region. The G > A transition found in the patient was located 32 bp downstream from the first fully conserved AAUAAA sequence after the stop codon, and within a UUGU sequence that could be a part of the DCPAS, suggesting a possible effect on mRNA polyadenylation (Figure 3 ). However, given the generally poor conservation of the regulatory elements among 3'UTR gene sequences, the importance of this mutation cannot be assessed without expression analyses. In addition, no sequence change on the complementary allele was found. The patient with the 3'UTR mutation was therefore not included in the analysis of DNAI1 mutation prevalence among PCD families. MLPA analysis Direct sequencing of the whole coding sequence (exons, splice sites and UTRs), performed in two unrelated patients with a monoallelic mutation (IVS1+2-3insT), did not reveal any additional sequence change. The presence of large deletions, not detectable by SSCP and/or sequencing, could explain the failure to detect the second mutated allele. To examine whether this was the case, a multiplex ligation-dependent probe amplification (MLPA) analysis of all the DNAI1 exons was performed in a subset of ~80 unrelated patients, including the two with the monoallelic mutation and fifteen with a homozygous whole-length SNP haplotype. The differences in peak heights between the samples and the control DNA from healthy individuals did not exceed 20% (not shown), indicating that no exonic deletion was present in any of the examined patients. Prevalence of DNAI1 mutations among Polish PCD families The disease-associated changes in the DNAI1 sequence were found on 22 non-related chromosomes from twelve families (including two with a monoallelic mutation), which accounts for 8% of the analyzed cohort of 157 PCD families.
Interestingly, when CDO and KS families were considered separately, the proportion of those with DNAI1 mutations was 5% (4/83) for CDO and 10% (8/74) for KS. Due to the small numbers, this difference was not statistically significant (Fisher exact test [SISA], p ~0.09); however, when the proportion of the affected chromosomes (rather than families) harboring DNAI1 mutations was compared, the difference between KS and CDO was statistically significant ( p ~0.008). The prevalence of the IVS1+2-3insT mutation among the 22 mutated chromosomes was 50%, and that of A538T was 36%. To examine the possibility of founder effect(s) being responsible for their distribution in the Polish population, DNAI1 mutations were analyzed in the context of the SNP haplotype background. SNP haplotype background Variants of the 7-position SNP haplotype (flanked by markers located in exon 1 and intron 18 of the DNAI1 gene) were determined in 142 families, including 32 for which linkage of the disease phenotype with DNAI1 was excluded (all chromosomes from these 32 families were considered non-affected). Among the 142 families, 56 had both parents and one (15 families) or more (41 families) children genotyped, in 22 no genotype data were available for one or both parents, and in 64 only a singleton patient was genotyped. Converting the genotype data into haplotypes was aided by the fact that in almost half of the families (66 families, including 41 singleton patients) at least one of the members was homozygous or heterozygous only at a single SNP position, i.e., the haplotype phase could be resolved directly. Of the 142 families, 78 were informative with respect to the parental contribution of the chromosomes. In the remaining families and in the multiply heterozygous singleton patients, determination of the alleles' phase from genotype data was based on the maximum parsimony principle, taking into account the frequency of the unambiguously determined haplotypes and assuming no recombination whenever possible.
In three of the families, an unambiguous solution could not be achieved, and these were excluded from further haplotype analysis. The resulting distribution of the haplotypes among 395 non-related chromosomes (183 non-affected and 212 affected) is given in Figure 4 . Sixteen haplotype variants were distinguished. Their frequency did not differ significantly when the affected and non-affected chromosomes were compared. Eight of the haplotypes occurred at relatively high frequencies (4-21%) in the whole analyzed group of 395 chromosomes; the allelic structure of nine rare haplotypes (frequency ≤ 1%) suggested that they represented recent recombinants of the frequent variants. Only one of these recombination events was detected within the analyzed families; the remaining eight recombinants must have already "circulated" in the population. The most prevalent IVS1+2-3insT mutation (found on eleven independent Polish chromosomes) was always found on the G-g-g-t-G-g-c haplotype (lower-case letters indicate SNPs in introns). This common background is consistent with the mutation's identity by descent (i.b.d.) in all the analyzed chromosomes. Another recurring mutation, A538T, was associated with the G-g-g-t-G-g-g haplotype on seven of the eight independent chromosomes, again indicating their recent common origin. Two chromosomes carrying unidentified mutations (in two unrelated patients with the monoallelic IVS1+2-3insT) had an identical haplotype, G-g-g-t-A-a-g a (g a at the last position of the haplotype denotes the ancestral "g" at rs11999046 linked with the derived "a" 93 nt downstream from rs11999046). The identity of the haplotype background suggests that both families may share the same unidentified complementary mutation. On the other hand, the background haplotypes for IVS1+2-3insT, A538T and the unknown mutation(s) are relatively frequent also among the non-affected chromosomes (10.9%, 5.5% and 4.9%, respectively).
Therefore, the possibility that recurrent mutation events, rather than i.b.d., are responsible for the relatively high frequency of these mutations cannot be excluded. In this context, it should be noted that the 1612G>A mutation on one of the eight chromosomes was associated with a different haplotype, G-g-g-c-G-g-g. The structure of G-g-g-c-G-g-g cannot be explained by a single recombination event between two frequent haplotypes (Figure 4), and possible explanations include: 1) a double recombination or a gene conversion involving the frequent haplotype carrying the founder mutation (or its recombination with a very rare variant); 2) derivation of the less common haplotype from the common one by a mutation at rs3793472; or 3) an independent mutation that occurred on a rare haplotype background. The first scenario would suggest an older age of the mutation, since the probability of a recombination or conversion event increases with time (e.g. [29]). Assuming the genomic average of 44 recombinations per meiosis [30], the average genomic crossover rate is ~10^-8 per bp per generation, i.e. 6 × 10^-5 per 6 kb of the DNAI1 haplotype. One could therefore expect a single recombination event to occur once in ~10^4 generations, or once in ~200,000 years, and the probability of a double recombination event is even lower. Moreover, the DNAI1 region flanked by markers rs11547036 (in exon 1) and rs11793196 (in exon 11), where at least one of the purported recombination events would have to take place, is characterized by a high level of linkage disequilibrium (Figure 5) in the HapMap CEU sample ( http://hapmap.ncbi.nlm.nih.gov ; [31,32]). The second scenario, of a recurrent mutation at rs3793472, would imply the identity of the 1612G>A mutation on both haplotypes (G-g-g-t-G-g-g and G-g-g-c-G-g-g).
However, since identical background haplotypes were also found on unrelated healthy chromosomes (Figure 4), the t>c substitution at rs3793472 would have had to occur independently on chromosomes with and without the A538T mutation. With an average mutation rate of 1-4 × 10^-8 per bp per generation [33,34], this is not very probable. Given that the 1612G>A transition leading to A538T occurred within a CpG dinucleotide, known to mutate 10 times faster than other sequence positions [35], the third scenario, an independent recurrent origin of this mutation, therefore appears the most plausible.
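The back-of-envelope rates underlying the three scenarios can be made explicit. The sketch below uses the figures quoted above (44 crossovers per meiosis, a point-mutation rate of 1-4 × 10^-8 per bp, a ~10-fold CpG acceleration); the genome size of ~3.2 Gb and the generation time of 20 years are assumptions added for illustration, not figures from the text.

```python
# Back-of-envelope rates behind the three scenarios discussed above.
# Assumptions (not from the paper): genome size ~3.2 Gb, 20 years/generation.
GENOME_BP = 3.2e9
CROSSOVERS = 44            # genomic average per meiosis [30]
GEN_YEARS = 20

rate_bp = CROSSOVERS / GENOME_BP      # ~1.4e-8; the text rounds this to 10^-8
rate_6kb = 1e-8 * 6_000               # 6e-5 per generation over the 6-kb haplotype

generations = 1 / rate_6kb            # ~1.7e4; the text rounds to "once in ~10^4"
years = generations * GEN_YEARS       # ~3.3e5 years (~2e5 with the text's rounding)

# Scenario 2 vs. scenario 3: single point-mutation rates per generation
mut_rate = 2.5e-8                     # midpoint of the 1-4e-8 range [33,34]
cpg_rate = 10 * mut_rate              # CpG dinucleotides mutate ~10x faster [35]
```

A single CpG event (scenario 3) is thus an order of magnitude more likely per generation than a single non-CpG event, while scenario 2 would require two independent non-CpG hits at the same position, which is why the recurrent-CpG explanation is favored.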
Discussion Prevalence of DNAI1 mutations among PCD families of various ethnicities The disease-associated DNAI1 mutations were found in 8% of the analyzed Polish PCD families (12/157). This estimate is consistent with the previously reported DNAI1 involvement in 9% of PCD families (16/179) [18]; the earlier, even higher, reported values were based on much smaller study groups [13,16]. On the other hand, DNAI1 mutations were found in only 4% of the 104 PCD families analyzed in another study [20]. The authors suggested that the previously reported involvement of DNAI1 mutations reflected bias in the recruitment of PCD patients through detection of ODA defects. Indeed, the frequency of DNAI1 mutations in the pre-selected PCD subpopulation with documented ODA defects has been shown to be higher (14%) [2,18]. However, our estimate of 8% is based on the total number of Polish families, recruited without any preselection. Similarly, the criticized estimate of 9% [18] was calculated with respect to all 179 PCD families recruited for that study; even if the proportion of families with ODA defects in that cohort was shown to be ~80%, this did not necessarily reflect biased recruitment but rather the frequent presence of ODA defects among PCD patients in general. Therefore, the lowest reported involvement of DNAI1 [20] may reflect other factors, for example ethnic differences in the analyzed cohorts. Of the 104 families analyzed by Failly et al. [20], 101 were Caucasian, with a predominant (3/4) contribution of Swiss (n = 50) or Italian (n = 32) families. The cohort analyzed by Zariwala et al. [18] was ethnically more heterogeneous: 155 of 179 families were Caucasian, and among the 90 families for whom ethnicity data were provided, the majority were German (n = 28), French (n = 23), British (n = 18) and Australian (n = 11); only 6 samples were Italian, and no Swiss samples were reported.
Our study group (n = 157) was predominantly Polish (n = 151); the six families of Czech/Slovak origin belong to populations that are geographically and ethnically very close to Poles (all are West Slavs). Our results indicate that Poles (West Slavs) do not significantly differ from the German, French or British populations when the DNAI1 involvement in PCD pathogenesis is considered [13,16,18]. The low prevalence of DNAI1 mutations among patients of Italian and Swiss origin [20] may either reflect the specificity of these two populations or result from a clinal distribution of DNAI1 mutations, with the frequency gradient running in a South-North rather than a West-East direction. The existence of such gradients in Europe is exemplified by the frequency distribution of the F508del mutation in the CFTR gene [36]. Answering the question whether the differences in DNAI1 involvement are due to possible European clines in the geographical distribution of mutations or to local founder effects will require studying PCD patients from other European populations. Population spectrum of DNAI1 mutations The spectrum of DNAI1 mutations detected to date in all the relevant studies is shown in Table 3. The prevalence of IVS1+2-3insT among the Polish PCD chromosomes harboring DNAI1 mutations (50%, 11/22) is only slightly lower than the respective value based on all the previous reports (56%, 27/48) [13,15,18,20]. The common background of this mutation in all the Polish chromosomes is consistent with their identity by descent. The origin of the recurring IVS1+2-3insT from a common founder has been suggested in earlier studies, based on the shared allele 19CA of the nearby microsatellite D9S1805, located 0.26 Mb upstream of DNAI1 [18]. The relatively high prevalence of A538T (36%, 8/22) appears to be specific for the Polish population; in all the other studies combined, this mutation represented only 4% (2/48) of DNAI1 mutations.
The Polish cohort is the only population in which PCD patients homozygous for alleles other than IVS1+2-3insT were found: three families without reported consanguinity were homozygous for A538T. The high frequency of A538T among Polish patients most likely reflects two phenomena: a common origin (founder mutation) in most families, and an independent mutation event on a different haplotype background in another family. Further studies involving other Eastern-European PCD cohorts would be required to elucidate whether the founder mutation is restricted to the Polish population or characteristic of other Slavic groups. The excess of Polish PCD chromosomes harboring A538T was observed among the KS families; in fact, it is this mutation that contributed most to the DNAI1 involvement being higher in KS than in CDO families. Importantly for diagnostic purposes, A538T is located in exon 17, and the two new mutations detected in this study (C388Y and G515S) in exons 13 and 16, respectively, such that most of the mutant alleles remain clustered in intron 1 and exons 13, 16 and 17, as previously reported [2,18]. Chromosomes harboring mutations in these regions make up 80% of all the PCD chromosomes with reported DNAI1 involvement. Of note, while the rare nonsense mutations or changes introducing a frame shift are distributed along the whole coding sequence, all missense mutations but one (E174L) are found in exons 13, 16 and 17. A question of unidentified DNAI1 mutations Among a total of 38 PCD families with DNAI1 mutations found in different studies, six were "monoallelic", with only one mutation identified despite direct sequencing of the whole coding region [18,20; this study]. In four of these families, the single mutation found was the frequent IVS1+2-3insT. Is it possible that the affected members of these four families were just carriers of the detected mutation (with DNAI1 not being involved in PCD pathogenesis)?
In such a case, the estimate of DNAI1 involvement in PCD pathogenesis would be slightly lower (7%; 34/487). With a disease prevalence of 1/20,000, DNAI1 involvement of ~7-10%, and IVS1+2-3insT prevalence among DNAI1 mutations of ~50%, the chance of picking up an asymptomatic IVS1+2-3insT carrier in the general population is ~1/535-1/450. Given that 487 independent PCD families were analyzed in all the reported studies, one would expect at most one of the patients to be an asymptomatic carrier of IVS1+2-3insT; the observed number of four carriers is higher, although the difference does not reach statistical significance (p = 0.15; Fisher test). Nevertheless, we tentatively assume that DNAI1 is actually involved in PCD pathogenesis in the families with a monoallelic mutation. In that case, the second mutation must have been undetectable by SSCP screening and direct sequencing of the amplified exonic segments. One of the possible explanations, the presence of a long exonic deletion, was excluded, since the MLPA analysis using probes targeting all the DNAI1 exons did not reveal any differences in amplification intensity between the PCD patients and healthy controls. However, deep intronic or extragenic regulatory mutations remain to be searched for. Finally, the possibility that the inheritance of PCD in some families is di- or trigenic cannot be formally excluded, but so far no evidence exists that could substantiate this hypothesis.
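The ~1/535-1/450 carrier estimate quoted above follows from Hardy-Weinberg proportions; the arithmetic can be sketched as follows, assuming random mating and using only the figures given in the text (1/20,000 prevalence, 7-10% DNAI1 involvement, IVS1+2-3insT accounting for ~50% of DNAI1 disease alleles).

```python
from math import sqrt

def carrier_odds(involvement, prevalence=1 / 20_000, mut_share=0.5):
    """Return N such that ~1/N of the general population carries
    IVS1+2-3insT, assuming Hardy-Weinberg proportions and that the
    mutation accounts for `mut_share` of all DNAI1 disease alleles."""
    q = sqrt(prevalence * involvement)   # total DNAI1 disease-allele frequency
    a = mut_share * q                    # IVS1+2-3insT allele frequency
    carrier = 2 * a * (1 - a)            # heterozygous-carrier frequency
    return 1 / carrier

lo = carrier_odds(0.10)   # ~1/450 at 10% DNAI1 involvement
hi = carrier_odds(0.07)   # ~1/535 at 7% involvement

# Expected asymptomatic carriers among 487 unrelated PCD families
expected = 487 / hi, 487 / lo            # ~0.9-1.1, i.e. "at most one"
```

This reproduces the expectation of roughly one asymptomatic carrier among the 487 reported families, against the four actually observed.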
Conclusions The analysis of the Polish PCD patients confirms the large genetic heterogeneity of the disease and indicates that the worldwide involvement of DNAI1 mutations in PCD pathogenesis ranges from 7 to 10% in families not preselected for ODA defects; however, the involvement in specific populations may differ from this global estimate. In the combined PCD cohorts from all studies to date, IVS1+2-3insT remains the most prevalent pathogenetic change in DNAI1 (54% of all the mutations identified worldwide). The increased global prevalence of A538T (14%) is due to the contribution of the Polish cohort, in which the high frequency of this mutation (36%) probably reflects a local (Polish or Slavic) founder effect. The spectrum of mutations detected in the Polish cohort confirms earlier observations of mutations clustering in (or around) exons 1, 13, 16 and 17 of the DNAI1 gene, indicating directions for future diagnostic tests. Finally, with the MLPA results indicating that no large exonic DNAI1 deletions are involved in PCD pathogenesis, the question of the undetected mutations still remains open.
Background Mutations in the DNAI1 gene, encoding a component of the outer dynein arms of the ciliary apparatus, are the second most important genetic cause of primary ciliary dyskinesia (PCD), a genetically heterogeneous recessive disorder with a prevalence of ~1/20,000. The estimates of DNAI1 involvement in PCD pathogenesis differ among the reported studies, ranging from 4% to 10%. Methods The coding sequence of DNAI1 was screened (SSCP analysis and direct sequencing) in a group of PCD patients (157 families, 185 affected individuals), the first large cohort of PCD patients of Slavic origin (mostly Polish) ever studied; multiplex ligation-dependent probe amplification (MLPA) analysis was performed in a subset of ~80 families. Results Three previously reported mutations (IVS1+2-3insT, L513P and A538T) and two novel missense substitutions (C388Y and G515S) were identified in 12 families (i.e. ~8% of non-related Polish PCD families). The structure of the background SNP haplotypes indicated a common origin for each of the two most frequent mutations, IVS1+2-3insT and A538T. MLPA analysis did not reveal any significant differences between patient and control samples. The Polish cohort was compared with all the previously studied PCD groups (a total of 487 families): IVS1+2-3insT remained the most prevalent pathogenetic change in DNAI1 (54% of the mutations identified worldwide), and the increased global prevalence of A538T (14%) was due to the contribution of the Polish cohort. Conclusions The worldwide involvement of DNAI1 mutations in PCD pathogenesis in families not preselected for ODA defects ranges from 7 to 10%; this global estimate, as well as the mutation profile, differs in specific populations. Analysis of the background SNP haplotypes suggests that the increased frequency of chromosomes carrying the A538T mutation in Polish patients may reflect a local (Polish or Slavic) founder effect.
Results of the MLPA analysis indicate that no large exonic deletions are involved in PCD pathogenesis.
Abbreviations ASO: allele-specific oligonucleotide; CDO: ciliary dysfunction only; IDA: inner dynein arms; KS: Kartagener syndrome; MLPA: multiplex ligation-dependent probe amplification; MT: microtubules; ODA: outer dynein arms; PCD: primary ciliary dyskinesia; s.i.: situs inversus Competing interests The authors declare that they have no competing interests. Authors' contributions EZ designed and coordinated the study, performed haplotype analysis and interpretation of data, and drafted the manuscript; BN and KV carried out the majority of the SSCP and MLPA assays and participated in sequence analysis; US, ZB, KH and HP participated in SSCP assays and sequence analysis; ER was responsible for assembling, maintaining and monitoring the sample collection; AP recruited PCD families and provided clinical assessment of the patients; MW conceived the study and participated in its design. All authors read and approved the final manuscript.
Acknowledgements and Funding We gratefully acknowledge the Polish PCD families for contributing blood samples to this study. Informed consent was obtained from all patients or their parents; the research protocol was approved by the Ethics Committee of the Medical University in Poznan. The study was supported by grants from the Polish Scientific Committee: KBN-3PO5E-03824 (EZ), PBZ-KBN122/P05-1 (EZ), NN401-277534 (EZ), NN401-09-5537 (MW); and by ECFP7 grant HEALTH-PROT-GA No 229676 (MW).
CC BY
Respir Res. 2010 Dec 8; 11(1):174
PMC3014903
21143922
Introduction Adipose tissue produces adipokines, proteins that regulate inflammation and metabolism in an autocrine, paracrine, and systemic manner [1]. Adiponectin is an adipokine with systemic anti-inflammatory and insulin-sensitizing effects [1]. Serum adiponectin concentrations are reduced in obesity [2,3]. Adiponectin and all of its known receptors (AdipoR1, AdipoR2, T-cadherin and calreticulin) are expressed on multiple cell types in the lung [4-7]. Adiponectin is also transported from blood into the alveolar lining fluid via the T-cadherin molecule on the endothelium [5]. Various disease states associated with lower serum adiponectin concentrations (such as obesity, asthma, systemic inflammation, and diabetes mellitus [2,3,8,9]) are associated with reduced lung function [10-14]. It is therefore possible that lower serum concentrations of adiponectin are associated with decreased lung function in humans. This hypothesis is supported by a recent study in normal-weight mice with genetic deficiency of systemic adiponectin. These mice demonstrated local (lung) adiponectin deficiency, increased systemic and local inflammation, and "alveolar simplification and/or enlargement due to abnormal post-natal alveolar development" [15]. In this study, we evaluated the association between serum adiponectin concentration and lung function in the prospective Coronary Artery Risk Development in Young Adults (CARDIA) study. We hypothesized that lower serum adiponectin concentrations measured at year 15 after enrollment in the CARDIA study are associated with lower lung function at years 10 and 20 and with the ten-year decline in lung function. We also hypothesized that this effect of serum adiponectin concentrations on lung function is independent of obesity and might be mediated by its systemic anti-inflammatory and insulin-sensitization effects.
Methods In the CARDIA study, 5,115 participants aged 18-30 years were recruited for the baseline (year 0) examination in 1985-86, including approximately equal numbers of black and white participants, men and women, and those with ≤12 and >12 years of education. Subsequently, 3,950 participants (77%) were followed up at the 1995-96 (year 10), 3,672 (72%) at the 2000-2001 (year 15), and 3,549 (69%) at the 2005-2006 (year 20) examinations. The detailed methods, instruments and quality control procedures for the CARDIA study have been described previously [10,16]. The CARDIA study is reviewed annually by the institutional review boards at each participating institution, and participants sign a new informed consent form at every examination. Demographic characteristics, lifestyle habits (e.g. cigarette smoking), physical activity, and medical history were collected by self-report. A diagnosis of asthma was made if the subject, at any of the study visits, self-reported a doctor or nurse diagnosis of asthma and/or reported taking asthma medications (usually based on examination of medicine containers). Spirometry was performed using a Collins Survey 8-liter water-sealed spirometer and an Eagle II Microprocessor (Warren E. Collins, Inc., Braintree, MA) at the year 10 examination and a dry rolling-seal OMI spirometer (Viasys Corp, Loma Linda, CA) at the year 20 examination, adhering to the American Thoracic Society guidelines. A comparability study performed on 25 volunteers at the LDS Hospital (Salt Lake City, UT) demonstrated excellent consistency between the old and new machines; the average difference between the Collins Survey and OMI spirometers was 6 ml for FVC and 21 ml for FEV 1 . The homeostasis model assessment (HOMA) for estimating insulin resistance was calculated as serum glucose (mmol/L) × serum insulin (mU/L)/22.5 [17].
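The HOMA formula above reduces to a one-line function; a minimal sketch follows (the example values are illustrative and are not taken from the CARDIA data).

```python
def homa_ir(glucose_mmol_l: float, insulin_mu_l: float) -> float:
    """Homeostasis model assessment of insulin resistance:
    fasting glucose (mmol/L) x fasting insulin (mU/L) / 22.5 [17]."""
    return glucose_mmol_l * insulin_mu_l / 22.5

# Illustrative values: fasting glucose 5.0 mmol/L, insulin 9.0 mU/L
print(homa_ir(5.0, 9.0))  # 2.0
```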
Serum C-reactive protein (CRP) was measured using a new high-sensitivity enzyme-linked immunosorbent assay at the Department of Pathology, University of Vermont, Burlington, VT, USA [18]. After excluding participants who were pregnant at the year 10, 15 or 20 examinations (n = 16), those with an asthma diagnosis during the 20 years of CARDIA follow-up (n = 408), and those with missing values for serum adiponectin (n = 423), year 20 lung function (n = 119), year 10 lung function (n = 447) or covariates (n = 80), 2,056 participants were included in this study. Overnight fasting blood samples were collected, processed within 90 minutes of blood collection, and stored at -70°C. Total adiponectin was measured in serum by radioimmunoassay at Linco Research, Inc. (St. Louis, MO) using a rabbit polyclonal antibody, with an effective range of 0.2 to 40 mg/L [8]. The correlation between adiponectin concentrations measured in 407 blinded duplicate samples was 0.91, and the coefficient of variation (CV) for the adiponectin assay was 17% (including laboratory measurement error, variation in specimen handling, and freezing and labeling errors). A previous study demonstrated that serum adiponectin shows little circadian variability and limited within-person variation over time [19].
Results Participant characteristics at the year 15 examination At year 15, participants in the highest serum adiponectin quartile were more likely to be white, women, have higher educational attainment, and be former smokers (and less likely to be current smokers), as compared to those in the lowest adiponectin quartile (p < 0.001; Table 1). Consistent with previous reports, BMI, insulin resistance (HOMA) and systemic inflammation (serum CRP levels) were lower in the highest vs. lowest adiponectin quartile (p < 0.001; Table 1) [2,3,8]. Body mass index, HOMA and CRP were also significantly correlated with each other (r BMI, CRP = 0.53, r BMI, HOMA = 0.60 and r HOMA, CRP = 0.41; all p < 0.0001). Year 15 serum adiponectin concentrations were positively associated with year 10 FVC and FEV 1 Year 10 FVC was 81 ml lower in the lowest vs. highest adiponectin quartile (p for trend = 0.005; Table 2, model 1). Similarly, year 10 FEV 1 was 50 ml lower in the lowest vs. highest adiponectin quartile (p for trend = 0.01; Table 2, model 1). Adjustment for either year 15 waist circumference or year 10 BMI instead of year 15 BMI in this model showed results very similar to those observed after adjustment for year 15 BMI (data not shown). However, after additional adjustment for insulin resistance and systemic inflammation, year 10 FVC and FEV 1 were not associated with adiponectin (Table 2, model 2), suggesting that these are possible mechanisms for the adiponectin-lung function association. Year 10 FEV 1 /FVC was not associated with adiponectin in any of the models (Table 2). Year 15 serum adiponectin concentrations were positively associated with year 20 FVC and FEV 1 Year 20 FVC was 82 ml lower in the lowest vs. highest adiponectin quartile (p for trend = 0.01; Table 3, model 1). Similarly, year 20 FEV 1 was 38 ml lower in the lowest vs.
highest adiponectin quartile; this difference showed a trend towards statistical significance (p for trend = 0.09; Table 3, model 1). Adjustment for either year 15 waist circumference or year 20 BMI instead of year 15 BMI in this model showed results very similar to those observed after adjustment for year 15 BMI (data not shown). However, after additional adjustment for insulin resistance and systemic inflammation, year 20 FVC and FEV 1 were not associated with adiponectin (Table 3, model 2). Year 20 FEV 1 /FVC was not associated with adiponectin concentrations (Table 3). Year 15 serum adiponectin concentrations were not associated with the ten-year decline in lung function The ten-year decline in lung function (FVC, FEV 1 or FEV 1 /FVC) was not associated with year 15 serum adiponectin concentrations (Table 4). The rate of decline in lung function between years 10 and 20 was strikingly similar across all adiponectin quartiles (Table 4). Year 15 serum adiponectin concentrations were positively associated with peak lung function in early adulthood Peak FVC in early adulthood (i.e. the highest value among the year 0, 2 or 5 examinations) was 72 ml lower in the lowest vs. highest adiponectin quartile (4459 ml vs. 4531 ml; p for trend = 0.01). Similarly, peak FEV 1 was 46 ml lower in the lowest vs. highest adiponectin quartile, although this difference did not reach statistical significance (p for trend = 0.07). After additional adjustment for insulin resistance and systemic inflammation at year 15, peak FVC and FEV 1 were not associated with adiponectin (p for trend = 0.10 and 0.41, respectively). Peak FEV 1 /FVC was not associated with adiponectin concentrations. Decline in FVC or FEV 1 from peak to year 20 was also not associated with adiponectin. Including lung function at year 10 in estimating peak lung function did not change the observed associations between peak lung function and adiponectin (data not shown).
In addition, there was no statistically significant interaction between serum adiponectin and sex in determining FVC, FEV 1 or FEV 1 /FVC values at either the year 10 or year 20 examinations (p > 0.10). Analyses with logarithmically transformed adiponectin as a continuous variable showed results very similar to those observed with adiponectin analyzed as quartiles (Tables 2, 3 and 4).
Discussion This study showed that serum adiponectin concentrations at the year 15 examination of the CARDIA study were positively associated with FVC and FEV 1 values at both years 10 and 20, and these associations were independent of BMI. Serum adiponectin concentrations were, however, not associated with lung function decline between years 10 and 20. The associations between adiponectin and FVC and FEV 1 at years 10 and 20 were no longer significant when additionally adjusted for insulin resistance and systemic inflammation. Adiponectin is an anti-inflammatory adipokine. It inhibits pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin (IL)-6 and induces anti-inflammatory cytokines such as IL-10 and IL-1 receptor antagonist [20,21]. Adiponectin's insulin-sensitizing effect stimulates glucose utilization and fatty-acid oxidation [22,23]. A murine study recently showed that genetically induced adiponectin deficiency in the lungs of normal-weight mice maintained on a normal diet resulted in increased expression of TNF-α and matrix metalloproteinases (MMP-2 and MMP-12) in alveolar macrophages and "alveolar simplification and/or enlargement" due to abnormal postnatal alveolar development [15]. This murine study suggests that systemic adiponectin, independent of obesity, may have a protective effect on the lung through inhibition of alveolar macrophage-related inflammation. This is the first population-based study to demonstrate that serum adiponectin is positively associated with lung function in humans. The difference in lung function across adiponectin quartiles was significant after adjusting for BMI or waist circumference, suggesting that it is not simply a function of global or abdominal adiposity. Based on current knowledge, it is likely that insulin resistance and systemic inflammation are mediators of the adiponectin-lung function association.
However, the correlation between BMI, HOMA and CRP observed in this study makes it difficult to definitively classify these variables as confounders or mediators when evaluating the association between adiponectin and lung function. Hence, we present different scenarios with and without adjustment for these factors. Since serum adiponectin was associated with year 10 lung function but not with the subsequent decline in lung function, it is possible that the physiologic effect of serum adiponectin on lung function was established even before year 10 of the study, and that classification by year 15 serum adiponectin concentrations might reflect lung growth abnormalities in early adulthood. The significant association observed between peak FVC in early adulthood and serum adiponectin concentrations at year 15 further supports this suggestion. Since serum adiponectin measurements were available only about 10 years after measurement of peak lung function, this finding needs to be confirmed in other longitudinal studies. The absence of an adiponectin-lung function association after adjustment for HOMA and CRP suggests that the adiponectin effect on lung function might be explained by its anti-inflammatory and insulin-sensitization effects. Our results do not agree with two small previous cross-sectional studies of 31 and 15 patients with stable, established COPD, which showed, without adjustment for obesity, no correlation between serum adiponectin concentrations and spirometric lung function [24,25]. The cross-sectional design, presence of COPD, older age of subjects, confounding by BMI, and limited power due to small sample sizes are possible reasons for the discrepancy between these studies and ours. Usually, a reduction in lung function with maintenance of the FEV 1 /FVC ratio indicates a restrictive ventilatory abnormality due to increased stiffness of the lungs or chest wall, or reduced neuromuscular strength.
Similar to observations made in mouse models, inadequate alveolar morphogenesis may present with restrictive ventilatory abnormalities in humans. Another potential explanation involves an effect of adiponectin on peripheral or smaller airways (with associated premature airway closure and hyperinflation resulting in pseudo-restriction [26]). Similar small-airway physiologic abnormalities, with thickened alveolar walls, have been described previously in patients with diabetes mellitus [27,28]. A recent study showed that phosphate-buffered saline inhalation by adiponectin-deficient mice produced a greater decrease in lung compliance than in wild-type mice. The investigators suggested that this may be due to an increase in small-airway closure from reduced lung elastic recoil [29]. Future studies with static lung volume estimation may help clarify the precise physiological changes associated with reduced serum adiponectin concentrations in humans. The immediate clinical relevance of our finding of a modest association between serum adiponectin and lung function in young adults is unclear. It is nonetheless significant in the context of an epidemiological study, because even modestly lower lung function early in life is associated with increased risk for both future lung [30] and cardiovascular diseases [31-33]. Since an inflammatory pulmonary milieu may increase the risk of adverse reactions to other occupational and environmental exposures, manipulation of systemic adiponectin concentrations in early life (such as with diet, exercise, weight loss and medications) may have long-term effects on lung development, injury and remodeling.
The present study has several strengths, including the large number of generally healthy participants, inclusion of blacks and women, high-quality spirometry data collected over a ten-year period, confirmation in a human population of findings demonstrated more mechanistically in a mouse model, and excellent retention of the original cohort. One limitation of the study is that the measurements of lung function (years 10 and 20) and serum adiponectin (year 15) did not coincide. As a result, it is not possible to establish the temporality of the associations definitively. Another limitation is the inability of the adiponectin assay to distinguish between its different multimeric forms; thus, this study could not evaluate potential biological differences among the multimeric forms of adiponectin in their effects on lung function. We have limited information on lung function values at very high adiponectin concentrations (≥30 mg/L, the 99th percentile in our data). We reduced the influence of these high values by studying natural-logarithmically transformed adiponectin, and discuss this methodological issue further in the online supplement (Additional File 1). Finally, serum adiponectin concentrations may not reflect airway adiponectin concentrations. In summary, this translational epidemiological study supports the hypothesis that lower serum adiponectin concentrations are associated with lower lung function in young adults. It further suggests that this association is independent of obesity and possibly mediated by insulin resistance and systemic inflammation. Longitudinal studies with long-term follow-up using non-invasive tests of peripheral airway function will help evaluate the association between serum adiponectin concentrations in early adulthood and the risk of lung and cardiovascular disease in later life.
Rationale Adipose tissue produces adiponectin, an anti-inflammatory protein. Adiponectin deficiency in mice is associated with abnormal post-natal alveolar development. Objective We hypothesized that lower serum adiponectin concentrations are associated with lower lung function in humans, independent of obesity. We explored mediation of this association by insulin resistance and systemic inflammation. Methods and Measurements Spirometry testing was conducted at the year 10 and year 20 follow-up evaluation visits in 2,056 eligible young adult participants in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Body mass index, serum adiponectin, serum C-reactive protein (a marker of systemic inflammation), and insulin resistance were assessed at year 15. Main Results After controlling for body mass index, year 10 and year 20 forced vital capacity (FVC) were 81 ml and 82 ml lower, respectively (p = 0.004 and 0.01), in the lowest vs. highest adiponectin quartiles. Similarly, year 10 and year 20 forced expiratory volume in one second (FEV 1 ) were 50 ml and 38 ml lower (p = 0.01 and 0.09, respectively) in the lowest vs. highest adiponectin quartiles. These associations were no longer significant after adjustment for insulin resistance and C-reactive protein. Serum adiponectin was not associated with FEV 1 /FVC or peak FEV 1 . Conclusions Independent of obesity, lower serum adiponectin concentrations are associated with lower lung function. The attenuation of this association after adjustment for insulin resistance and systemic inflammation suggests that these covariates are on a causal pathway linking adiponectin and lung function.
Statistical Analysis We used linear regression (PROC GLM) in SAS, version 9.1 (Cary, NC) to evaluate the associations of year 15 adiponectin concentrations with lung function at years 10 and 20. Quartiles of year 15 adiponectin concentrations (independent variable) were used to predict forced vital capacity (FVC), forced expiratory volume in one second (FEV1) and the FEV1/FVC ratio at years 10 and 20 (dependent variables). In addition, the ten-year change (year 20 - year 10) in FVC, FEV1 and FEV1/FVC was predicted using year 15 adiponectin quartiles. Of note, serum adiponectin measurements at years 10 and 20 and, conversely, spirometric measurements at year 15 were not available in the CARDIA study. These associations were adjusted for race, sex, study center, height, height², age, age², amount of self-reported physical activity, body mass index (BMI) and smoking status (never, former, current), all at year 15 (model 1). These covariates were selected because of their association with lung function or adiponectin. Since age and height were not linearly associated with lung function, age² and height² were included in the model to more completely adjust for the effects of age and height on lung function. In addition, all analyses were performed with adjustment for insulin resistance (as estimated by HOMA) and systemic inflammation (CRP) (model 2). All these analyses (models 1 and 2) were also performed with adiponectin as a continuous variable. Adiponectin, HOMA and CRP were natural-logarithmically transformed (ln) owing to their skewed distributions. There was no evidence of collinearity among the covariates (except age-age² and height-height²) used in the model.
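A minimal sketch of this quartile-based model is shown below. It uses synthetic data and numpy's least-squares solver in place of SAS PROC GLM; the variable names and values are hypothetical stand-ins for the CARDIA measurements, and the real model additionally adjusted for race, sex, center, physical activity, BMI and smoking.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical stand-ins for the CARDIA year 15 measurements.
adiponectin = rng.lognormal(mean=2.0, sigma=0.5, size=n)  # mg/L, right-skewed
height = rng.normal(170.0, 10.0, size=n)                  # cm
age = rng.normal(40.0, 4.0, size=n)                       # years
fvc = 3000 + 15 * (height - 170) + rng.normal(0, 300, n)  # ml, synthetic outcome

# Natural-log transform reduces the influence of extreme adiponectin values.
ln_adipo = np.log(adiponectin)

# Quartile indicator variables (lowest quartile is the reference category).
cuts = np.quantile(ln_adipo, [0.25, 0.5, 0.75])
quartile = np.digitize(ln_adipo, cuts)          # 0..3
dummies = np.eye(4)[quartile][:, 1:]            # drop reference column

# Design matrix with squared terms, mirroring the non-linear
# adjustment for age and height described in the text.
X = np.column_stack([np.ones(n), dummies,
                     height, height ** 2, age, age ** 2])
beta, *_ = np.linalg.lstsq(X, fvc, rcond=None)  # one coefficient per column
```

The fitted coefficients on the quartile dummies estimate the difference in FVC for each quartile relative to the reference; the published analysis reported the lowest-vs.-highest quartile contrast.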
We also analyzed the association between year 15 serum adiponectin concentrations (independent variable) and peak lung function (FVC, FEV1 and FEV1/FVC) in early adulthood (dependent variable) using models similar to those described above, in which all the covariates were measured at year 5. Peak lung function was defined as the maximum lung function measurement at any of the three initial CARDIA visits (year 0, 2 or 5). Abbreviations BMI: Body mass index; CARDIA: Coronary Artery Risk Development in Young Adults; CRP: C-reactive protein; CV: Coefficient of variation; FVC: Forced vital capacity; FEV1: Forced expiratory volume in 1 second; HOMA: Homeostatic Model Assessment; IL: Interleukin; MMP: Matrix metalloproteinase; TNF-α: Tumor necrosis factor-alpha. Competing interests RK has served as a paid consultant to AstraZeneca, Boehringer-Ingelheim, Dey Pharmaceuticals and Takeda Pharmaceuticals. He serves on speakers' bureaus for AstraZeneca, Boehringer-Ingelheim, Pfizer and GlaxoSmithKline, and is the recipient of a research grant from GlaxoSmithKline. All other authors declare that they have no conflicts of interest. Authors' contributions BT, DRJ and AS conceived the research question. BT performed all statistical analyses and wrote the manuscript. DRJ directed writing and analysis. LJS, RK, and MDG participated in data interpretation and provided critical review of the manuscript. AS directed data analysis, worked closely with BT in writing the manuscript and provided input on interpretation of the data. The manuscript was reviewed and approved by the CARDIA Steering Committee. All authors have read and approved the final manuscript. Supplementary Material
Acknowledgements This study was supported by National Heart, Lung, and Blood Institute contracts N01-HC-48047, N01-HC-48048, N01-HC-48049, N01-HC-48050 (CARDIA field centers), N01-HC-95095 (CARDIA Coordinating Center), PFHC95095 Reading Center (CARDIA Pulmonary Reading Center, subcontract to CARDIA Coordinating Center) and the YALTA grant (R01-HL-53560).
CC BY
no
2022-01-12 15:21:37
Respir Res. 2010 Dec 9; 11(1):176
oa_package/69/db/PMC3014903.tar.gz
PMC3014904
21126356
Background Beta-catenin protein is a vital component of the canonical Wnt/β-catenin signaling pathway, which has been described as an oncogenic driver in many human cancers [ 1 ]. In head and neck squamous cell carcinomas (HNSCC), overexpression of the Wnt/β-catenin signaling pathway increases cell survival and invasion [ 2 ]. Higher β-catenin expression in HNSCC patients is associated with more advanced stage [ 3 ] and poorer prognosis [ 4 ]. Mutations in the gene that encodes β-catenin (CTNNB1) [ 5 ] and elevated nuclear β-catenin [ 6 ] have been implicated in prostate cancer (CaP). Over 90% of colorectal cancers (CRC) demonstrate a deregulated Wnt/β-catenin signaling pathway [ 7 ]. Published studies suggest that unregulated β-catenin, overlapping with adenomatous polyposis coli (APC) mutation, is associated with the initiation of CRC [ 8 - 10 ]. Beta-catenin is expressed in both the cytoplasm and the nucleus. Cytoplasmic β-catenin, as a component of adherens junctions (AJs) [ 11 ], is an essential element of cell-to-cell adhesion and stability. The level of cytoplasmic β-catenin is controlled by the activity of a destruction complex that consists of axin, glycogen synthase kinase 3β (GSK-3β) and APC [ 12 - 15 ]. In the absence of Wnt signaling, the complex is assembled and GSK-3β phosphorylates cytoplasmic β-catenin, targeting it for degradation [ 14 , 15 ]. However, GSK-3β is inactivated in cancer cells by phosphorylation at serine 9, a mechanism similar to GSK-3β inhibition by lithium [ 16 , 17 ]. In the presence of Wnt signaling, the β-catenin destruction complex is disassembled by removal of axin [ 18 , 19 ], resulting in β-catenin accumulation in the cytoplasm. The accumulated cytoplasmic β-catenin then enters the nucleus to initiate its oncogenic function. Nuclear β-catenin plays an important role in many human malignancies [ 1 ] by stimulating cell growth and proliferation.
Nuclear β-catenin acts through TCF/LEF family transcription factors [ 20 , 21 ] and consequently activates oncogenes such as cyclin D1 [ 22 , 23 ], Myc [ 24 ] and many other downstream targets. The nuclear accumulation of β-catenin is a critical step in the activation of the canonical Wnt signaling pathway and is associated with poor prognosis in cancer patients [ 25 ]. In addition to its roles in cell growth and adhesion, the activated canonical Wnt/β-catenin signaling pathway is linked to cancer stem cells [ 26 , 27 ], which contribute to tumor bulk, recurrence and resistance to chemotherapy. Accordingly, β-catenin inhibitors in combination with standard systemic therapies hold great promise for improving treatment efficacy and outcomes. The response rate of the combination regimen of irinotecan and 5-fluorouracil/leucovorin (5-FU/LV) is 39% in metastatic CRC [ 28 ]. Treatment with oxaliplatin and 5-FU/LV has improved the response rate to 50.7% in CRC [ 29 ]. Treatment with docetaxel and prednisone for metastatic CaP resulted in a median survival of 19.2 months [ 30 ]. Docetaxel in combination with cisplatin and 5-FU for inoperable advanced HNSCC resulted in a median progression-free survival of 11 months [ 31 ]. Although relative survival in advanced solid tumors has improved with systemic therapy, current chemotherapy cure rates remain limited. Thus, the development of new regimens is greatly needed to achieve better clinical outcomes. In our preclinical models, selenium-containing compounds enhanced the efficacy of multiple chemotherapeutic agents (CPT-11 or docetaxel) against various types of cancer (colorectal, head and neck, and prostate) [ 32 , 33 ]. In mice bearing human colorectal cancer xenografts (HCT-8), combination treatment with MSC and irinotecan resulted in complete tumor regression (100% CR) that was not observed with either drug alone (30% CR) [ 32 ].
Sequential combination of MSeA and docetaxel resulted in synergistic enhancement of docetaxel-induced cell death in CaP [ 33 ]. Multiple mechanisms for the synergy between selenium and other chemotherapeutic agents have been proposed. Selenium (Se) is an essential element that possesses antioxidant properties in the form of selenoproteins, protecting cells from harmful free radicals [ 34 - 36 ]. The effect of selenium on β-catenin has yet to be investigated. This study was designed mainly to determine whether β-catenin is a target of MSeA in CRC, HNSCC and CaP; to evaluate the role of GSK-3β in the degradation of β-catenin; and to determine whether such an effect is associated with enhanced cytotoxicity of anticancer drugs.
Materials and methods Cell lines and drugs Human cancer cell lines of colorectal (HCT-8 and HT-29), head and neck (FaDu and A253) and prostate (PC3 and C42) origin were purchased from the American Type Culture Collection (ATCC, Manassas, VA) and maintained in RPMI 1640 with 10% fetal bovine serum (FBS). The cell lines were tested regularly using Stratagene's Mycoplasma Plus PCR Primer set (La Jolla, CA) and were free from Mycoplasma. SN-38, docetaxel, 5-FU, paclitaxel, oxaliplatin, lithium chloride (LiCl) and cycloheximide (CHX) were purchased from Sigma Aldrich (St. Louis, MO). MSeA (CH3SeO2H) was purchased from PharmaSe Inc. (Lubbock, TX). Topotecan was obtained from GlaxoSmithKline (Durham, NC). Puromycin dihydrochloride, plasmid transfection medium and transfection reagents were purchased from Santa Cruz Biotechnology Inc. (Santa Cruz, CA). Schedules and drug doses Cells were treated with various doses of MSeA (0.05, 0.1, 0.5, 1, 5 and 10 μM) for various times (2, 4, 6, 8, 16 and 24 h). MSeA and SN-38 or docetaxel were given alone or in sequential combination. In sequential combination, 2 h treatments with SN-38 (0.7, 1.3, 0.07 and 0.3 μM against HCT-8, HT-29, FaDu and A253, respectively) or docetaxel (2 nM against PC3 and C42) started 22 h after treatment with MSeA (1 or 5 μM). HCT-8 parental and β-catenin knockdown cells were treated with SN-38 (2 h), docetaxel (2 h), paclitaxel (24 h), oxaliplatin (24 h), 5-FU (24 h) and topotecan (2 h) at various doses: SN-38 (0.5, 1 and 5 μM), docetaxel (0.05, 0.1 and 0.5 μM), paclitaxel (0.5, 1 and 5 μM), oxaliplatin (1 and 5 μM), 5-FU (50, 100 and 500 μM) and topotecan (0.5 and 1 μM). LiCl was applied for 24 h at multiple doses (0.5, 1, 5, 10, 20 and 25 mM) alone or in combination with MSeA (5 μM). Cycloheximide (CHX) was applied for 5, 10, 20 and 30 minutes and 24 hours at a nontoxic concentration of 100 μM, alone or in combination with MSeA (5 μM).
Puromycin dihydrochloride was used at a concentration of 20 μM for clone selection. Preparation of cytoplasmic and nuclear extracts Cytoplasmic and nuclear extracts were prepared as previously described [ 41 ]. Briefly, to obtain the cytoplasmic extract, untreated and treated HCT-8 cells were harvested and suspended in lysis buffer (0.08 M KCl, 35 mM HEPES, pH 7.4, 5 mM potassium phosphate, pH 7.4, 5 mM MgCl2, 25 mM CaCl2, 0.15 M sucrose, 2 mM PMSF, 8 mM dithiothreitol). After overnight storage at -80°C, cells were passed through a 28-gauge needle and centrifuged, and the supernatant was collected as the cytoplasmic extract. The remaining pellet was re-suspended in lysis buffer, sonicated and centrifuged, and the supernatant was collected as the nuclear extract. Silencing the expression of β-catenin HCT-8 cells were used to generate stable transfectants using small hairpin β-catenin RNA (β-catenin shRNA) purchased from Santa Cruz Biotechnology Inc. (Santa Cruz, CA). Transfection was carried out following the manufacturer's instructions. Briefly, HCT-8 cells were plated at 5 × 10⁵ cells/well (6-well plate) one day before transfection. A well at 70-80% cell confluence was transfected with control shRNA Plasmid-A (a negative control), which encodes a scrambled shRNA sequence that does not inhibit β-catenin, to generate the HCT-8 scrambled control (HCT-8SC). Another well at the same confluence was transfected with β-catenin shRNA plasmid DNA, a β-catenin-specific lentiviral vector plasmid, to knock down expression and generate HCT-8 recombinant clones (HCT-8R). Clones of stable transfectants were selected using 20 μM puromycin dihydrochloride. After selection, 10 individual clones were evaluated using western blots, and the 2 clones (HCT-8RH7 and HCT-8RF4) that demonstrated the strongest β-catenin suppression were selected for further studies.
Western blot analyses Western blots were performed as described previously [ 33 ] to determine effects on intracellular protein levels. Briefly, untreated and treated cells were collected and lysed using RIPA buffer (1 M Tris, 1 M NaCl, Triton X-100 and distilled water) with fresh protease inhibitor cocktail. Protein concentration was measured using the Bio-Rad DC protein assay and a Synergy HT spectrophotometer (BioTek Instruments, Winooski, VT). Equal amounts of protein (50 μg) were loaded on 4-20% SDS-PAGE gels. After transfer, the nitrocellulose membrane was rinsed with PBS-T, blocked with 5% milk and hybridized with the selected antibody. The following primary antibodies were used: anti-β-catenin, anti-GSK-3β (BD Biosciences, San Jose, CA) and anti-p-GSK-3β (Cell Signaling Technology, Danvers, MA). The following secondary antibodies were used: goat anti-mouse IgG and goat anti-rabbit IgG (Santa Cruz Biotechnology, Santa Cruz, CA). After incubation with the primary and secondary antibodies, the membrane was rinsed, incubated with chemiluminescence or enhanced chemiluminescence reagent and developed using a Kodak X-OMAT 2000A (First Source Inc., Rochester, NY). Anti-β-actin (Sigma Aldrich, St. Louis, MO) was used as the loading control. Cell growth assay Cell growth was evaluated using the sulforhodamine B (SRB) assay as previously described [ 33 ]. Briefly, after drug treatment, HCT-8WT, HCT-8SC, HCT-8RH7 and HCT-8RF4 cells were incubated in drug-free medium for 5 days, fixed, washed and stained with SRB dye. The optical density of bound dye was measured at 570 nm using a Synergy HT multi-mode microplate reader (BioTek Instruments, Winooski, VT). Statistical analyses Each experiment was repeated at least 3 times. Values are presented as mean ± standard deviation. Statistical analyses comparing treatment groups were performed using the unpaired Student's t-test. Differences between groups were considered significant when the p value was less than 0.05.
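As an illustration of the statistics described above, the unpaired (two-sample) t-test below is computed from scratch on hypothetical triplicate growth-inhibition readings; the numbers are invented for the sketch, not data from the study.

```python
import statistics as st

# Hypothetical % growth inhibition from three replicate SRB assays.
single_agent = [14.0, 16.5, 15.2]   # e.g. SN-38 alone
combination = [48.9, 51.3, 50.1]    # e.g. MSeA followed by SN-38

mean_s, mean_c = st.mean(single_agent), st.mean(combination)
var_s, var_c = st.variance(single_agent), st.variance(combination)
n_s, n_c = len(single_agent), len(combination)

# Pooled-variance unpaired t statistic (equal-variance Student's t-test).
pooled_var = ((n_s - 1) * var_s + (n_c - 1) * var_c) / (n_s + n_c - 2)
t_stat = (mean_c - mean_s) / (pooled_var * (1 / n_s + 1 / n_c)) ** 0.5

# With n_s + n_c - 2 = 4 degrees of freedom, the two-tailed 5% critical
# value is about 2.776; |t| above it corresponds to p < 0.05.
significant = abs(t_stat) > 2.776
```

In practice a statistics package would report the exact p value; the manual form just makes the comparison underlying "p < 0.05" explicit.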
Results Inhibition of β-catenin by MSeA is concentration, time and tumor type dependent To evaluate the effect of MSeA on the expression of β-catenin, various tumor cell types were treated with MSeA at multiple doses and for multiple durations. In all treated cells, MSeA decreased the expression of β-catenin in a dose- and time-dependent manner (Figure 1). The data in Figure 1 indicate that the down-regulation of β-catenin is MSeA concentration dependent. In CRC cells (HCT-8 and HT-29), 24 h treatment with 5 μM resulted in complete depletion of β-catenin. In HNSCC cells, the decrease in β-catenin levels was achieved at a lower concentration of MSeA in FaDu cells (0.5 μM) than in A253 cells (5 μM) (Figure 1A). In the androgen-independent CaP cells (PC3 and C42), inhibition of β-catenin by MSeA required a high concentration (5 μM) (Figure 1A). The kinetics of β-catenin inhibition by MSeA appear to be tumor type dependent: an early event in HCT-8, C42, HT-29 and PC3 and a late event in FaDu and A253 (Figure 1B). MSeA inhibits β-catenin nuclear expression To determine whether MSeA down-regulates the activity of β-catenin, nuclear and cytoplasmic extracts of colorectal cancer cells were tested for the level of β-catenin before and after MSeA treatment. The data in Figure 2A indicate that β-catenin is predominantly expressed in the nucleus of untreated CRC cells (HCT-8 and HT-29), indicating activation of β-catenin. Treatment with MSeA resulted in inhibition of the nuclear expression of β-catenin (Figure 2B). These data suggest that MSeA down-regulates β-catenin activity by inhibiting its nuclear expression. Inhibition of β-catenin is due to enhanced degradation To determine whether the observed down-regulation of β-catenin by MSeA results from inhibition of its synthesis or from increased degradation, cells were treated with MSeA and cycloheximide (an inhibitor of de novo protein synthesis [ 37 ]) alone and in combination.
Treatment for up to 30 minutes did not affect the expression level of β-catenin (Figure 3A). However, β-catenin was down-regulated after 24 h treatment with MSeA alone and in combination with cycloheximide (Figure 3B). These data suggest that the inhibition of β-catenin by MSeA is the result of increased degradation. The role of GSK-3β in β-catenin degradation is cell type dependent To determine the mechanism of β-catenin degradation by MSeA, the role of GSK-3β was evaluated in HT-29 and HCT-8 cells (Figure 4). Treatment with MSeA had no significant effect on the level of total GSK-3β (in HT-29 or HCT-8) or on the level of phosphorylated GSK-3β in HCT-8. In contrast, phosphorylated GSK-3β was significantly decreased in HT-29 (Figure 4A). To evaluate the role of GSK-3β further, cells were treated with lithium chloride (LiCl, a GSK-3β inhibitor [ 38 ]) alone and in combination with MSeA. Treatment with LiCl increased the level of phosphorylated GSK-3β in both cell lines, indicating inhibition of GSK-3β (Figure 4B). Combining MSeA with various doses of LiCl reversed the MSeA-induced down-regulation of β-catenin in HT-29 cells but not in HCT-8 (Figure 4B). The data in Figure 4 demonstrate that the inhibition of β-catenin by MSeA is GSK-3β phosphorylation dependent in HT-29 but independent in HCT-8. The effect of MSeA in combination therapy on the level and activity of β-catenin To determine whether MSeA in combination with a chemotherapeutic agent would affect β-catenin expression and activity, cells were treated with MSeA with or without a chemotherapeutic agent and analyzed for total and nuclear β-catenin expression. The data in Figure 5A indicate that MSeA inhibited β-catenin expression in all cell lines, whereas neither 7-ethyl-10-hydroxycamptothecin (SN-38, the active metabolite of irinotecan) nor docetaxel alone had an effect on β-catenin levels. Adding SN-38 or docetaxel to MSeA did not interfere with the selenium-mediated inhibition of β-catenin.
The combination of MSeA/SN-38 resulted in even greater inhibition of total β-catenin compared with MSeA alone in HT-29, HCT-8 and FaDu cells. In CaP cells, MSeA alone and in combination with docetaxel had similar inhibitory effects on the expression of β-catenin (Figure 5A). To determine whether the inhibition of β-catenin by MSeA in combination with SN-38 is due to inhibition of its nuclear expression, nuclear extracts of CRC cells treated with MSeA alone or in combination with SN-38 were evaluated for the level of β-catenin. The combination treatment of MSeA/SN-38 resulted in down-regulation of the nuclear expression of β-catenin in HT-29 and HCT-8 cells (Figure 5B). These results indicate that the activity of β-catenin is decreased after the combination therapy. The inhibition of β-catenin expression by shRNA or MSeA is associated with enhancement of drug-induced tumor cell growth inhibition To determine whether inhibition of β-catenin correlates with enhanced efficacy of chemotherapy, β-catenin in tumor cells was knocked down by specific shRNA. Two β-catenin shRNA transfectant clones (HCT-8RH7 and HCT-8RF4) that demonstrated inhibition of β-catenin expression compared with the scrambled control (HCT-8SC) and wild type (HCT-8WT) were selected for further testing (Figure 6A). To evaluate the effect of silencing β-catenin on cell growth, HCT-8WT, HCT-8SC and HCT-8R (HCT-8RH7 and HCT-8RF4) transfectants were treated with various classes of chemotherapeutic agents. Treatment with 0.5 μM SN-38 was more effective (p < 0.05) against HCT-8R (~50% cell growth inhibition) than against all other groups (15% cell growth inhibition in HCT-8SC or HCT-8WT; Figure 6B). Other doses of SN-38 resulted in similar patterns of cell growth inhibition (Figure 6B).
Similarly, treatment with various doses of docetaxel, paclitaxel, oxaliplatin, 5-FU and topotecan showed significantly enhanced efficacy against the growth of HCT-8RH7 and HCT-8RF4 cells compared with the HCT-8SC and HCT-8WT groups (Figure 6B). To confirm further that inhibition of β-catenin is associated with enhanced cytotoxicity of anticancer drugs, tumor cells were treated with MSeA alone and in combination with SN-38, and the results were correlated with the levels of β-catenin (Table 1). The data in Table 1 demonstrate a relationship between enhanced cytotoxicity of SN-38 and inhibition of β-catenin by MSeA or shRNA. Thus, these data support the initial hypothesis that inhibition of β-catenin by MSeA is a critical determinant of drug response.
Discussion The β-catenin oncogenic protein is widely expressed in many human malignancies [ 1 ], including HNSCC [ 2 - 4 ], CaP [ 5 , 6 ] and CRC [ 8 - 10 ]. Beta-catenin is involved in cell growth [ 22 - 24 ], adhesion [ 11 ] and stemness [ 26 , 27 ]. Beta-catenin is found in multiple cellular locations, including the intracellular membrane, cytoplasm and nucleus. The nuclear accumulation of β-catenin indicates activation of its oncogenic form, which stimulates transcription factors and target genes [ 22 - 24 ], leading to enhanced tumor cell growth and poor prognosis [ 25 ]. The hypothesis of this study was that β-catenin is a target of MSeA and that its inhibition would translate into an enhanced drug effect. Our results (Figure 1) established that MSeA is a potent inhibitor of β-catenin in various cancer types. This broad inhibitory effect of MSeA on the expression of β-catenin is pivotal for explaining the established synergy between selenium and various chemotherapeutic agents against multiple cancers. The data in Figure 2 demonstrate that the reduction in β-catenin level after MSeA treatment is due to inhibition of the active form of β-catenin in the nucleus. Thus, these data indicate that pharmacologic doses of MSeA offer effective inhibition of β-catenin activation. Recent findings by Zhang et al demonstrate that the effect of selenium against esophageal squamous cell carcinoma is correlated with its inhibition of the β-catenin/TCF pathway [ 39 ]. Our results confirm this finding in various human cancers and show that the decreased level of β-catenin is associated with enhanced efficacy of various classes of chemotherapy, indicating the importance of β-catenin inhibition in drug response. To determine whether the inhibition of β-catenin is due to a decrease in synthesis or an increase in degradation, de novo protein synthesis was inhibited using CHX in the presence and absence of MSeA.
The data in Figure 3 indicate that MSeA inhibition of β-catenin is due to increased degradation rather than decreased synthesis in both CRC cell lines (HT-29 and HCT-8). Many studies have shown that cytoplasmic β-catenin is degraded by an axin/GSK-3β/APC complex [ 12 - 14 ] and that the degradation is a GSK-3β phosphorylation dependent process [ 14 , 15 ]. The degradation of cytoplasmic β-catenin prevents its accumulation and translocation into the nucleus. The data in Figure 4 demonstrate that the inhibition of β-catenin by MSeA is GSK-3β phosphorylation dependent in HT-29 but independent in HCT-8. The GSK-3β independent degradation of β-catenin is a novel finding in HCT-8 cells and indicates that the MSeA effect involves signaling pathways other than Wnt/β-catenin, which will be investigated in future studies. In preclinical models, sequential combination treatment with selenium compounds (MSC, SLM or MSeA) and various chemotherapeutic agents (SN-38 or docetaxel) has proven synergistic against various cancers, including HNSCC, CRC and CaP [ 32 , 33 ]. Studies were carried out to determine whether combination treatment with MSeA and chemotherapeutic agents affects β-catenin levels in those cell lines. Our results in Figure 5 showed that treatment with MSeA in combination with SN-38 or docetaxel down-regulated both total and nuclear β-catenin. These results confirm that the chemotherapeutic agents did not interfere with selenium inhibition of the level and activity of β-catenin. However, neither SN-38 nor docetaxel alone affected the expression level of β-catenin (Figure 5). Consistent with the expectation that inhibition of β-catenin by MSeA would translate into enhanced drug cytotoxicity, cells in which β-catenin was knocked down by shRNA were more sensitive to growth inhibition by SN-38 than wild-type cells. Collectively, this study indicates that a decreased level of β-catenin is associated with enhancement of drug-induced inhibition of cell growth (Table 1).
Further, the data in Figure 6 indicate that silencing β-catenin increases the cytotoxicity of various chemotherapeutic agents. The efficacy of SN-38, docetaxel, paclitaxel, oxaliplatin, 5-FU and topotecan was significantly increased in HCT-8R when compared with the control groups (Figure 6B).
Conclusions These results support the hypothesis that β-catenin is a target of MSeA and that its inhibition results in enhanced drug cytotoxicity in multiple cancers. Degradation of β-catenin by GSK-3β is not a general mechanism but is cell type dependent. Although selenium is a multi-target agent [ 32 , 33 , 40 ], inhibition of β-catenin is a critical determinant of drug response. These preclinical results provide the rationale for validation of this new and innovative approach in a clinical setting.
Background Beta-catenin is a multifunctional oncogenic protein that contributes fundamentally to cell development and biology. Elevated expression and activity of β-catenin have been implicated in many cancers and associated with poor prognosis. Beta-catenin is degraded in the cytoplasm by glycogen synthase kinase 3 beta (GSK-3β) through phosphorylation. Cell growth and proliferation are associated with β-catenin translocation from the cytoplasm into the nucleus. This laboratory was the first to demonstrate that selenium-containing compounds can enhance the efficacy and cytotoxicity of anticancer drugs in several preclinical xenograft models. These data provided the basis for identifying the mechanism of selenium action, focusing on β-catenin as a target. This study was designed to: (1) determine whether pharmacological doses of methylseleninic acid (MSeA) have inhibitory effects on the level and the oncogenic activity of β-catenin, (2) investigate the kinetics and the mechanism of β-catenin inhibition, and (3) confirm that inhibition of β-catenin leads to enhanced cytotoxicity of standard chemotherapeutic drugs. Results In six human cancer cell lines, the inhibition of total and nuclear expression of β-catenin by MSeA was dose and time dependent. The involvement of GSK-3β in the degradation of β-catenin was cell type dependent (GSK-3β-dependent in HT-29, whereas GSK-3β-independent in HCT-8). The pronounced inhibition of β-catenin by MSeA was independent of the various drug treatments and was not reversed after combination therapy. Knockdown of β-catenin by shRNA and its inhibition by MSeA yielded similar enhancement of the cytotoxicity of anticancer drugs. Collectively, the generated data demonstrate that β-catenin is a target of MSeA and that its inhibition results in enhanced cytotoxicity of chemotherapeutic drugs.
Conclusions This study demonstrates that β-catenin, a molecule associated with drug resistance, is a target of selenium, and that its inhibition is associated with increased cytotoxicity of multiple drugs in various human cancers. Further, degradation of β-catenin by GSK-3β is not a general mechanism but is cell type dependent.
Competing interests The authors declare that they have no competing interests. Authors' contributions MSS performed and designed experiments, prepared and wrote the manuscript. DRR performed cytotoxicity experiments. YMR participated in study design, data interpretation and preparation of the manuscript. RGA designed the research strategy, supervised the project, assisted in data generation, results interpretation and correction of the manuscript. All authors read and approved the final manuscript.
Acknowledgements This research was supported, in part, by Research Grant IRG-02-197-06 (RGA) from the American Cancer Society and, in part, by the NCI Cancer Center Support Grant to the Roswell Park Cancer Institute (CA016056).
CC BY
no
2022-01-12 15:21:37
Mol Cancer. 2010 Dec 2; 9:310
oa_package/50/03/PMC3014904.tar.gz
PMC3014905
21122117
Background Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder characterised by pervasive, age-inappropriate behaviours of inattention, hyperactivity and impulsivity. The current definition of ADHD places the onset of impairing symptoms before the age of 7 years, although formal diagnoses are not usually made before this age. However, early characteristics are good predictors of later-appearing behavioural problems [ 1 ] and, therefore, employing research strategies to identify developmental aetiological factors in young children remains important. It is well established that ADHD in children is highly heritable, with estimates averaging ~76% [ 2 ]; the same is true of ADHD symptoms in pre-school children [ 3 ]. However, the genetic variation underlying these observed heritabilities is still not well understood. Candidate gene studies in children have focused predominantly on genes of monoaminergic neurotransmitter systems, particularly dopamine. The main genes of interest in this research have been the dopamine transporter gene ( DAT1 ) and dopamine receptor genes ( DRDs ). These choices have been informed by a dopamine hypothesis of ADHD, which stems from the action of stimulant medications such as methylphenidate and dexamphetamine, which increase levels of available synaptic dopamine. These studies have proven relatively fruitful, with robust associations of DRD4 and DRD5 with ADHD being identified in meta-analysis [ 4 ]. More recently, whole genome association analyses in both children and adults have provided some information on potential new candidates for follow-up [ 5 - 7 ]. Of particular interest is the convergent finding of association with variants within CDH13 , a gene that lies within the ADHD linkage region on chromosome 16p [ 5 , 8 ]. This has provided new insights into the underlying genetics of ADHD and has allowed new hypotheses to be formed for future research.
However, there have been fewer molecular studies in preschool children, although there is some evidence to suggest that candidate genes from various neurotransmitter systems, such as DAT1 , synaptosome-associated protein 25 ( SNAP25 ) and the noradrenaline transporter ( NET1 ), may have some involvement [ 9 ]. It is apparent that these genes are not necessarily acting on the ADHD phenotype consistently throughout development: a number of studies suggest that although there is general genetic stability across time from ages 2 through to 4 years [ 10 ]; 2, 3, 4 and 7 years [ 11 ]; 3 through 12 years [ 12 ]; and 8 through to 14 years [ 13 ], there is also age-specific genetic variance. The implication is that association studies using age-heterogeneous samples potentially lose information on age-specific effects of genotype on ADHD. Further, given the need for replication across studies, it becomes very difficult to identify the causes of non-replication due to differences in sample demographics. We have recently reported high heritability and genetic association between specific risk alleles and ADHD symptom scores in a population sample of 2-year-old twins, with modest evidence of association being found for DAT1 and NET1 [ 14 ]. In the present analysis we have used the same sample to assess the degree to which genetic effects on ADHD symptoms are stable from ages 2 to 3 using quantitative genetic techniques. In addition to this analysis, we have studied previously reported ADHD risk alleles to identify any age-specific genetic associations. Candidate gene variants were chosen on the basis of previous positive association with ADHD in either clinical or quantitative trait locus (QTL) analyses. Given the nature of the analyses, we hypothesised that there would be substantial genetic overlap in ADHD symptom scores across ages, which would translate into a number of genetic variants associated at age 2 also being associated at age 3.
Method Sample The Boston University Twin Project sample was recruited from birth records supplied by the Massachusetts Registry of Vital Records. Ethical approval was obtained for the study through the joint South London and Maudsley and the Institute of Psychiatry NHS Research Ethics Committee ref. 2002/238. Twins were selected preferentially for higher birth weight and gestational age. No twins with birth weights below 1750 grams or with gestational ages less than 34 weeks were included in the study. Twins were also excluded if one or both twins had a health problem that might affect motor activity (e.g., cerebral palsy, club foot) or had chromosomal abnormalities. The present analyses include 312 same-sex pairs of twins (144 MZ, 168 DZ; 164 male pairs, 148 female pairs). Although the sample was predominantly Caucasian (85.4%), ethnicity was generally representative of the Massachusetts population (3.2% Black, 2% Asian, 7.3% Mixed, 2.2% Other). Socioeconomic status according to the Hollingshead Four Factor Index (1975) ranged from low to upper middle class (range = 20.5-66; M = 50.9, SD = 14.1). Zygosity was determined via DNA analysis using DNA obtained from cheek swab samples. In the cases where DNA was not available ( n = 3), zygosity was determined using parents' responses on physical similarity questionnaires, which have been shown to be more than 95% accurate when compared to DNA markers [ 15 ]. In our present sample we were able to assign zygosity with certainty to 99% of the twin pairs using the parent questionnaire; moreover, agreement between questionnaire and DNA zygosity analyses was very high (kappa = .94). Parent Reports of ADHD Behaviour Written informed consent was obtained from parents, who were invited to assess their children's behaviour at two time points: 1) within two weeks of their second birthday and 2) within two weeks of their third birthday.
The mean age at time point 1 was 2.07 years ( SD = 0.05) and at time point 2 it was 3.05 ( SD = 0.05). Parent ratings of hyperactivity were obtained from either parent using the hyperactivity subscales of the Child Behavior Checklist/1.5 - 5 years (CBCL) [ 16 ] and the Revised Rutter Parent Scale for Preschool Children (RRPSPC) [ 17 ], which assess behaviors relating to overactivity, inattention, and impulsivity. Questionnaires were completed by mothers in 94% of cases and by fathers in 6%, with the same parent completing the questionnaire at both ages. In the present study, reliabilities for the CBCL and the RRPSPC, as estimated by Cronbach's alpha, were .78 and .75, respectively. The two ADHD measures correlated significantly at both time points (age 2, r = 0.67, p < 0.01 and age 3, r = 0.65, p < 0.01; data based on 312 individuals). These measures also display high genetic correlations at both ages (age 2 rG = 0.71, age 3 rG = 0.76; analyses are available on request from the first author). Scores from these measures were subsequently averaged to form an ADHD composite measure, which was square root transformed for a more normal distribution. Model Fitting Analysis Because twin co-variances can be inflated by variance due to sex, all scores were residualised for sex effects. Residualised scores were used for all model fitting procedures. A Cholesky decomposition model was used to estimate the relative contributions of additive genetics (A), shared environment (C) and non-shared environment (E) to the phenotypic variance of ADHD at each age, as well as genetic and environmental contributions to the co-variation between ages. Models were fit to raw data using a maximum likelihood pedigree approach implemented in the Mx structural equation modelling software [ 18 ].
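As a rough illustration of what such a variance decomposition estimates, the classic Falconer approximation derives A, C and E directly from MZ and DZ intraclass correlations. This is only a sketch: the study itself fitted a full maximum-likelihood Cholesky model in Mx, and the correlations below are hypothetical values chosen to be consistent with the reported estimates (A ≈ 0.78, E ≈ 0.22 at age 2), not the actual Table 3 figures.

```python
def falconer_ace(r_mz, r_dz):
    """Approximate ACE variance components from twin intraclass correlations.

    A (additive genetic) is twice the excess of MZ over DZ similarity,
    C (shared environment) is what remains of twin similarity beyond A,
    and E (non-shared environment) is whatever makes MZ twins differ.
    """
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# Hypothetical correlations: DZ roughly half MZ, as reported in the Results.
a2, c2, e2 = falconer_ace(r_mz=0.78, r_dz=0.39)
```

With DZ similarity exactly half MZ similarity, the shared-environment term drops to zero, which mirrors the non-significant C reported in the Results.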
The overall fit of a model was assessed by calculating twice the difference between the negative log-likelihood (-2LL) of the model and that of a saturated model (i.e., a model in which no variance/covariance structure is imposed and all variances and covariances for MZ and DZ twins are estimated freely). Genotyping Polymorphisms were chosen based on previous association with ADHD in either clinical or QTL studies (Table 1 ). DNA was extracted from buccal swabs as described by Freeman et al. (2003) [ 19 ]. Both parents and offspring were genotyped. VNTR polymorphisms ( DRD4 exon 3, DAT1 3'UTR, DAT1 intron 8, the 5-HTT LPR and the MAOA promoter) were genotyped in-house. Protocols for genotyping the VNTRs are available on request from the authors. Single nucleotide polymorphisms (SNPs) were genotyped by Prevention Genetics http://www.preventiongenetics.com/resgeno/researchgeno.htm . Various genotyping quality control measures were implemented to assess the impact of potential error. Mendelian discrepancies in the data were checked using PEDSTATS http://www.sph.umich.edu/csg/abecasis/QTDT/download/ [ 20 ]. The average Mendelian error rate for the VNTR genotyping was 0.65%, with the highest rate being for the MAOA promoter VNTR (1.45%). Where inheritance errors were detected, genotypes for that family were coded '0'. Eight of the chosen SNPs (rs3776513, rs2042449, rs1386493, rs1386497, rs1050565, rs2652511, rs1800955 and rs747302) failed at the stage of assay design. For the remaining 17 SNPs, the average Mendelian error rate was 1.05%. A breakdown by SNP revealed two SNPs, rs40184 and rs1843809, that had high Mendelian error rates (2.03% and 8.39%, respectively), and these two SNPs were omitted from further analysis. With these SNPs removed, the error rate was reduced to 0.47%, and remaining inheritance errors were coded as missing genotypes for the family/genotype combination. A second genotyping control measure was the use of a sex-specific marker.
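The Mendelian-consistency check that PEDSTATS performs can be illustrated with a minimal sketch for a single biallelic marker, with genotypes coded as allele pairs (the marker and alleles here are arbitrary examples, not data from the study):

```python
def mendel_consistent(father, mother, child):
    """True if the child genotype can be assembled from one paternal
    and one maternal allele (ignoring allele order)."""
    return any(
        sorted((p, m)) == sorted(child)
        for p in father
        for m in mother
    )

# An A/G x G/G mating can produce A/G or G/G children, but not A/A;
# families failing this check had the marker's genotypes set to missing.
ok = mendel_consistent(("A", "G"), ("G", "G"), ("A", "G"))
bad = mendel_consistent(("A", "G"), ("G", "G"), ("A", "A"))
```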
The error associated with sex anomalies was 0.35%. Along with the specific sex marker, genotyping of the X-linked markers (the MAOA promoter VNTR and rs6323) gave an additional sex discrepancy error of 0.008%. A further quality control measure was the genotyping of 96 random duplicates. Only 0.02% of duplicated samples were not consistent with the original genotype. Taken together, detected genotyping error was estimated to be 1.5%. Hidden error can be considered to be roughly one-third of total genotyping error; with undetected errors included, the overall genotype error rate may be as high as 4.5%. All markers included in the analysis conformed to Hardy-Weinberg equilibrium (p > 0.01). Association Analysis Tests of allelic association were performed using the Quantitative Transmission Disequilibrium Test (QTDT) [ 20 ] on ADHD scores residualised for sex effects. An advantage of using QTDT in association analyses using twin data is that all families remain informative regardless of twin class. QTDT tests for association in a variance components framework; using the -weg command in the program, one can model the phenotypic similarities that are due to sharing of the genome (polygenic (g), 100% for MZ twins and 50% for DZ twins), as well as phenotypic differences that are due to non-shared environmental influences (e). Three models of association were tested using a likelihood ratio test implemented in QTDT: the 'Total Association' test (AT), the 'Within' test of association (AW) and the test of stratification (AP). These different models provide the user with varied information regarding association statistics and tests of stratification. Overall association was tested using the AT model, which assesses both the within-pair differences as well as the between-pair sums (i.e.
the correlation between phenotypic and genotypic differences and sums for each twin pair) and is the most powerful test in the absence of stratification effects. In contrast, the AW assesses the within component only. The within-pair design of the AW means that it is unaffected by between-family stratification effects, yet it is less powerful than the AT in the absence of stratification. Given the differences between these two models, interpretation of the significance of association should take stratification effects into account. To evaluate this we modelled association using the AP test, which compares the significance of the between component versus the within component of association. Stratification effects are dismissed when these components are equal and p > 0.05. In this instance, results are interpreted from the AT. Conversely, results are interpreted from the AW if significant stratification effects are detected. VNTR markers were tested using the 'multi-allelic' function in QTDT. This provides a single p-value for tests of alleles with an allele frequency >0.05. UNPHASED http://www.mrc.bsu.cam.ac.uk/personal/frank/software/unphased/ was used to test X-linked markers (polymorphisms in MAOA ) because QTDT cannot handle such data. Because UNPHASED has no means of handling MZ twin data, mean phenotypic scores for MZ pairs were used in these analyses.
Results Descriptive statistics for the measures analysed in this sample are presented in Table 2 . Intraclass correlations at both ages displayed DZ correlations that were roughly half the MZ correlations, suggesting predominantly additive genetic effects (Table 3 ). When compared to a saturated model, the fit of the data to the Cholesky decomposition model was not significantly different (χ 2 = 13.85, df = 11, p = 0.24, Table 4 ). The majority of the variance for ADHD symptoms at ages 2 and 3 was explained by additive genetic influences, producing estimates for A of 0.78 (95%CI 0.65 - 0.83) and 0.79 (95%CI 0.65 - 0.84) (Table 3 ), respectively. There were no significant effects of C on the trait variance at either age (Table 3 ), with no deterioration in fit when this parameter was dropped from the model (χ 2 = 0, df = 3). There were modest effects of E at both ages (age 2, E = 0.22, 95%CI 0.17 - 0.29 and age 3, E = 0.21, 95%CI 0.16 - 0.27). From the Cholesky decomposition model (Figure 1 ) we can estimate the degree to which A, C and E contribute to the co-variance of ADHD symptoms across time. C has been omitted from Figure 1 because of the lack of significant C effects on the variance at either age. All path estimates are provided from the most parsimonious AE model. A large proportion of the additive genetic variance at age 2 was shared with that at age 3 (Figure 1 ), although there remained emerging age-specific effects (Figure 1 ). Indeed, dropping the age 3-specific A path from the Cholesky decomposition model resulted in a significant worsening in fit (χ 2 = 12.263, df = 1, p < 0.01), suggesting a contribution of genetics to both phenotypic stability and change. The effect of E on the covariation between ages was small, yet significant (Figure 1 ). Using unsquared path estimates from the Cholesky decomposition model, we can estimate the correlation between ADHD symptoms at age 2 and 3.
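Using the squared path estimates quoted in the next sentence, that calculation can be reproduced numerically (a sketch of the arithmetic only; small rounding differences against the published value of 0.67 are expected):

```python
from math import sqrt

# Squared Cholesky path estimates as given in the text:
a_11, a_21 = 0.79, 0.48   # additive genetic paths shared across ages
e_11, e_21 = 0.21, 0.01   # non-shared environment paths shared across ages
rG = 0.78                 # genetic correlation between ages 2 and 3

# Phenotypic cross-age correlation implied by the path estimates:
r_phen = sqrt(a_11) * sqrt(a_21) + sqrt(e_11) * sqrt(e_21)

# Share of that correlation attributable to additive genetics
# (bivariate heritability), following the formula printed in the text:
biv_h2 = (sqrt(0.79) * rG * sqrt(0.79)) / r_phen
```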
In this case the phenotypic correlation between ages is calculated as (√0.79 × √0.48) + (√0.21 × √0.01) = 0.67. Additive genetic influences account for 93% of this correlation (bivariate heritability = ((√0.79 × ( rG = 0.78) × √0.79)/0.67) × 100 = 93%). Molecular Genetic Analysis Total Test of Association (AT) At age 2, nominal association was detected between the DAT1 3'UTR VNTR (χ 2 = 7.00, df = 2, p = 0.03) and one NET1 SNP, rs11568324 (χ 2 = 4.38, df = 1, p = 0.04) with the ADHD composite (Table 5 ). Two additional SNPs in NET1 , rs3785157 (χ 2 = 3.68, df = 1, p = 0.06) and rs998424 (χ 2 = 3.30, df = 1, p = 0.07) and a SNP in 5-HTT , rs140701 (χ 2 = 2.96, df = 1, p = 0.09) provided weak evidence of association with this measure (Table 5 ). At age 3, nominal association was detected between the same DAT1 polymorphism (χ 2 = 11.15, df = 2, p = 0.004) as at age 2, as well as the DRD4 exon 3 VNTR (χ 2 = 7.82, df = 3, p = 0.05). Given the non-independent nature of the phenotypes under investigation, we did not correct any of the association findings for the number of phenotypes studied. None of the associations at either age withstood Bonferroni correction for 20 comparisons (20 markers) at p < 0.05. Within Test of Association At age 2 we found no evidence for stratification effects (AP test, data not shown), although it cannot be ruled out due to low power to detect it in this sample. We therefore completed the AW test for all genetic markers, which is robust to stratification effects. Two SNPs in NET1 , rs3785157 (χ 2 = 4.65, df = 1, p = 0.03) and rs998424 (χ 2 = 4.42, df = 1, p = 0.04) showed nominal significance in this test with the ADHD composite, although high linkage disequilibrium (LD) between these SNPs suggests non-independence. Further, the DAT1 3'UTR VNTR (χ 2 = 5.09, df = 2, p = 0.08) and rs140701 (χ 2 = 3.03, df = 1, p = 0.08) displayed an association trend with the same measure (Table 5 ). 
At age 3 we found evidence for stratification in the AP test for two markers in NET1 , rs3785157 and rs998424 (χ 2 = 5.42, df = 1, p = 0.02 and χ 2 = 4.46, df = 1, p = 0.03, respectively). Nominal associations were found with rs3785157 in NET1 (χ 2 = 4.30, df = 1, p = 0.04), rs11080121 in 5-HTT (χ 2 = 4.77, df = 1, p = 0.03), the DAT1 3'UTR VNTR (χ 2 = 12.17, df = 2, p = 0.002) and the DRD4 exon 3 VNTR (χ 2 = 8.69, df = 3, p = 0.03) (Table 5 ). In addition, rs998424 in NET1 and rs140701 in 5-HTT displayed an association trend (χ 2 = 3.22, df = 1, p = 0.07 and χ 2 = 3.24, df = 1, p = 0.07, respectively). rs11568324 was not tested in the AW test due to its low minor allele frequency (MAF = 0.01) and the consequent low number of informative twin pairs. Application of a Bonferroni correction to each nominally associated marker, for a total of 20 comparisons, left only the DAT1 3'UTR VNTR significant (AW test, p = 0.04).
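The Bonferroni adjustment applied here amounts to multiplying each nominal p-value by the 20 markers tested, capped at 1:

```python
def bonferroni(p, n_tests=20):
    """Bonferroni-adjust a nominal p-value for n_tests comparisons."""
    return min(1.0, p * n_tests)

# DAT1 3'UTR VNTR, AW test at age 3: nominal p = 0.002 survives
# correction (adjusted p = 0.04); a nominal p = 0.03 would not.
p_dat1 = bonferroni(0.002)
```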
Discussion In this study we investigated the genetic relationship between ADHD symptom scores at two time points in infancy. Consistent with previous reports, we found ADHD scores to be highly heritable at ages 2 and 3 years, providing evidence for the involvement of additive genetics in the variance of these measures, as well as identifying them as viable measures for molecular studies. Intraclass correlations for our ADHD measure were suggestive of predominantly additive genetic influences at both ages. However, the literature is mixed with regard to the effects of dominance and contrast effects, a feature of ADHD that is often found in samples of older children [ 21 ]. Dominance and contrast effects are characterized by DZ correlations that are lower than half the MZ correlations, and while there is evidence for dominance in symptoms of overactivity in young children [ 22 ], there is no evidence for these effects in other studies of activity and attention problems [ 23 ]. In light of the power needed to detect dominance and contrast effects [ 24 ], and given the lack of evidence for these effects in this study, we did not formally test for them, although future research in large samples using similar measures is needed to clarify this issue. Phenotypic stability of ADHD symptoms across ages was moderate, producing inter-age correlations of 0.51 - 0.62 (twin 2 - twin 1), which is consistent with previous reports using samples of this age range [ 10 ]. The suggestion here is that while symptoms are consistent across ages for the most part, there remains developmental change, which is reflected in the newly emerging additive genetic variance at age 3, a variance component that is unaffected by error associated with fluctuations in evaluations. Prior research has shown a level of genetic stability in ADHD traits across numerous age ranges, including very young children [ 10 , 11 ].
Our analyses concurred with these findings, as we found that genetic effects at age 2 are largely shared with those acting at age 3. The suggestion here is that the genetic variation that influences variance in ADHD scores at age 2 will, for the most part, be the same as that acting at age 3. Having said that, unique effects of additive genetics at age 3 are significant, so while there is substantial genetic continuity across ages, emerging effects cannot be ignored. Unfortunately, a limitation of this study was the limited power to assess sex × gene interaction effects in the quantitative analysis. This is an interesting area of research and one that should be considered in future research with more powerful samples, although at present there is little evidence for gene × sex interaction, at least in symptoms of overactivity [ 22 ]. Given the results from our quantitative analysis, it is interesting to consider the results of our molecular genetic analyses. At age 2, we found modest, nominally significant (p < 0.05) associations with four variants ( DAT1 3'UTR VNTR, rs11568324, rs3785157 and rs998424). Although there were some associations in common at age 3 ( DAT1 3'UTR VNTR and rs3785157), the association between ADHD scores and rs11568324 at age 2 did not replicate at age 3. Further, age-3-specific associations were observed with the DRD4 exon 3 VNTR and one SNP in 5-HTT (rs11080121), findings that are consistent with our quantitative genetic results. Although suggestive at this stage, these findings highlight problems of age-specific genotypic effects that may occur in demographically heterogeneous samples. We may speculate that these differences in genetic association are due to new effects emerging at age 3, implying developmental specificity in which the phenotypic consequences of DNA polymorphisms are effectively masked until a particular developmental stage is reached. There are, however, alternative explanations.
It might be that subtle differences in ratings between ages cause some manner of spurious association at either age independently, an issue that relates largely to the power of the sample and increases the chance of type I and II errors. In any case, from our analyses it is apparent that there are age-specific effects of genotype on ADHD symptom scores, and this is thus a factor that should be considered in genetic studies. An interesting comparison to be drawn is one between this study and an analysis carried out by Mill et al. [ 9 ], who conducted a similar analysis in a population-based twin sample. Although they used a composite measure of ADHD symptom scores across 2, 3, 4 and 7 years for the main analysis, they also reported some individual time-point data. DAT1 was found to be associated with ADHD symptoms at ages 2 and 3, and our report therefore serves as a replication of these findings. A further point for discussion is the observed difference between the AT and AW tests of association. At age 2, rs3785157 and rs998424 were significantly associated (nominal p < 0.05) only in the AW test. Given the increased power of the AT test to detect association in the absence of stratification, these results may be surprising, and may reflect between-family differences in child ratings. We are, however, unable to attribute this observation to any stratification effects because of a non-significant finding in the AP test. This raises issues regarding the power of the sample to detect stratification and makes it difficult to conclude that there are in fact any significant differences between the between- and within-family components of association. However, of interest is that at age 3, larger discrepancies in the effects of these two markers were observed between the AT and AW tests, an observation that is borne out in the AP test, which displays significant evidence of stratification.
This phenomenon is also seen for associations with the DAT1 3'UTR VNTR and the DRD4 VNTR at age 3, where there is a decrease in p-value in the AW compared to the AT test, albeit with no significant difference in the AP test. Taken together, we conclude that there is evidence for stratification effects, an observation that is not unique to this study [ 9 ] and which may reflect between-family differences in rating styles. In particular, it is interesting to note that the pattern of DAT1 3'UTR VNTR associations in this study is the same as that observed by Mill et al. [ 9 ]. Both studies display greater significance for the AT than the AW test at age 2, with the reverse effect at age 3. The suggestion is, therefore, that there may be new stratification effects emerging at age 3 that could contribute to the observed age-specific genotypic effects. A major limitation of this study is the power of the sample to detect genetic association, especially if we consider convincing levels of significance to be in the order of p < 5 × 10 -7 [ 25 ]. Using the genetic power calculator http://pngu.mgh.harvard.edu/~purcell/gpc/ we estimated that the sample had 47% power to detect a QTL affecting 1% of the phenotypic variance and 71% power to detect a 5% QTL. Despite being underpowered, we detected nominal significance for a number of polymorphisms at ages 2 and 3, and although we cannot rule out the possibility of false positives, the study serves as a proof of principle, in that age-specific effects of genotype on behavioural measures are an issue to be addressed, especially in underpowered samples. In this study we investigated the genetic relationship between ADHD symptom scores at age 2 and age 3. Although we found that the majority of genetic effects were shared across ages, there was also some age-specificity. These inferences were borne out in the molecular genetic analyses, whereby some associations seen at age 2 replicated at age 3.
However, some observed associations were age-specific, which highlights this issue as an important one to consider in genetic association studies.
Conclusions This report indicates that although the majority of genetic effects on ADHD symptom scores at age 2 are stable through to age 3, there remain significant emerging effects. As well as enabling us to better understand how genes contribute to the aetiology of ADHD, the report also serves to highlight the importance of demographic homogeneity in molecular genetic studies.
Background A twin study design was used to assess the degree to which additive genetic variance influences ADHD symptom scores across two ages during infancy. A further objective of the study was to observe whether genetic association with a number of candidate markers reflects the results of the quantitative genetic analysis. Method We studied 312 twin pairs at two time points, age 2 and age 3. A composite measure of ADHD symptoms derived from two parent-rating scales, the Child Behavior Checklist/1.5 - 5 years (CBCL) hyperactivity scale and the Revised Rutter Parent Scale for Preschool Children (RRPSPC), was used for both quantitative and molecular genetic analyses. Results At ages 2 and 3 ADHD symptoms are highly heritable ( h 2 = 0.79 and 0.78, respectively), with a high level of genetic stability across these ages. However, we also observe a significant level of genetic change from age 2 to age 3. There are modest influences of non-shared environment at each age independently ( e 2 = 0.22 and 0.21, respectively), with these influences being largely age-specific. In addition, we find modest association signals in DAT1 and NET1 at both ages, along with suggestive age-3-specific effects of 5-HTT and DRD4 . Conclusions ADHD symptoms are heritable at ages 2 and 3. Additive genetic variance is largely shared across these ages, although there are significant new effects emerging at age 3. Results from our genetic association analysis reflect these levels of stability and change and, more generally, suggest a requirement for consideration of age-specific genotypic effects in future molecular studies.
Conflict of interests The authors declare that they have no competing interests. Authors' contributions NI carried out the VNTR genotyping, data analysis, and interpretation and drafted the manuscript. KS designed the study, carried out data collection, helped with interpretation and helped draft the manuscript. PA helped with interpretation and helped draft the manuscript. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-244X/10/102/prepub
Acknowledgements The BUTP is supported by grant MH062375 from the National Institute of Mental Health.
CC BY
BMC Psychiatry. 2010 Dec 1; 10:102
PMC3014906
21122100
Background When Europeans arrived in Brazil in 1500, they found more than two million Amerindians [ 1 ], many of them inhabiting the eastern part of the country. Five hundred years later, in the 2000 Brazilian census, there remained only 734 thousand Amerindians in Brazil, almost all of them living in the northern (Amazon region) and western states. We know almost nothing about the genetic makeup of the once numerous Amerindian populations that lived in the eastern part of Brazil. Even the historical evidence that we have is meager, and limited to imperfect reports written by European scientific expeditions that came to Brazil early in the 19th century [ 2 ]. One of the best known eastern Brazilian Amerindian nations was the Botocudos, a hunter-gatherer group that is mentioned in Darwin's The Descent of Man . The names this tribe used for themselves were 'Gren' or 'Kren'; the name 'Botocudos' (by which they are generally referred to) was given to them by the Portuguese because they inserted into their lower lips and earlobes wooden disks similar to the corks of wine casks ( botoques ) used in Portugal. The Botocudos belonged to the Macro-Je linguistic group, and inhabited adjacent regions in the states of Minas Gerais, Bahia and Espírito Santo, in southeast Brazil [ 3 ]. We do not have information on whether they were linguistically, culturally and genetically homogeneous, or whether all they shared were their physical appearance, their ornaments and their hunter-gatherer lifestyle. In 1808, the Portuguese royal court moved to Brazil, fleeing from the Napoleonic invasion of the Iberian Peninsula. Soon afterwards, Prince Regent João, acting on reports about the Botocudo savagery and their refusal to submit to European rule, declared war on their nation, a policy that eventually led to their virtual extinction [ 4 ]. Nowadays, their only descendants are a very small group (< 500 individuals) of Krenak Indians, who are considerably admixed with other Indian groups [ 3 ].
In 2000, Alves-Silva et al. [ 5 ] reported that approximately one-third of the mitochondrial DNA (mtDNA) lineages of self-identified 'white' cosmopolitan Brazilians had an Amerindian origin. Because the population of Brazil is approximately 190 million inhabitants, a naive extrapolation would lead us to expect the existence of roughly 60 million Brazilians carrying Amerindian mtDNA. If it were possible to study this DNA and ascertain the Amerindian group from which those individuals originated, it might also be possible to reconstitute the mtDNA haplotype profile of many extinct original native populations. We reasoned that the chances of success would be much improved if we studied extant populations that have always lived in small regions once inhabited by specific Amerindian nations. We have called this strategy 'homopatric targeting', a neologism made up of the Greek roots ομοιος ( homos ) meaning 'the same' and πατρίδα ( patrida ) meaning 'fatherland'. In this paper, we present the results of this technique applied to the rural population of Queixadinha, which is located in the northeast part of the state of Minas Gerais in Brazil, in what was, until the first part of the 19th century, the homeland of the now virtually extinct Botocudo nation of Amerindians [ 3 , 4 ]. This study led to the identification of some mtDNA haplotypes that had not hitherto been described in any other human population studied, and these are candidate Botocudo haplotypes. The presence of skeletons classified as Botocudos stored in the anthropological collection of the National Museum in Rio de Janeiro provided us with the material to investigate the validity of our approach. We studied teeth extracted from 14 ancient Botocudo skulls, identifying one haplotype that was present among the lineages observed in the extant individuals studied.
Thus, homopatric targeting emerges as a useful new phylogeographical strategy to study the peopling and colonization of the New World, especially when direct analysis of genetic material is not possible.
Methods Consent was obtained from all participants, and all DNA analyses were performed anonymously. The study was approved by the local ethics committees of all the institutions involved in sample collection. Extant populations The population studied has been described previously [ 28 ], and included 173 individuals from the rural community of Queixadinha (17.12° S; 41.42° W) in the Vale do Jequitinhonha region of the state of Minas Gerais in Brazil, occupied before the 19th Century by the Botocudo Amerindian nation. Three-generation pedigrees were obtained, and individuals who belonged to the same maternal lineages were removed from the study. Thus, 74 matrilineally unrelated individuals remained for further study of their mitochondrial ancestry. As controls, we analyzed DNA samples from 100 unrelated cosmopolitan individuals living in cities in the same macrogeographic region in the northeastern part of the state of Minas Gerais, obtained from paternity casework. RFLP analysis and selection of Amerindian mtDNA candidates Five amplified segments in the mtDNA coding region were analyzed by RFLP tests to type haplogroup-specific sites as follows: haplogroup A, + 663, Hae III; haplogroup C, -13259, Hinc II; haplogroup D, -5176, Alu I and haplogroup X, -1715, Dde I. Haplogroup B was identified using the 9 bp polymorphic deletion (region V, between COII and tRNA Lys ). All PCR amplifications and digestions were carried out according to previously described protocols [ 5 ]. mtDNA control region amplification and sequencing The nucleotide sequences of the HVSI and II of the control region of the mitochondrial DNA were determined for all individuals who had been typed as belonging to the Amerindian haplogroups A, B, C, D and X, and for those that possibly belonged to haplogroup M. The method used was direct sequencing from PCR products; both strands of each sample were sequenced and analyzed separately. 
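The haplogroup-calling logic of the RFLP screen above can be sketched as a simple rule table: each Amerindian haplogroup is flagged by the gain (+) or loss (-) of one restriction site, or by the region V 9-bp deletion for haplogroup B. The marker labels below are descriptive names invented for this sketch, not identifiers from the study.

```python
# (haplogroup -> diagnostic marker and expected state): True means the
# restriction site (or deletion) is present, False means it is absent.
RULES = {
    "A": ("HaeIII_663", True),       # +663 Hae III
    "B": ("region_V_9bp_del", True),
    "C": ("HincII_13259", False),    # -13259 Hinc II
    "D": ("AluI_5176", False),       # -5176 Alu I
    "X": ("DdeI_1715", False),       # -1715 Dde I
}

def assign_haplogroups(tests):
    """tests maps marker label -> observed presence/absence; returns the
    list of haplogroups whose diagnostic pattern is matched."""
    return [hg for hg, (marker, state) in RULES.items()
            if tests.get(marker) == state]

# A sample cutting at 663 with all other sites intact types as haplogroup A:
calls = assign_haplogroups({"HaeIII_663": True, "region_V_9bp_del": False,
                            "HincII_13259": True, "AluI_5176": True,
                            "DdeI_1715": True})
```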
Subsequently, the sequences were compared with the reference sequence of human mitochondrial DNA [ 19 , 29 ], and the mutations (or polymorphisms) characteristic of each lineage and each individual were identified. The initial amplifications via PCR were performed using two pairs of specific primers (Table 3 ). The PCR assays were as previously described [ 5 ]. Two sequencing methods were used, on two different sequencers. For the reactions using the automated laser fluorescence sequencer (Pharmacia, Uppsala, Sweden), amplified segments were purified (Wizard™ PCR Preps Kit; Promega BioSciences, Sunnyvale, CA, USA) and around 300-400 ng of sample were used in the sequencing reactions with a commercial fluorescence label kit (Thermo Sequenase™ Primer Cycle Sequencing Kit with 7-deaza-dGTP; Amersham Life Sciences, Amersham, Buckinghamshire, UK). In these sequencing reactions, fluorescein-labeled primers were used (Table 3 ). For sequencing reactions performed on the automated capillary sequencer (MegaBACE 1000; GE Healthcare, USA), around 40 to 50 ng of PCR product were mixed with 10 μM of primer (MiL15996 for direct sequencing of the HVSI and MiL16401 for the reverse, with the M13-universal and M13-reverse primers for the direct and reverse strands of HVSII) and 4 μl of reagent from a dye terminator kit (DYEnamic™ ET dye terminator kit; Amersham Pharmacia Biotech), making a total volume of 10 μl. The mixture underwent PCR, and the sequencing products were purified. Filtering of artificial mutations generated in the sequencing process To avoid the presence of phantom mutations generated artificially in the sequencing process, which might cause erroneous evolutionary interpretations, we followed the strategies described by Bandelt et al. [ 30 ]. In addition, the direct and reverse strands of all the sequences were analyzed separately, and all the mutations encountered were carefully verified.
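The comparison against the reference sequence amounts to listing the positions at which the sample base differs from the reference, in the usual position-plus-base notation for the mtDNA control region. The sequences below are short hypothetical fragments, not real reference or sample data.

```python
def list_variants(reference, sample, start=16024):
    """Report mismatches between an aligned sample read and the
    reference as e.g. '16029T' (position, then the sample base)."""
    return [f"{start + i}{s}"
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

ref = "CTGTTCTTTCATGG"   # hypothetical reference fragment starting at 16024
smp = "CTGTTTTTTCATAG"   # hypothetical sample read of the same region
variants = list_variants(ref, smp)
```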
The sequence files generated were analyzed together with the respective chromatograms. mtDNA minisequencing Based on the results obtained from the phylogenetic analyses of the Amerindian sequences, some haplotypes were selected as possible genetic signatures of indigenous populations of the region. To verify the presence of recently described polymorphisms in the coding region of mtDNA that can better distinguish between Asian and Amerindian haplogroups [ 12 , 16 , 17 ], we developed a minisequencing protocol [ 18 ]. Thus, we were able to avoid the need for complete sequencing of the mtDNAs, which would have resulted in an excessively high cost for the project. In total, 13 polymorphisms were analyzed: three from haplogroup A, six from B, two from C and two from D (see Additional file 4 ). The primers used to amplify the regions of interest were as described by Rieder et al. [ 31 ], and the average size of the products was 800 bp. The primers for the minisequencing itself were designed adjacent to the polymorphic sites, and tails of varying sizes, based on the M13 sequence of the plasmid pUC18, were added. It was thus possible to separate the 13 minisequencing products in a single run on the ALF automated sequencer. DNA extraction from the ancient samples For the analysis, 14 teeth were extracted from skulls classified as presumed Botocudo Indians (kindly provided by the Museu Nacional do Rio de Janeiro, Rio de Janeiro, Brazil; Table 2 ). All the samples were dated to the 19th century. Unfortunately, the classification was primarily geographical, and although improbable, we cannot rule out the possibility that individuals belonging to another Amerindian group were wrongly included in this one. The surfaces of the teeth were cleaned by soaking in 6% sodium hypochlorite for 15 minutes, then rinsed in double-distilled, ultraviolet (UV)-irradiated water. Each tooth was ground with a mortar and pestle until a fine-grained powder was obtained.
Samples (500 mg) of the powder were transferred into sterile 15 ml tubes, and the DNA was extracted as described previously [ 32 ]. Briefly, the powder was incubated in 10 ml 0.45 M EDTA and 0.25 mg/ml proteinase K (pH 8.0) in a rotary oven in the dark at room temperature for 24 hours. Remnant tissue was removed by centrifugation (3,000 rpm, 1200 × g) and the supernatant transferred into 40 ml binding buffer (5 M guanidinium isothiocyanate, 25 mM NaCl, 50 mM Tris) in a 50 ml sterile tube, and incubated with 100 μl silica suspension (pH adjusted to 4.0 by adding 37% w/v HCl) for 3 hours in a rotary oven in the dark at room temperature. The silica was collected by centrifugation (3,000 rpm) and washed once with 1 ml binding buffer. The buffer-silica suspension was transferred into a fresh 1.5 ml tube, separated by centrifugation (13,500 rpm) and washed twice with washing buffer (50% v/v ethanol, 125 mM NaCl, 10 mM Tris and 1 mM EDTA, pH 8.0). DNA was eluted into two tubes, each containing 50 μl Tris-EDTA buffer (pH 8.0) at room temperature. The eluates were separated into aliquots and stored at -20°C. A negative extraction control, to which no tooth powder was added, accompanied each sample extraction (mock extraction). PCR and sequencing of the ancient samples The mtDNA analyses were performed by DNA sequencing of the HVSI of the control region and by RFLP typing of specific sites in the coding region of mtDNA. For these analyses, two primer pairs were designed to amplify overlapping fragments that divided the region into two smaller amplicons: fragment 1 (nucleotides 15989 to 16251) and fragment 2 (16190 to 16410). For some samples, it was necessary to divide fragment 2 into two smaller amplicons (16190 to 16322 and 16268 to 16410) (see Additional file 5 for primer sequences). PCR conditions were the same for all reactions.
A sample (3 μl) of the aliquot containing the ancient DNA (not quantified) was amplified in a 20 μl reaction volume containing 0.2 mM of each dNTP, 0.3 μM of each primer, 2.5 mM MgCl 2 and 2 U Taq DNA polymerase ( Taq Platinum; Invitrogen, Carlsbad, CA, USA) in a buffer (20 mM Tris-HCl pH 8.4 and 50 mM KCl). The cycling parameters were: initial denaturation at 94°C for 4 minutes, followed by 44 cycles of 94°C for 30 seconds, 60°C for 30 seconds and 72°C for 30 seconds, with a final extension step at 72°C for 10 minutes. To confirm amplification, the PCR products (3 μl) were separated in 6% polyacrylamide gels (PAGE) and stained with silver salts, and the products were precipitated with linear polyacrylamide (LPA) and polyethylene glycol (PEG) 8000. Sequencing reactions were performed on each strand, using the same primers as for PCR amplification. The sequencing reaction products were cleaned up and then run on an automated sequencer (MegaBace 1000; GE Healthcare), using the same conditions as described above for the modern samples. Sequencing was performed in both directions (forward and reverse). The sequence files generated were analyzed together with the respective chromatograms using BioEdit v.7.0.9 software (Ibis Biosciences, Carlsbad, CA, USA). To monitor for the possible presence of phantom mutations generated artificially in the sequencing process, we followed the strategies described by Bandelt et al. [ 30 ]. Four primer pairs (see Additional file 6 ) for shorter amplicons were designed for RFLP analysis of the four Amerindian haplogroup-specific sites in the coding region of the mtDNA (haplogroup A, + 663, Hae III; haplogroup B, 9 bp deletion (COII/tRNA Lys ); haplogroup C, + 13262, Alu I; and haplogroup D, -5176, Alu I). The PCR conditions were as described above. The PCR amplification product (3 μl) was digested for 2 hours at 37°C with 1 U of the appropriate restriction enzyme.
The digestion products were separated in 6% PAGE gels, which were stained with silver for identification of the presence or absence of the restriction sites that characterize haplogroups A, C or D and of the 9 bp deletion that defines haplogroup B. Contamination prevention The extractions of ancient DNA and the PCR assays were performed in a physically separated laboratory, in which no work with amplified DNA had ever been performed previously. The bench was irradiated with UV lamps (254 nm) for 30 minutes before all experiments, and cleaned with a high concentration of sodium hypochlorite. All apparel (gloves, face masks, caps and laboratory coats) was disposable. Laboratory equipment (pipettes, tubes, filter tips, centrifuges) was sterilized by long exposure to UV (254 nm). All metallic material and laboratory glassware were sterilized in an oven at 200°C for at least 6 hours. Preparation of ground tooth powder was performed in a room separate from those used for buffer preparation, the DNA extraction procedure and the PCR assays. To detect possible contamination by exogenous modern DNA, extraction and amplification blanks were used as negative controls, and all personnel involved, either directly or indirectly, in the work were genetically typed (HVSI) and their profiles compared with the results obtained from the ancient teeth samples. Mitochondrial sequence analyses We manually compared our sequences with 5,133 HVSI mtDNA sequences from North, Central and South America (see Additional file 2 ). We also compared our sequences with others available only in selected public DNA sequence databases (EMPOP http://empop.org , Ambase http://www.lghm.ufpa.br/ambase , mtDB http://www.genpat.uu.se/mtDB/ , mitosearch http://www.mitosearch.org , hvrbase++ http://www.hvrbase.org ), and with the FBI mtDNA population database http://www2.fbi.gov/hq/lab/fsc/backissu/april2002/miller1.htm . Our database search results were up to date as of July 2010.
To compare and understand the relationships between the sequences found in the Queixadinha and Botocudo populations, we also performed a median-joining network analysis using the software Network 4.502 [ 33 ]. Data analysis The program CLUMP [ 34 ] was used to perform the χ² tests. The Raymond and Rousset test of population differentiation and estimates of haplotype and nucleotide diversity were calculated with the program Arlequin version 2.000 [ 35 ].
Results Study of the variability of haplotypes and lineages of Amerindian mtDNA from populations of Minas Gerais The population studied comprised 173 individuals from the rural community of Queixadinha (termed QUEIX hereafter) in the Vale do Jequitinhonha region of the state of Minas Gerais in Brazil, occupied before the 19th Century by the Botocudo Amerindian nation. Three-generation pedigrees were obtained, and individuals who belonged to the same maternal lineages were removed from the study. Thus, of the original 173 samples, we included 74 matrilineally unrelated individuals in the study. We investigated their mitochondrial ancestry by standard restriction fragment length polymorphism (RFLP) for haplogroups A, B, C, D, X and M, as described previously [ 6 - 10 ]. In total, 20 probable Amerindian matrilineal lineages were identified (27.0%), classified as follows: 14 of haplogroup C (70.0%), four of haplogroup B (20.0%) and two of haplogroup D (10.0%) (Table 1 ). No matrilineage belonged to haplogroup A or M, or to any Amerindian X lineage. Likewise, of the 100 cosmopolitan unrelated samples from northeastern Minas Gerais (MGNE), we identified 24 (24%) as belonging to Amerindian haplogroups: nine of haplogroup A (37.5%), seven of haplogroup B (29.2%) and four each of haplogroups C and D (16.7% each) (data not shown). Again, no matrilineage belonged to haplogroup M or Amerindian X. The frequencies of 27.0% of Amerindian lineages in the rural population (QUEIX) and 24.0% in the cosmopolitan population (MGNE) are commensurate with the findings of Alves-Silva et al. [ 5 ] in Brazilians. However, the discrepancy in the relative haplogroup frequencies was puzzling. In QUEIX, the prevalence of haplogroup C was high (70.0%), whereas haplogroup A was absent. By contrast, in MGNE, haplogroup A was predominant (37.5%), whereas haplogroup C was present in a more modest proportion (16.7%), in general concordance with our previous results for the whole of Brazil [ 5 ].
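The haplogroup counts just given (QUEIX: 0 A, 4 B, 14 C, 2 D; MGNE: 9 A, 7 B, 4 C, 4 D) can be checked with a standard contingency-table test. The study itself used CLUMP, which assesses significance by Monte Carlo simulation; an ordinary asymptotic χ² on the same 2 × 4 table, sketched below, already reproduces the reported statistic of 15.8.

```python
from scipy.stats import chi2_contingency

# Amerindian haplogroup counts (A, B, C, D) taken from the text.
queix = [0, 4, 14, 2]   # rural Queixadinha sample (n = 20)
mgne = [9, 7, 4, 4]     # cosmopolitan control sample (n = 24)

chi2, p, dof, expected = chi2_contingency([queix, mgne], correction=False)
print(round(chi2, 1), dof)  # 15.8 with 3 degrees of freedom
```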
The difference in haplogroup distribution between these two regions was highly significant (χ² = 15.8; P < 0.001). In the 20 mtDNA HVSI sequences obtained from the QUEIX samples, we identified 13 different haplotypes, the sequences of which (318 bp, from 16045 to 16362) are shown in Table 1 . The founder haplotypes of haplogroups B, C and D [ 8 , 11 ] were all present in this study, represented by haplotypes MG11, MG27 and MG37. For all samples of haplogroup C, we sequenced hypervariable segment II (HVSII) and confirmed the presence of the other polymorphisms characteristic of this haplogroup [ 12 ]. The haplotype diversity of the Amerindian QUEIX samples was 0.9263 ± 0.0431, lower than that of the cosmopolitan Amerindian lineages in MGNE, which was 0.9746 ± 0.020, similar to that of the Amerindian lineages of the white population of Brazil (0.9780 ± 0.0083) [ 5 ]. Additionally, we used mtDNA HVSI haplotype frequencies to perform an exact test of population differentiation comparing the QUEIX sample and the control MGNE sample with data previously obtained for the north, northeast and south regions of Brazil (BR-SE) or for the southeast of Brazil (BRSE) [ 5 ]. The results showed that the MGNE, BR-SE and BRSE samples did not differ significantly from each other, but that QUEIX differed from all three, and this difference was highly significant (see Additional file 1 ). Phylogeographic and comparative study of Amerindian lineages We could not find any instance of haplogroup B lineages MG18, MG22, MG23 or MG24 in available databases or in the literature (see Additional files 2 and 3 ). One interesting mutation is the transition T→C at position 16178, found in all four of these B lineages. As far as we could determine, this mutation has only been identified previously in two other individuals, both from urban contemporary populations in the south and southeast of Brazil [ 5 ].
This transition might conceivably be considered a marker for the identification of Amerindian lineages of extinct populations from Brazil. From the lineages of haplogroup B, we selected MG18 and MG24 for minisequencing. Similarly, we could not find any previous description of haplogroup D lineage MG39 (see Additional files 2 and 3 ). This haplotype presents a transition at position 16278 that has not yet been described in any other native population analyzed to date, but we also found it in two sequences of the D haplogroup in the control cosmopolitan MGNE sample (data not shown). We selected the haplotype MG39 for minisequencing. Of the haplotypes identified in the QUEIX sample, 70% belonged to haplogroup C. The modal haplotype was MG33, found in five individuals, which we did not find in any databases or in the literature, and which is possibly typical of the region (see Additional files 2 and 3 ). It is characterized by transitions at nucleotides 16166, 16224, 16260 and 16356, associated with the known markers of haplogroup C (16223, 16298, 16325 and 16327). Thus, haplotype MG33 was submitted for further phylogenetic analysis via minisequencing. Haplotypes MG30, MG31 and MG34 were also not encountered in any database or in the literature after extensive searches, as described in Methods (see Additional files 2 and 3 ). Of special interest in this group was haplotype MG30, which was found in three individuals from Queixadinha. With the exception of the transition at position 16051, which is common in several native American populations [ 13 - 15 ] and was present in another haplotype (MG31), the transitions (16217 and 16287) that characterize MG30 have not yet been identified in any other native American population, and so this haplotype was selected for minisequencing.
Another haplotype of interest submitted for minisequencing was MG34, which was exclusive to the QUEIX sample and was separated from the founding haplotype by three transitions (at positions 16205, 16311 and 16327) and one transversion (16113), none of which has been described previously in the literature. Minisequencing Because the two hypervariable segments (HVSI and HVSII) by themselves cannot generate a precise haplogroup classification, owing to the frequent occurrence of recurrent mutations, we established a strategy for confirmation of the Amerindian origin of the samples selected as possible genetic signatures of the extinct indigenous populations that previously inhabited the region. Studies on complete mtDNA sequences have allowed the identification of polymorphisms present in coding regions that can differentiate between the ancestral Asian haplogroups and their Amerindian descendants [ 12 , 16 , 17 ]. The minisequencing approach [ 18 ] allowed the allocation of MG18 and MG24 to Amerindian haplogroup B2, and of MG39 to Amerindian haplogroup D1, by determining the presence of polymorphisms at positions 11177, 3547, 4977, 6473 and 9950 for haplogroup B2 and at position 2092 for haplogroup D1 (see Additional file 4 ). Mutations at 15487 and 14318 demonstrated that MG30, MG33 and MG34 belonged to haplogroup C, and the presence in all of them of HVSI 16325C (Table 1 ) specified Amerindian haplogroup C1. Results on mtDNA extracted from ancient teeth thought to be Botocudo We analyzed teeth extracted from 14 skulls in the collection of the Museu Nacional do Rio de Janeiro, which are thought to be remains of Botocudo Amerindians (Table 2 ). By sequencing two smaller overlapping fragments of HVSI, we obtained sequences 318 bp in length (extending from nucleotides 16045 to 16362 of the Cambridge Reference Sequence [ 19 ]) in both directions from mtDNA isolated from these teeth.
Twelve of the HVSI samples contained transitions that are characteristic of the Native American C haplogroup (16223 C→T, 16298 T→C, 16325 T→C and 16327 C→T) [ 8 , 20 ]. The other two haplotypes contained substitutions at nucleotides 16189 T→C and 16217 T→C and were therefore initially classified as haplogroup B [ 8 , 20 ]. These two samples belonging to haplogroup B will be described in detail in a forthcoming publication (V.F. Gonçalves, F.C. Parra, H. Gonçalves-Dornelas, C. Rodrigues-Carvalho, H.P. Silva and Sergio D.J. Pena, in preparation). Notably, we did not find among the teeth any instance of the two other major Native American haplogroups, A and D. The 14 HVSI sequences recovered from the ancient teeth comprised six haplotypes defined by 15 different polymorphic sites. Gene diversity was calculated as only 0.8352 ± 0.0617 (even lower than that of the QUEIX samples), and nucleotide diversity was estimated at 0.014687 ± 0.008591. A haplotype network of the sequences found in the QUEIX and the Botocudo teeth samples is shown in Figure 1 . Our most significant finding was that the HVSI sequence Bot04, found in four individuals, was identical to MG31 (Table 1 ). We could not find any description of this haplotype in the literature, even after extensive searches. The founding haplotype of the Native American haplogroup C was present in three skulls, coded as Bot01 (Figure 1 ). This haplotype is widespread in Native American populations [ 21 , 22 ], and was also found in two individuals in our cosmopolitan MGNE control sample (data not shown). The HVSI sequence of the Bot02 haplotype (found in a single skull) is characterized by a transition at nucleotide 16129 (G→A) in addition to the aforementioned specific markers for haplogroup C. This lineage is shared with individuals of other South American populations, such as the Guahibo from Venezuela [ 23 ], the Marajó, Trombetas and Santarém from the Brazilian Amazon region [ 24 ], and populations from Chile [ 25 ].
Haplotype Bot03 (Table 1 ) was seen in four individuals (two from the Mutum area in the state of Espirito Santo, and the other two from Minas Gerais, one from the valley of the Mucuri river and the other from the Doce river valley). We could not find any description of this haplotype or of Bot04 (mentioned above) in the literature, even after extensive searches (see Methods).
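The gene (haplotype) diversity reported above for the 14 ancient sequences can be recovered with Nei's unbiased estimator, H = n/(n-1) × (1 - Σ p_i²), which is the statistic Arlequin computes. The per-haplotype counts below (Bot01: 3, Bot02: 1, Bot03: 4, Bot04: 4, plus the two haplogroup B teeth as singletons) are our inference from the text, not an explicit tabulation in the paper.

```python
def gene_diversity(counts):
    """Nei's unbiased gene diversity: H = n/(n-1) * (1 - sum of squared
    haplotype frequencies), where counts are per-haplotype sample counts."""
    n = sum(counts)
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts))

# Haplotype counts inferred from the text for the 14 Botocudo teeth.
botocudo = [3, 1, 4, 4, 1, 1]
print(round(gene_diversity(botocudo), 4))  # 0.8352, matching the reported value
```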
Discussion The Brazilian population is the product of genetic admixture between three ancestral groups: native Amerindians, European colonizers and African slaves. We have previously demonstrated that the vast majority (98%) of the Y-chromosome lineages found in the contemporary Brazilian population, independent of geographical region, are of European origin [ 26 ]. By contrast, mtDNA throughout Brazil has a much more uniform distribution of geographical origin, with European lineages making up 39%, Amerindian 33% and African 28%, and presents regional differences that correlate with the history of colonization of each region [ 5 ]. Together, these results reveal a sexually asymmetrical pattern of reproduction, with the male contribution being mostly European and the maternal contribution being mainly Amerindian and African. Extrapolating from this to the current population of Brazil, which is approximately 190 million inhabitants, we would expect the existence of roughly 60 million Brazilians carrying Amerindian mtDNA. If we could study this DNA and ascertain the Amerindian group from which the individuals originated, it might be possible to reconstitute the mtDNA haplotype profile of many extinct original native populations. Our strategy, which we propose to call homopatric targeting, is to concentrate the searches in extant populations that have always lived in small regions once inhabited by specific Amerindian nations. In this way, we could explore the mtDNA lineages of population groups that no longer exist. The Botocudos present interesting craniometric characteristics that distinguish them from the vast majority of other Amerindians, and suggest that they might conceivably be related to the Paleoindians from the Lagoa Santa region in Brazil [ 27 ].
We studied DNA samples from the population of Queixadinha, a rural community in the Jequitinhonha valley (QUEIX), and used as controls 100 DNA samples from residents of cities also in the northeastern region of Minas Gerais (MGNE), an area that includes the valleys of the Jequitinhonha, Mucuri and Doce Rivers, together with data from cosmopolitan centers of Brazil [ 5 ]. We found 13 different Amerindian mtDNA haplotypes in samples from Queixadinha, nine of which could not be found in available databases or in the literature after extensive searches (see Methods; see Additional files 2 and 3 ). We believe that they are most likely of Botocudo origin, for two reasons. First, the local population is largely indigenous to the geographical region; it is an arid and poor area, attracting practically no migrants. The continuously low population size and reproductive isolation are reflected in a lower mtDNA haplotype diversity compared with the cosmopolitan population of surrounding cities. Second, the only Amerindian inhabitants of the region were the Botocudos, who were sufficiently ferocious to keep all other groups at bay. At first sight, one might expect that haplotypes MG30 and MG33, found respectively in three and five apparently non-matrilineally related individuals of Queixadinha, would have been especially frequent among the Botocudos. Nevertheless, because of genetic drift, there is no compelling reason to believe that present-day haplotype frequencies reflect the ancient relative abundance of these haplotypes. Of course, the Botocudo origin of the Amerindian mtDNA haplotypes identified by our homopatric targeting is only inferred, not proven. However, it constitutes a concrete hypothesis that we could test by analysis of mtDNA extracted from ancient remains of presumed Botocudo skulls in the Museu Nacional do Rio de Janeiro. We sequenced the HVSI of DNA extracted from ancient teeth of 14 Botocudo skulls, and obtained six different mtDNA haplotypes.
Of these six haplotypes, four were classified as Amerindian haplogroup C. Although these are small numbers, they suggest that the Botocudos indeed had an excess of Amerindian haplogroup C lineages, possibly as a consequence of several founder effects and/or narrow bottlenecks occurring in the past of this population of hunter-gatherers. This scenario is also supported by the low genetic diversity (0.835) found in the Botocudos. Likewise, we suggest that the absence of Amerindian haplogroup A in both the Queixadinha and the Botocudo samples is probably also a consequence of genetic drift. By contrast, the absence of Amerindian D lineages in the Botocudo sample gene pool was not unexpected, because this haplogroup represents only 10% of the Amerindian matrilineages in QUEIX. The analysis of the distribution of the ancient Botocudo HVSI haplotypes among 5,133 Amerindian haplotypes (in addition to the public mtDNA sequence databases) showed that, except for one individual from Zona da Mata, the Bot04 haplotype is present exclusively in the QUEIX population. This result is sufficient to validate our 'homopatric targeting' strategy. In conclusion, our success in using the present-day population to retrieve the genetic lineages of peoples who are now extinct opens up an important pathway towards the reconstitution of our history, especially in cases in which direct analysis of the specific genetic material has become impossible.
Background Brazilian Amerindians have experienced a drastic population decrease in the past 500 years. Indeed, many native groups from eastern Brazil have vanished. However, their mitochondrial DNA (mtDNA) haplotypes still persist in Brazilians, at least 50 million of whom carry Amerindian mitochondrial lineages. Our objective was to test whether, by analyzing extant rural populations from regions anciently occupied by specific Amerindian groups, we could identify potentially authentic mitochondrial lineages, a strategy we have named 'homopatric targeting'. Results We studied 173 individuals from Queixadinha, a small village located in a territory previously occupied by the now extinct Botocudo Amerindian nation. Pedigree analysis revealed 74 unrelated matrilineages, which were screened for Amerindian mtDNA lineages by restriction fragment length polymorphism. A cosmopolitan control group was composed of 100 individuals from surrounding cities. All Amerindian lineages identified had their hypervariable segment I (HVSI) sequenced, yielding 13 Amerindian haplotypes in Queixadinha, nine of which were not present in available databanks or in the literature. Among these haplotypes, there was a significant excess of haplogroup C lineages (70%) and an absence of haplogroup A lineages, which were the most common in the control group. The novelty of the haplotypes and the excess of haplogroup C suggested that we might indeed have identified Botocudo lineages. To validate our strategy, we studied teeth extracted from 14 ancient skulls of Botocudo Amerindians from the collection of the National Museum of Rio de Janeiro. We recovered mtDNA sequences from all the teeth, identifying only six different haplotypes (a low haplotypic diversity of 0.8352 ± 0.0617), one of which was present among the lineages observed in the extant individuals studied.
Conclusions These findings validate the technique of homopatric targeting as a useful new strategy to study the peopling and colonization of the New World, especially when direct analysis of genetic material is not possible.
Competing interests The authors declare that they have no competing interests. Authors' contributions VFG and FP carried out molecular genetic studies, participated in the data analysis, and drafted the manuscript. HGD carried out molecular genetic studies. CRC and HPS identified the skulls in the museum collection, and extracted and provided the teeth for DNA analyses. SDJP conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript. Supplementary Material
Acknowledgements We are grateful to Dr José Roberto Lambertucci, Roberto C. Amado and Carlos M. Antunes (Queixadinha Project, Universidade Federal de Minas Gerais) for the kind donation of blood samples from the Queixadinha population. We thank Dr Claudia B. Carvalho (Departamento de Bioquímica e Imunologia of Universidade Federal de Minas Gerais) for developing the minisequencing system. Dr Marcel Giovanni Costa França and Dr Queila Souza Garcia (Departamento de Botânica of Universidade Federal de Minas Gerais) kindly provided access to their physical facilities. Neuza A. Rodrigues and Kátia Barroso provided expert technical assistance. This work was supported by grants from CNPq of Brazil.
Investig Genet. 2010 Dec 1; 1:13
PMC3014907
21176206
Background The majority of patients with peritoneal carcinomatosis (PC) from colorectal cancer present with unresectable disease at the time of diagnosis. The morbidity and fatality of peritoneal disease in patients with colorectal cancer are significant and have recently become a focus of clinical outcomes research. In a recent multi-centre prospective study of 370 patients with PC from non-gynecological malignancies, patients with colorectal cancer survived a median time of 5.2 months [ 1 ]. Research protocols using palliative systemic chemotherapy for PC have been conducted with encouraging tumor response rates, but overall survival remains poor [ 2 , 3 ]. The reported median survival after systemic 5-Fluorouracil/Leucovorin (5FU/L)-based chemotherapy for PC of colorectal cancer can, under the best of circumstances, reach only 5.2 to 12.6 months [ 4 ]. Modern systemic therapy regimens with combinations of cytotoxic and biological agents appear promising in clinical trials, demonstrating improved tumor response rates over older regimens that ultimately translate into gains in both progression-free and overall survival in patients with metastatic colorectal cancer [ 5 - 10 ]. Nonetheless, the patient cohorts with Stage IV disease in these trials have failed to include patients with PC. The difficulty of including these patients results from the inability to image sub-centimetre peritoneal lesions and to assess tumor response by the RECIST criteria. Hence, strictly speaking, this subgroup of patients with Stage IV colorectal cancer is left without any measurable evidence of disease, and treatment response cannot be documented or monitored. Aggressive surgical therapy has been shown to be promising when combined with hyperthermic intraperitoneal chemoperfusion (HIPEC).
A multi-institutional registry study of 506 patients with PC of colorectal origin showed that a median survival of up to 32 months can be attained with this aggressive multi-modality treatment approach in patients with limited peritoneal surface disease who are able to undergo complete cytoreduction [ 11 ]. More recently, Elias et al reported a 5-year survival rate of 51% and a median survival of 63 months in patients with limited PC treated with oxaliplatin-based HIPEC [ 12 ]. The lack of specific data for patients with isolated PC represents a gap in the current literature. In the modern era of effective systemic chemotherapy, outcomes for this particular patient subset (limited PC of colorectal origin) need to be re-examined. Further, the considerable progress made with cytoreductive surgery (CS) and HIPEC in peritoneal carcinomatosis has not yet translated into routine clinical practice. Debate over the appropriateness of CS and HIPEC as a treatment strategy without concrete and replicable data from randomized trials, together with concerns over aggregate treatment-related morbidity and mortality ranging from 14% to 55% and 0% to 19%, respectively [ 4 ], has hampered the ability to reach a treatment consensus in the general oncology community. To evaluate the effectiveness of systemic chemotherapy, we report the results of a single-institution experience of systemic chemotherapy for PC from colorectal cancer, with stratification according to the peritoneal surface disease severity score (PSDSS) to elucidate stage-specific outcomes that may guide clinical treatment decisions for patient-specific delivery of therapy.
Methods Cohort Definition Between January 1, 1987 and December 31, 2006, patients with colorectal cancer treated at the University of Wuerzburg Medical Centre were identified from the Wuerzburg Institutional Database (WID). In our institution, the surgical peritoneal surface malignancies program (including debulking surgery and HIPEC) was initiated in September 2008. Patients were included if they had intraoperatively confirmed peritoneal carcinomatosis, either at the time of initial presentation or at the time of recurrence, with a histological diagnosis of tumor of colorectal origin. The exclusion criteria were peritoneal carcinomatosis of non-colorectal origin, death within 30 days after exploration, or more than three extra-abdominal metastases. Data Source The WID is a central data repository that is expanded prospectively on a daily basis with clinical, operative, and research data of patients who were evaluated and treated at the University of Wuerzburg Medical Centre. Data available within the WID include patient demographics, histological diagnoses that are based on International Classification of Diseases coding standards, physician and hospital billing data, inpatient admission and outpatient registration data, operating room procedures, laboratory results, and computerized pharmacy records. The WID undergoes continuous cross-platform integration with the Comprehensive Cancer Registry to ensure updated follow-up information for identification of deceased patients. Inpatient and outpatient records of all identified patients were reviewed retrospectively to extract information regarding the type and duration of chemotherapy, sites of metastatic disease at presentation and disease status at last follow-up.
Retrospective Peritoneal Surface Disease Severity Score (PSDSS) The retrospective PSDSS was estimated based on the three most important prognostic indicators: clinical symptoms, extent of carcinomatosis based on tumor burden (analog PCI), and tumor histopathology [ 13 ]. Each of these three categories was classified into three sub-categories based on the severity of each clinicopathological factor: 1. Clinical symptoms: none, mild (weight loss < 10% of body weight, mild abdominal pain, asymptomatic ascites) or severe (weight loss ≥ 10% of body weight, unremitting pain, bowel obstruction, symptomatic ascites). 2. Extent of carcinomatosis intraoperatively: limited (analog PCI < 10), moderate (analog PCI 10 to 20) or extensive (analog PCI > 20). 3. Tumor histopathology of the primary tumor: well to moderately differentiated without positive lymph nodes, moderately differentiated with positive lymph nodes, or poorly differentiated and/or signet ring (Table 1 ). These clinicopathological variables were derived from the patient's clinical presentation at the time of evaluation for treatment, radiological assessment of the extent of carcinomatosis, and the tumor histopathology. The PSDSS was scored as stages I to IV based on the summation of the points assigned to each of the three clinicopathological staging parameters, based on our clinical experience: PSDSS Stage I < 4; PSDSS Stage II = 4-7; PSDSS Stage III = 8-10; PSDSS Stage IV > 10. Follow-Up and Outcomes Treatment was grouped according to the type of systemic chemotherapy regimen: no chemotherapy (best supportive care), 5-Fluorouracil/Leucovorin (5FU/L), or modern chemotherapy (Oxaliplatin/Irinotecan-based) with or without biological agents (Bevacizumab/Cetuximab/Panitumumab). All patients were followed every 3 months. Helical contrast-enhanced computed tomography (CT) was performed every 6 months.
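The stage binning of the summed PSDSS score described above can be written as a small lookup. The per-category point values themselves are given in Table 1 and are not reproduced here; this sketch assumes only the stage cut-offs stated in the text.

```python
def psdss_stage(total_score):
    """Map a summed PSDSS score (clinical symptoms + extent of
    carcinomatosis + histopathology, points as per Table 1) to a stage:
    I < 4; II = 4-7; III = 8-10; IV > 10."""
    if total_score < 4:
        return "I"
    if total_score <= 7:
        return "II"
    if total_score <= 10:
        return "III"
    return "IV"
```

For example, a total score of 7 falls in stage II, while a score of 8 falls in stage III.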
Follow-up data were obtained from the referring physicians, phone calls and/or emails from the patients, or the cancer registry. All deaths in this study were disease-related, attributable to progressive colorectal cancer. The primary study endpoint was the time from diagnosis of peritoneal carcinomatosis to death (overall survival). Recorded follow-up data included patient status (alive with disease, alive without disease, or dead of disease). Statistics The data collected were analyzed using JMP software (JMP ® , Cary, NC, Version 7). Patient characteristics were reported using frequency and descriptive analyses. The Kaplan-Meier method was used to analyze survival. Univariate analysis (log-rank) was performed to determine the clinicopathological factors affecting survival, including the PSDSS stage. All factors correlating with outcome at p < 0.10 on univariate analysis were entered into a Cox proportional hazards regression model for multivariate analysis. The median time to death was defined as the time at which 50% of patients had died. P < 0.05 was considered statistically significant.
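The Kaplan-Meier median used as the endpoint above can be computed with a short routine. This is an illustrative sketch, not the JMP procedure used in the study; `times` are follow-up times in months and `events` is 1 for an observed death, 0 for censoring:

```python
from collections import defaultdict

def km_median(times, events):
    """Kaplan-Meier estimator: returns the first time at which the
    estimated survival function S(t) drops to 0.5 or below."""
    deaths, censored = defaultdict(int), defaultdict(int)
    for t, e in zip(times, events):
        (deaths if e else censored)[t] += 1
    at_risk, s = len(times), 1.0
    for t in sorted(set(times)):
        if deaths[t]:
            s *= 1 - deaths[t] / at_risk   # survival steps down at each death time
            if s <= 0.5:
                return t
        at_risk -= deaths[t] + censored[t]  # censored subjects leave the risk set
    return None  # median not reached
```

Censored patients reduce the number at risk without forcing the curve down, which is why a simple median of the raw follow-up times would be biased.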
Results Patient Characteristics One thousand nine hundred and twenty patients with colorectal cancer underwent a laparotomy during the study period. Peritoneal carcinomatosis was observed in 240 patients (13%); 98 patients (42%) at initial diagnosis and 142 patients (58%) at time of recurrence. Ten patients (2%) who died from surgical complications during the immediate postoperative period, eight patients (3%) who died prematurely of non-cancer-related causes, 20 patients (8%) with incomplete records in the database, and 35 patients (15%) with more than three extra-abdominal metastases were excluded from the study. In total, 167 patients formed the cohort of this study. The median age was 63 (range, 22 to 88) years. Sixty-four patients (38%) had isolated peritoneal carcinomatosis. Aside from peritoneal carcinomatosis, other sites of metastasis included the liver or lung in 67 patients (40%), and 36 patients (22%) had peritoneal carcinomatosis with bone or brain metastasis. The detailed patient characteristics are presented in Table 2 . Survival Analysis The median follow-up time from diagnosis of peritoneal carcinomatosis to last clinical follow-up was 8 (range, 1 to 112) months. At the time of analysis, 163 patients (98%) had died of disease and there were four survivors (2%) alive without disease. The median follow-up in these four survivors was 78 (range, 43 to 112) months. The overall median survival was 8 (95%CI 6 to 9) months, and the 3- and 5-year overall survival rates were 6% and 3%, respectively (Figure 1 ). Impact of Chemotherapy Treatment on Outcomes Eighty-three patients (50%) had no chemotherapy treatment and received best supportive care only. Forty-two patients (25%) received 5FU/L chemotherapy and forty-two patients (25%) received modern chemotherapy, of whom eight patients (5%) had biological agents in combination with modern chemotherapy. The median duration of chemotherapy treatment was 18 (range, 0 to 115) weeks.
The median survival was 5 (95%CI 3 to 7) months in patients receiving best supportive care, 11 (95%CI 6 to 15) months for patients treated with 5FU/L, and 12 (95%CI 4 to 20) months for patients treated with modern chemotherapy. The median survival differed significantly in patients who received chemotherapy versus those who received best supportive care (p = 0.026); however, outcomes did not differ between patients treated with 5FU/L or modern chemotherapy (p > 0.05) (Figure 2 ). Stratification According to the Retrospective PSDSS Six patients (4%) were scored as PSDSS Stage I, 53 patients (32%) as PSDSS Stage II, 33 patients (20%) as PSDSS Stage III and 75 patients (45%) as PSDSS Stage IV. The detailed treatment type in patients classified according to the PSDSS is shown in Table 3 . Treatment differed between the four PSDSS Stages (p = 0.02). Median survival differed stage-wise: 4 (95%CI 2.7 - 5.1) months for PSDSS Stage IV, 7 (95%CI 4.4 - 10.3) months for PSDSS Stage III, 19 (95%CI 13.8 - 24.1) months for PSDSS Stage II, and 39 (95%CI 34.2 - 42.4) months for PSDSS Stage I (p = 0.003) (Figure 3 ). The median survival of all patients with PSDSS Stage I/II was 22 (95%CI 14.2 - 26.7) months and for PSDSS Stage III/IV was 5 (95%CI 4.2 - 7.2) months (p < 0.001) (Figure 4 ). In the PSDSS Stage I/II patients (n = 59) who received best supportive care, the median survival was 16 (95%CI 12.8 - 24.0) months; for those who received 5FU/L, the median survival was 16 (95%CI 13.7 - 22.8) months; and for patients treated with modern systemic chemotherapy, the median survival was 28 (95%CI 17.1 - 38.2) months (p = 0.12) (Figure 5 ). For the subgroup of patients with isolated PC and PSDSS Stage I/II (n = 20), the median survival was 21 (95%CI 16.6 - 24.8) months and did not differ from that of the whole group. Analyses of overall survival from diagnosis of carcinomatosis to last follow-up, univariate and multivariate, are shown in Table 4 .
Discussion Cytoreductive surgery (CS) combined with intraoperative hyperthermic intraperitoneal chemotherapy (HIPEC) is a treatment option for selected patients with peritoneal carcinomatosis (PC) from colorectal cancer. There has been enormous interest in the literature in this multi-modality therapeutic approach for a disease that has been associated with poor outcome. Phase II studies have demonstrated that CS combined with HIPEC is associated with improved survival in patients with limited PC amenable to complete cytoreduction when compared with historical controls who were treated palliatively with systemic chemotherapy alone [ 14 ]. In 2004, a multi-institutional registry from 28 international treatment centres demonstrated a median survival of 19 months and a 3-year survival of 39% in 506 patients with CRPC who were treated with CS and HIPEC. These early outcomes are encouraging; however, treatment-related morbidity and mortality contribute to continued concern over the feasibility of this aggressive multi-modality approach [ 11 ]. With continued specialty-centre experience, the patient selection process has improved. A recently published consensus statement emphasized the critical importance of proper patient selection, to ensure that appropriately selected candidates receive and benefit from treatment, and that unsuitable candidates are not subjected to the morbidity of a procedure unlikely to improve their outcome [ 15 ]. By redefining and optimizing the patient selection process, treatment of patients with only limited PC has been shown to provide potentially curative oncological treatment. Elias et al. reported, in a comparative trial, a median survival of 62.7 months for patients with limited PC treated with CS and HIPEC, compared with a median survival of 23.9 months in patients treated with palliative surgery and systemic chemotherapy alone [ 12 ].
Although the survival results in this study reflect a highly selected group of patients, they support the concept that CS/HIPEC is a potentially curative treatment strategy and that, if performed in patients with limited PC, cure can be attained with high likelihood. If the extent of PC cannot be controlled through complete cytoreduction, CS and HIPEC may still prove beneficial; however, their role in the current era of modern systemic chemotherapy may require further investigation. As part of the efforts to identify patients with PC who are suitable candidates for CS/HIPEC, Pelz et al. proposed and validated a scoring system (Peritoneal Surface Disease Severity Score) that stages patients with PC taking into consideration the clinicopathological markers that predict treatment outcome [ 13 ]. In an analysis of patients who underwent a complete cytoreduction, patients who were staged as PSDSS Stage I and Stage II were shown to have a 3-year overall survival of 60% to 80%. Although the study was limited by the follow-up time, the early results were promising and the long-term outlook depicted in the Kaplan-Meier curve showed a trend towards long-term survival [ 16 ]. In the present study, we used a retrospective PSDSS, because the PCI described by Sugarbaker was first published in 1995 and its retrospective evaluation is very difficult. For this reason, we used the terms limited, moderate, and extensive to describe the tumor burden, analogous to a PCI of < 10, 10-20, and > 20. The findings of the current study affirm the premise that peritoneal carcinomatosis is a foremost cause of disease-specific mortality in patients with metastatic colorectal cancer. Patients with isolated PC, PC with liver/lung metastasis, or PC with brain/bone metastasis predictably experienced early demise (p = 0.15), with an overall median survival of 5.0 months.
The poor survival results reflect a subgroup of patients observed routinely in clinical practice for whom treatment options are limited. The biologically aggressive nature of PC impairs the functional status of patients to an extent that makes them eligible only for palliative, best supportive care. It also remains unfortunate that, although modern systemic chemotherapy has improved survival in patients with metastatic colorectal cancer, the analysis in our study did not show a difference in outcomes between treatment with 5FU/L and modern chemotherapy in patients with PC (Figure 2 ). However, the authors acknowledge that the number of patients receiving modern systemic chemotherapy, especially in combination with biological agents, in the current study is small, and further studies involving a larger cohort of patients are required to elucidate the true treatment effects. By demonstrating a stage-wise difference in survival stratified according to the PSDSS, it appears that this staging system is of clinically meaningful prognostic utility in patients with peritoneal carcinomatosis. It is important to emphasize the marked contrast in survival outcomes between patients with PSDSS stage I/II and stage III/IV PC. Further, in patients with isolated PC who are PSDSS stage I/II, the median survival was 21 months. This survival result is comparable to current survival data from randomized trials of metastatic colorectal cancer that encompass the use of modern systemic chemotherapy in combination with biological agents [ 17 - 19 ]. Given the favourable prognosis of this group of patients, it is likely that patients with no symptomatology, low-volume peritoneal disease, and favourable tumor biology may derive the maximal benefit from the effective CS/HIPEC treatment strategy.
Conclusions In conclusion, our data demonstrate that peritoneal carcinomatosis remains a fatal condition in patients with metastatic colorectal cancer and appears to be the dominant determinant of outcome. Treatment with systemic chemotherapy, especially modern agents, is likely to be beneficial in patients with PC of colorectal origin. Based on current evidence, optimal treatment results may be attained through careful selection of patients with a "favourable prognosis" for multi-modality therapy, in whom the benefits of treatment outweigh the associated risks: for example, patients with PSDSS stage I/II undergoing radical surgical cytoreduction in combination with hyperthermic intraperitoneal chemotherapy in an effort to obtain potentially curative disease clearance and extend overall survival.
Background We evaluated the long-term survival of patients with peritoneal carcinomatosis (PC) treated with systemic chemotherapy regimens, and the impact of the retrospective peritoneal disease severity score (PSDSS) on outcomes. Methods One hundred sixty-seven consecutive patients with PC from colorectal cancer treated between 1987 and 2006 were identified from a prospective institutional database. These patients received either no chemotherapy, 5-FU/Leucovorin, or Oxaliplatin/Irinotecan-based chemotherapy. Stratification was made according to the retrospective PSDSS, which classifies PC patients based on clinically relevant factors. Survival analysis was performed using the Kaplan-Meier method and comparison with the log-rank test. Results Median survival was 5 months (95% CI, 3-7 months) for patients who had no chemotherapy, 11 months (95% CI, 6-15 months) for patients treated with 5-FU/LV, and 12 months (95% CI, 4-20 months) for patients treated with Oxaliplatin/Irinotecan-based chemotherapy. Survival differed between patients treated with chemotherapy compared to those who did not receive chemotherapy (p = 0.026). PSDSS staging was identified as an independent predictor of survival on multivariate analysis [RR 2.8 (95%CI 1.5-5.4); p < 0.001]. Conclusion A trend towards improved outcomes is demonstrated from treatment of patients with PC from colorectal cancer using modern systemic chemotherapy. The PSDSS appears to be a useful tool in patient selection and prognostication in PC of colorectal origin.
Competing interests Dr. Terence C. Chua is a surgical oncology research scholar funded by the St George Medical Research Foundation. The other authors indicated no potential conflicts of interest. Authors' contributions JOWP, TCC, JE, AS, DLM and AGK have developed the study concept. UM was responsible for statistical considerations. JOWP, JD, CTG and AGK followed the patients and collected the data. JOWP, TCC and AGK drafted the manuscript. All authors contributed to and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/10/689/prepub
CC BY
no
2022-01-12 15:21:37
BMC Cancer. 2010 Dec 22; 10:689
oa_package/56/f5/PMC3014907.tar.gz
PMC3014908
21176209
Background Computer-aided systems for data mining, for example by multivariate analysis, are now readily available and have shown promising results when applied to metabolic profiling for diagnostic purposes [ 1 , 2 ]. Several applications of metabolome analysis based on machine learning for human cancer diagnosis using peripheral blood or urine have been demonstrated [ 3 - 10 ]. Among metabolites, the amino-acid balance in patients with various diseases often differs from that maintained in healthy individuals, as a result of metabolic changes. Amino acids are considered to be central compounds within metabolic networks, and the blood serves as the medium linking the metabolic processes in the different organs of the human body. Human amino-acid metabolism in the blood has been monitored clinically for >30 years. Fischer's ratio, which is defined as the balance between branched-chain amino acids (BCAAs) and aromatic amino acids, has been used as an indicator of both the progression of liver fibrosis and the effectiveness of drug treatment [ 11 ]. Specific abnormalities in amino-acid concentrations, as assessed using multivariate analysis, have also been reported in animal models of diabetes, in human liver fibrosis, and in other pathologies [ 12 - 14 ]. The metabolism in cancer cells is known to be significantly altered compared with that in normal cells, and these changes are also reflected in the plasma amino-acid profiles of patients with various types of cancer. For example, a significant reduction in gluconeogenic amino acids (GAAs) and a significant increase in free tryptophan have been reported in lung cancer patients [ 15 ]. Kubota et al. used plasma amino-acid profiles to discriminate between patients with breast cancer, gastrointestinal cancer, and head and neck cancers, and healthy controls [ 16 ]. Therefore, detecting metabolic changes from amino-acid profiles could potentially be useful in cancer diagnosis.
Post-genomic technologies also offer possibilities for exploiting amino-acid profiling. Recently, novel methods for analyzing amino acids have been established using high-performance liquid chromatography (HPLC)-electrospray ionization (ESI)-mass spectrometry (MS) [ 17 - 19 ]. These will help to make amino-acid measurements easier and reduce both the time and the cost of analysis. One potentially useful metabolomics tool is therefore the "AminoIndex", which could be a simple and versatile method for monitoring various pathological conditions [ 12 ]. Here we investigated the potential of the "AminoIndex" as a novel diagnostic method for the screening of non-small-cell lung cancer (NSCLC).
Methods All of the patients in the study had been diagnosed histologically with NSCLC at the Osaka Medical Centre for Cancer and Cardiovascular Diseases, Japan, between January 2006 and October 2008, and their informed consent for inclusion was obtained while they were hospitalized. Data from the first 141 patients, enrolled between January 2006 and September 2007, were used as the study data set. A further 4,340 subjects without apparent cancers, who were undergoing comprehensive medical examinations at the Mitsui Memorial Hospital, Japan, in 2008, were recruited as control subjects. Of these, 423 were age-matched, gender-matched, and smoking status-matched with the patients in the study data set group. Data from the remaining patients and control subjects were used as the test data set. Data from an additional 15 SCLC patients, who were hospitalized at the Osaka Medical Centre for Cancer and Cardiovascular Diseases, Japan, between January 2006 and October 2008, were also used. Blood samples were collected from the controls and the NSCLC patients before any medical treatment. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the ethics committees of the Osaka Medical Centre for Cancer and Cardiovascular Diseases and Mitsui Memorial Hospital. All subjects gave their informed consent for inclusion before they participated in the study. Analytical methods Blood samples (5 ml) were collected from forearm veins, after overnight fasting, in tubes containing ethylenediaminetetraacetic acid (EDTA; Terumo, Tokyo, Japan), and were immediately placed on ice. Plasma was prepared by centrifugation at 3,000 rpm and 4°C for 15 min, and then stored at -80°C until analysis. After plasma collection, all samples were stored and processed at the Life Science Institute of Ajinomoto Co., Inc. (Kawasaki, Japan). To reduce any bias introduced prior to analysis, samples were analyzed in random order.
The plasma samples were deproteinized using acetonitrile at a final concentration of 80% before measurement. The amino-acid concentrations in the plasma were measured by HPLC-ESI-MS with precolumn derivatization [ 17 - 19 ]. The analytical methods were described in detail previously [ 17 ]. The concentrations of amino acids in the plasma were expressed as μM. Statistical analysis of plasma amino-acid profile The mean amino-acid concentrations ± standard deviations (SDs) were calculated. Differences between the plasma amino-acid concentrations in NSCLC patients and controls were assessed using the Mann-Whitney U-test and the receiver-operator characteristic (ROC) curve. The area under the curve (AUC) for each ROC curve (the ROC_AUC) was calculated for each amino acid. Principal component analysis (PCA) was also used to assess differences in the plasma amino-acid profile between the controls and the NSCLC patients, with linear combinations of all of the amino acids included as explanatory variables. For the PCA, the plasma amino-acid concentrations were autoscaled, z(i,j) = (x(i,j) - m(j)) / s(j), where z(i,j) is the transformed concentration of the i-th sample of the j-th amino acid, x(i,j) is the raw concentration of the i-th sample of the j-th amino acid, and m(j) and s(j) are, respectively, the mean and standard deviation of the j-th amino acid over all n samples. Machine learning and validation First, an unconditional multiple logistic regression analysis with variable selection was used to construct a criterion for distinguishing NSCLC patients from controls using the study data set, with the raw plasma concentrations of 21 amino acids as explanatory variables. The variables of the most appropriate logistic regression model, which had the minimum Akaike's information criterion (AIC) value, were selected from among all of the possible combinations in which the number of variables was below seven.
A leave-one-out cross-validation (LOOCV) was performed in parallel for all models to correct for potential over-optimization. Briefly, one sample was omitted from the study data set, and the logistic regression model was calculated for the remaining samples to estimate coefficients for each amino acid. The logistic regression function value for the left-out sample was then calculated based on the model. This process was repeated until every sample in the study data set had been left out once, and the function values generated were then used for the AIC calculation. Finally, because a case-control design was utilized for our study, a conditional logistic regression analysis, conditioned on the matching factors (i.e., gender, age, and smoking status), was performed in order to evaluate the association between the combination of amino acids obtained above and NSCLC. The discriminant score, defined as the logit of the conditional logistic regression function value, was used as the criterion. The degree to which this score discriminated between NSCLC patients and controls was evaluated through the ROC curve. A distinct test data set, which had not been used in the model generation, was also used to confirm the stability of the obtained model and to calculate the ROC_AUC values for the discriminant scores. Subgroup analysis To assess the effects of cancer stage and histological type on the discriminant scores of NSCLC patients, both the study data set and the test data set were stratified according to these parameters and a subgroup analysis was performed using the ROC curve in each data set. A two-sided P value of less than 0.05 was considered to indicate statistical significance. Software All statistical analyses were performed using MATLAB (The Mathworks, Natick, MA), LogXact (Cytel, Cambridge, MA), and GraphPad Prism (GraphPad Software, La Jolla, CA).
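The LOOCV-plus-ROC workflow described above can be sketched generically: refit with each sample held out, score that sample out-of-fold, then compute the ROC area from the pooled scores via its Mann-Whitney interpretation. The stand-in classifier below (a difference-of-class-means linear score) is a hypothetical placeholder for the logistic model, which is not reproduced here:

```python
import numpy as np

def loocv_scores(X, y, fit, score):
    """Leave-one-out: refit the model with each sample held out,
    then score the held-out sample with that refitted model."""
    out = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        out[i] = score(model, X[i])
    return out

def roc_auc(scores, y):
    """ROC area as the Mann-Whitney probability that a random case
    outscores a random control (ties count half)."""
    pos, neg = scores[y == 1], scores[y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical stand-in classifier: project onto the difference of class means.
fit = lambda X, y: X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
score = lambda w, x: float(x @ w)
```

Scoring each sample only with a model that never saw it is what removes the over-optimism the authors refer to.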
Results Characteristics of patients and control subjects The study data set comprised 141 patients with NSCLC, and 423 age-matched, gender-matched, and smoking status-matched control subjects, whereas there were 162 patients and 3,917 controls in the test data set; a further 15 SCLC patients were also included (Table 1 ). Among the patients, 28% and 36% were non-smokers in the study and test data sets, respectively, whereas almost 50% of the control subjects were non-smokers (Table 1 ). There were no significant differences in body mass index (BMI) between the patients and the control subjects (Table 1 ). In both the study and test data sets, ~50% of the patients were categorized as having stage I disease, ~5% as stage II, ~25% as stage III and ~20% as stage IV (Table 1 ). The Eastern Cooperative Oncology Group performance status (ECOG) score of most patients was 0 or 1; hence, the majority of the patients were asymptomatic or symptomatic but completely ambulatory (Table 1 ). The histological type was adenocarcinoma in almost 75% of the patients and squamous cell carcinoma in almost 25%; the other types present included large-cell carcinoma, adenosquamous carcinoma, pleomorphic carcinoma and mucoepidermoid carcinoma (Table 1 ). Changes in amino-acid concentrations in NSCLC patients In the study data set, the plasma concentration of His was significantly lower, and those of Ser, Pro, Gly, Ala, Met, Ile, Leu, Tyr, Phe, Orn, and Lys were significantly higher, in NSCLC patients than in controls (Table 2 ). Amino acids in the human body undergo interdependent regulation; comparing single amino-acid concentrations between controls and patients might thus be insufficient to elucidate any changes in plasma amino-acid profiles associated with cancer development. Changes in the balance of the plasma amino acids in the study data set were therefore investigated using principal component analysis (PCA).
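The PCA on autoscaled concentrations described in the Methods can be sketched with NumPy; `X` is assumed to be a samples × amino-acids matrix, and the eigenvalue > 1 rule mirrors the component selection applied in this study:

```python
import numpy as np

def pca_autoscaled(X):
    """PCA on z-scored data: eigendecompose the correlation matrix and
    return eigenvalues/eigenvectors sorted by decreasing explained variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # autoscale each column
    evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order], Z

# Sketch of component selection and projection:
#   evals, loadings, Z = pca_autoscaled(X)
#   keep   = evals > 1.0            # retain PCs with eigenvalue > 1, as here
#   scores = Z @ loadings[:, keep]  # per-sample PC scores for group comparison
```

Because the input is autoscaled, the eigenvalues sum to the number of variables, so "eigenvalue > 1" keeps components that explain more variance than a single amino acid would on its own.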
Five PCs with eigenvalues >1 were identified (Table 3 ). To evaluate their performance, the Mann-Whitney U -test was used to compare each PC score between the controls and NSCLC patients. Three of the PCs showed significant p values (< 0.001): PC1, PC3, and PC5 (Table 3 ). The contributing amino acids for these PCs (those with a variance of >0.05) were then extracted; the results identified Ala, Val, Met, Ile, Leu, Tyr, Phe, Trp, and Lys as contributing factors for PC1; Cit, His, Trp, Orn, and Arg as contributing factors for PC3; and Ser, Gly, Cit, His, and Arg as contributing factors for PC5 (Table 3 ). As a result, fifteen amino acids (Ser, Gly, Ala, Cit, Val, Met, Ile, Leu, Tyr, Phe, His, Trp, Orn, Lys, and Arg) were identified as those whose plasma profiles were associated with NSCLC (Table 3 ). Classifier for discriminating NSCLC patients The results described so far suggested that it should be possible to improve the discrimination between cancer patients and normal controls by deriving multivariate functions, using the raw plasma amino-acid concentrations as explanatory variables, that would summarize the changes in metabolic status. Multiple logistic regression analyses by unconditional and conditional likelihood methods were therefore performed with variable selection and LOOCV, using the study data set (as described in the Methods). The resulting conditional logistic regression model included six amino acids: Ala ( p = 0.007), Val ( p < 0.001), Ile ( p < 0.001), His ( p = 0.035), Trp ( p = 0.027) and Orn ( p < 0.001). The area under the curve (AUC) of the ROC for the discriminant score was 0.817 in the study data set (Figure 1 ). Furthermore, to verify the robustness of the resulting model, a ROC curve was generated using the split test data set, which had not been used to construct the model. The ROC_AUC for the discriminant score was 0.812 in the test data set (Figure 1 ), again demonstrating that the obtained model performed well.
Subgroup analysis of the discriminant scores From the point of view of cancer screening, it matters whether the obtained model provides sufficient discriminating power to detect patients with early-stage cancer effectively and across all histological types. Thus, to investigate the consistency of the results based on the discriminant scores among different subpopulations defined by cancer stage and histological type, a subgroup analysis was performed using both the study data set and the test data set. The discriminant scores of the SCLC patients were also calculated to verify whether the obtained model could discriminate them from the controls. Interestingly, the model could discriminate lung cancer patients regardless of cancer stage or histological type. Using the discriminant scores, the ROC_AUCs were 0.796 (study data set) and 0.817 (test data set) for stage I patients, 0.906 (study data set) and 0.801 (test data set) for stage II patients, 0.823 (study data set) and 0.843 (test data set) for stage III patients, and 0.836 (study data set) and 0.713 (test data set) for stage IV patients (Figure 2A, B ). The model would thus be expected to be effective in detecting early, as well as advanced, cancers. We also demonstrated that the model could detect both adenocarcinomas and other histological types of cancer equally well: the ROC_AUCs were 0.795 (study data set) and 0.796 (test data set) for adenocarcinoma, and 0.860 (study data set) and 0.892 (test data set) for squamous cell carcinoma (Figure 2C, D ). Furthermore, the distribution of the discriminant scores for SCLC patients was similar to that for NSCLC patients, with a ROC_AUC of 0.877 (Figure 2D ).
Discussion Lung cancer has been the leading cause of cancer death in Japan since 1998, and >60,000 patients have died of it annually since 2005. The 5-year survival rate for patients undergoing surgery is only 61%, and an accurate screening method for lung cancer would be an important advance [ 20 ]. In Japan, chest X-rays and sputum cytology are used for lung cancer screening. Although chest X-rays are useful for detecting peripheral lung cancer, two-thirds of patients diagnosed in this way have associated metastases, and this method is not sufficient to detect the early stages of the disease [ 21 ]. In addition, highly skilled staff are required to achieve sufficient accuracy. Sputum cytology might be useful for detecting upper respiratory-tract carcinoma, but this method has been reported to be inadequate for detecting peripheral lung cancer and lung cancer in asymptomatic non-smokers [ 21 ]. Recently, low-dose helical computed tomography (CT) was reported to be capable of detecting small, early lung cancers in high-risk populations; however, it is not known whether using this method would affect the mortality rate due to lung cancer, or whether it would be cost-effective [ 22 ]. In comparison with those methods, the "AminoIndex" would be easier to use, as it involves a relatively simple plasma assay, imposes a lower physical burden on patients, and does not require advanced technical skills to perform [ 12 ]. The current study demonstrated that plasma amino-acid profiles were associated with NSCLC: the ROC_AUC was 0.817 for the study data set under the conditional logistic regression analysis conditioned on the matching factors (Figure 1 ). Okamoto et al. recently reported that plasma amino-acid profiles might be used to screen for colorectal and breast cancer [ 23 ].
Despite the smaller sample size, they reported ROC_AUCs of 0.860 (study data) and 0.910 (test data) for colorectal cancer patients, and 0.906 (study data) and 0.865 (test data) for breast cancer patients [ 23 ]. Our current study achieved similar discrimination power using a data set with a larger sample size while controlling for potential confounders, thereby demonstrating the robustness of the model. Many reports have shown that metabolism, including that of amino acids, is notably altered in cancer cells [ 4 , 24 - 26 ], and that plasma amino-acid profiles are also changed [ 15 , 16 , 27 - 30 ]. Cascino et al. described significant increases in levels of Trp, Glu and Orn in lung cancer patients [ 15 ]. Proenza and colleagues also reported an increased level of Orn in patients with lung cancer [ 29 ]. Naini et al. reported reduced levels of plasma Arg in lung cancer patients [ 31 ]. Changes in the amino-acid balance and an increase in gluconeogenesis have been well documented, especially in cachexic patients with advanced cancer [ 32 , 33 ]. In the current study, the obtained model identified patients at all stages of lung cancer, including those without cachexia, equally well, suggesting that the method did not rely on detecting the metabolic abnormalities associated with malnutrition that might be present in advanced cancer patients (Figure 2A, B ). Hirayama et al. reported no significant correlation between the levels of metabolites, including several amino acids, and the patients' tumour stage [ 24 ]. It has also been reported that amino acids are among the blood metabolites most frequently identified in relation to cancer [ 3 , 8 ]. Because the metabolism of specific amino acids is known to be associated with specific organs, such as muscle, liver or kidney, changes in the levels of amino acids are affected by their metabolism in, and excretion from, multiple organs of the body.
Although it remains unclear how the metabolic changes occurring in tumour cells affect the systemic plasma amino-acid profile, these results show that the metabolic changes caused by cancer development are at least partially responsible for the changes in plasma amino-acid profile seen even in lung cancer patients with early-stage disease. Profiling the plasma free amino acids is thus akin to monitoring metabolic networks in multiple organs, and it might better allow us to detect particular conditions in specific organs. Since this study was designed as a case-control study, the obtained model cannot be directly applied to further observation or prediction, even though the robustness of the model was preliminarily demonstrated. Therefore, model construction and validation using a larger cohort will be necessary to clarify its utility. Nonetheless, we believe that this screening technique could be a straightforward diagnostic method for the management of lung cancer.
Conclusions The current study demonstrated that the plasma amino-acid profile of NSCLC patients differed from that of healthy subjects, and that the multivariate classifier might be effective for discriminating lung cancer patients from controls. Although further prospective validation will be necessary, this method might be an effective and convenient screening tool for lung cancer.
Background The amino-acid balance in cancer patients often differs from that in healthy individuals because of metabolic changes. This study investigated the use of plasma amino-acid profiles as a novel marker for screening non-small-cell lung cancer (NSCLC) patients. Methods The amino-acid concentrations in venous blood samples from pre-treatment NSCLC patients ( n = 141) and age-matched, gender-matched, and smoking status-matched controls ( n = 423) were measured using liquid chromatography and mass spectrometry. The resultant study data set was subjected to multiple logistic regression analysis to identify amino acids associated with NSCLC and to construct criteria for discriminating NSCLC patients from controls. A test data set derived from 162 patients and 3,917 controls was used to validate the stability of the constructed criteria. Results The plasma amino-acid profiles differed significantly between the NSCLC patients and the controls. The obtained model (including alanine, valine, isoleucine, histidine, tryptophan and ornithine concentrations) performed well, with an area under the receiver-operating characteristic curve (ROC_AUC) of >0.8, and allowed NSCLC patients and controls to be discriminated regardless of disease stage or histological type. Conclusions This study shows that plasma amino-acid profiling is a potential screening tool for NSCLC.
Competing interests We declare that we are participants in the "AminoIndex" research consortium organized by Ajinomoto, and that we have all seen and approved the final version of this manuscript. Akira Imaizumi and Hiroshi Yamamoto are employees of Ajinomoto. Masahiko Higashiyama, Fumio Imamura and Akira Imaizumi have applied for patents for plasma amino-acid profiling using multivariate analysis as a diagnostic procedure. Authors' contributions AI and HY designed this case control study. JM, MH, TN, MY, FI and KK coordinated the study and collected the background data on the subjects. HY also coordinated the study, and supervised the collection of control data. JM, TD, and AI provided data analysis and wrote the manuscript. JM, MH, AI, TN, HY, TD, MY, FI and KK provided final reviews and approval of the manuscript. All authors read and approved the final paper. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/10/690/prepub
Acknowledgements We thank Dr. Jiro Okami, Dr. Kazuyuki Oda, Dr. Kiyonobu Ueno, Dr. Kazumi Nishino, Dr. Yuuki Akazawa and Dr. Junji Uchida for collecting the blood samples and background data on the patients. We thank Dr. Hiroshi Miyano, Mr. Kazutaka Shimbo, Mr. Hiroo Yoshida, Ms. Michiko Amao and Ms. Mina Nakamura for amino-acid analyses. We thank Dr. Katsuhisa Horimoto, and Dr. Mitsuo Takahashi for help with the statistical analysis. We also thank Ms. Tomoko Kasakura for help with data collection.
CC BY
no
2022-01-12 15:21:37
BMC Cancer. 2010 Dec 22; 10:690
oa_package/4d/22/PMC3014908.tar.gz
PMC3014909
21143999
Introduction The geographical distribution of malaria so far described in sub-Saharan Africa is diverse, ranging from savannah malaria to forest, highland, urban and hydro-agricultural malaria [ 1 ]. Currently, the need to investigate urban malaria has become urgent due to the resurgence of the disease and the agro-economic interest of populations in developing subsistence activities in urban and suburban areas of major cities in sub-Saharan Africa [ 2 - 6 ]. Recent studies by Robert et al . [ 3 ] and Warren et al . [ 7 ] showed that urbanization decreases malaria prevalence as a result of a drastic reduction in Anopheles breeding sites, better access to treatment and improved (mosquito-proof) housing (overview in [ 8 ]). Urbanization has been reported to reduce malaria transmission approximately twofold [ 9 ]. Furthermore, agricultural activities play an important role in malaria transmission in both urban and peri-urban zones [ 6 ]. Indeed, poverty, food insecurity and malnutrition have become urban issues in sub-Saharan Africa. Meeting these challenges in the cities of sub-Saharan Africa is critical and represents a serious public health issue [ 10 ]. Agro-economic practices involving vegetable farming are now common in many urban areas, and they provide suitable breeding sites for mosquitoes, with a potentially higher epidemiological risk of malaria in urban than in rural areas. The general trend is that previously unused spaces (marshland, road edges, beaches, etc.) are increasingly transformed into vegetable farms comprising different kinds of crops. In Cotonou, the economic capital city of Benin, peri-urban agriculture consists of belts of vegetable farming surrounding the city. The advantages of urban agriculture are considerable: it contributes to improving the living conditions of citizens by supplying food, income and employment [ 11 ].
However, the economic and social value of urban and peri-urban agriculture is hindered by a number of factors, including the proliferation of mosquito breeding sites. Many studies have reported the relationship between malaria and rice production, but little is known about the link between malaria transmission and urban agriculture. A recent study conducted by Robert et al . [ 8 ] has shown that vegetable farming in urban areas of Dakar (Senegal) might not provide suitable breeding sites for larval development. However, Matthys et al . [ 12 ] found that peri-urban agriculture created more breeding sites for Anopheles and therefore increased malaria risk in the city. An entomological study conducted in Ghana showed higher Anopheles biting rates in urban areas with agriculture compared with urban areas without such practice [ 13 ]. Another study, in Kenya, found no association between urban farming at the household level and vector breeding sites [ 14 ]. In Benin, few data are available on the association between malaria transmission and urban vegetable farming. The present study was conducted in three major sites of urban farming in Benin with the aim of investigating the entomological aspects of malaria transmission in relation to seasonal variations of vector populations in these areas. Specifically, the study aimed to determine (a) the distribution of Anopheles mosquito species throughout the year at these sites, (b) their human biting pattern, (c) the infectivity rates of malaria vectors and (d) the entomological inoculation rates and malaria transmission in the three study areas.
Methods Study area The study was conducted in Benin, from January 2009 to December 2009, in three vegetable farms (Figure 1 ): (i) Houeyiho in Cotonou (the economic capital city of Benin). The vegetable farm is located at 6°45'N and 2°31'E in a densely populated zone. The farm is 14 hectares in size and is shared between five local cooperatives of approximately 2,000 farmers. (ii) Acron in Porto-Novo (the administrative capital city of Benin). The vegetable farm is at 6°30'N and 2°47'E, on the outskirts of Porto-Novo, and has the longest history of vegetable farming in the region. Initially it consisted of three hectares but has recently expanded to 20 hectares. The number of farmers has also increased, to about 150 individuals. (iii) Azèrèkè, Parakou. This farm is located at 9°22'N and 2°40'E in the vicinity of Parakou city and is known as the Azèrèkè site. The size of this vegetable plantation is 10 hectares. Agricultural practices in these farms create numerous trenches that retain rain and water from irrigation systems. These stagnant waters provide suitable breeding sites for mosquitoes, particularly Anopheles gambiae , the main malaria vector in the areas. Field mosquito collection Indoor collections of adult mosquitoes were carried out monthly from January to December 2009. Collections were organized in Households Close to Vegetable Farms (HCVF) and in others Far from the Vegetable Farms (HFVF), where there is no agricultural practice. Adult mosquitoes were collected using two sampling methods: (1) indoor and outdoor Human Landing Catches (HLC), performed monthly over two consecutive nights (8:00 PM to 6:00 AM) in 4 randomly selected compounds; (2) indoor Pyrethrum Spray Catches (PSC) in 4 other selected compounds; the same compounds were used consistently for each sampling method throughout the study. Collectors gave prior informed consent and received anti-malaria prophylaxis and yellow fever immunization.
They were organized in teams of two for each collection point and rotated between locations within houses every two hours. Mosquitoes from HLC were used to evaluate the sporozoite infection rate of each molecular form. Knocked-down mosquitoes falling on white bed sheets were preserved for identification at the molecular level using PCR analysis of their resistance status, as described by Martinez-Torres et al . [ 15 ]. The PSC were carried out monthly from January to December and used to establish the temporal dynamics of mosquito density and the molecular forms of An. gambiae . All mosquitoes were kept separately in labelled tubes containing silica gel and frozen at -20°C for further laboratory analysis. Laboratory processing of mosquitoes All female mosquitoes belonging to the An. gambiae complex were identified based on morphological characters using standard identification keys [ 16 ]. The head-thoraces of these females from the human landing catches were tested for the presence of the CircumSporozoite Protein (CSP) of P. falciparum using an enzyme-linked immunosorbent assay (ELISA), as per Wirtz et al . [ 17 ]. Identification of species and characterization of molecular forms within the An. gambiae complex were performed using PCR-RFLP [ 18 ]. Entomological parameters The entomological indicators of malaria parasite transmission intensity at the sites were: (1) the human biting rate (HBR), which is the number of mosquitoes biting a person during a given time period (bites/p/t) (the time being a night, month or year); (2) the CSP rate, which is the proportion of mosquitoes found with Plasmodium falciparum CSP over the total number of mosquitoes tested; (3) the Entomological Inoculation Rate (EIR), expressed as the number of infective anopheline bites per person per unit of time (bi/p/t) and calculated as the product of the HBR and the CSP rate.
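Numerically, the three indicators combine as follows. This is a minimal sketch with illustrative counts, not data from the study.

```python
# Minimal sketch of the entomological indicators defined above.
# HBR = bites per person per unit time; CSP rate = CSP-positive / tested;
# EIR = HBR x CSP rate. The counts below are illustrative assumptions.

def human_biting_rate(bites, persons, nights):
    """HBR in bites per person per night."""
    return bites / (persons * nights)

def csp_rate(csp_positive, tested):
    """Proportion of tested mosquitoes carrying P. falciparum CSP."""
    return csp_positive / tested

hbr = human_biting_rate(bites=876, persons=4, nights=20)   # 10.95 bites/p/n
csp = csp_rate(csp_positive=16, tested=1000)               # 0.016, i.e. 1.6%
eir = hbr * csp                                            # infective bites/p/n
print(round(eir, 4))
```

Scaling the per-night EIR by the number of nights in the period of interest gives the monthly or annual EIR reported in the tables.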
Data analysis An analysis of variance (ANOVA) was performed to compare the entomological estimates (HBRs, EIRs) between the seasons at the sites in northern and southern Benin. The resistance allele frequencies at the kdr and Ace- 1 loci were calculated using Genepop software (version 3.3), as described by Raymond and Rousset [ 19 ]. Ethical considerations Ethical approval for this study was granted by the Ethical Committee of the Ministry of Health in Benin. Verbal consent was sought from the head of each household for the spray catches, and the consent of collectors was obtained prior to HLC. In case of refusal, permission was sought from the next household.
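For a biallelic locus such as kdr, the allele-frequency computation delegated to Genepop reduces to counting alleles over genotypes. A minimal sketch with made-up genotype counts:

```python
def allele_frequency(n_rr, n_rs, n_ss):
    """Frequency of the resistance allele R at a biallelic locus,
    from counts of RR homozygotes, RS heterozygotes and SS homozygotes."""
    n = n_rr + n_rs + n_ss
    return (2 * n_rr + n_rs) / (2 * n)

# Illustrative counts only (the study used Genepop v3.3 for this step):
print(allele_frequency(n_rr=25, n_rs=4, n_ss=1))   # 0.9
```

Each homozygote contributes two copies of the resistance allele and each heterozygote one, out of 2N alleles sampled in total.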
Results Mosquito fauna composition A total of 71,678 mosquitoes were collected from HLC and 29,295 from PSC at the three study sites (Table 1 ). The majority were Culex spp (74%). Of the remaining 26%, 97% were Anopheles gambiae s.l. and 3% were An. pharoensis, Anopheles ziemanni and An. funestus altogether. Dynamics of the molecular forms PCR identification of the An. gambiae s.l. specimens collected revealed the presence of two members of the complex at Azèrèkè: An. gambiae s.s. and An. arabiensis (Table 2 ). At Houeyiho and Acron, the entire mosquito population was An. gambiae s.s of the M molecular form throughout the year. At Azèrèkè, An. gambiae s.s . predominated (85%). Although both An. gambiae s.s. and An. arabiensis were identified at Azèrèkè, the PCR analysis revealed two molecular forms in An. gambiae s.s. only, with proportions of 65% for the M form and 35% for the S form. Seasonal abundance and biting rates The annual Human Biting Rate (HBR) was estimated from the Human Landing Catches (HLC). The highest biting rates of An. gambiae s.l . during the rainy seasons were recorded in July (80 bites/p/n) in southern Benin and September (100 bites/p/n) in the northern part of the country (Figure 2 ). The results from this study showed that the average HBR of An. gambiae s.l . during the dry season in HCVF was 10.95 bites/p/n at Houeyiho, 7.84 bites/p/n at Acron and 3.04 bites/p/n at Azèrèkè. These biting rates in HCVF were significantly higher than in HFVF at Houeyiho (2.70 bites/p/n), Acron (2.85 bites/p/n) and Azèrèkè (1.41 bites/p/n) (P < 0.05). The human population living in HCVF received about one to five times more bites of An. gambiae s.l . than those living in HFVF. However, there was no significant difference between the HBRs during the rainy season in HCVF and HFVF at Houeyiho, Acron and Azèrèkè (P > 0.05 for the three sites).
The average annual HBR from January through December was significantly higher in HCVF than in HFVF at the three sites (P < 0.05) (Table 3 ). Sporozoite rate and EIR A low percentage of the Anopheles caught by HLC (1.6%) was circumsporozoite protein positive at the three study sites. The main malaria parasite was P. falciparum , transmitted by Anopheles gambiae s.l. at the southern sites surveyed. Transmission at these sites was high during the two rainy seasons (April to July and October to November) and low during the two dry seasons (December to March and August to September) (Figure 2 ). In the north of the country, transmission occurred during the rainy season (June to October), with at least two members of An. gambiae s.l ., An. arabiensis (15%) and An. gambiae s.s . (85%), transmitting P. falciparum . The EIRs in the dry season were significantly higher in HCVF at Houeyiho (0.28 bi/p/n), Acron (0.23 bi/p/n) and Azèrèkè (0.15 bi/p/n) than in HFVF at the same sites: Houeyiho (0.041 bi/p/n), Acron (0.01 bi/p/n) and Azèrèkè (0.031 bi/p/n) (P < 0.05). The trend in average annual EIRs at the three sites was similar to that observed in the dry season, with significantly higher EIRs in HCVF than in HFVF at the three sites (P < 0.05) (Table 4 ). However, during the rainy season, there was no significant difference between the EIRs in HCVF and HFVF (P > 0.05 for the three sites). Distribution of the kdr and ace-1R mutations An average of 30 mosquitoes from the indoor resting fauna were analysed every month for the Leu-Phe kdr and ace-1 R mutations (Table 5 ). In southern Benin, the kdr mutation was found in all M-form An. gambiae , with a frequency of 0.89 at Acron and 0.91 at Houeyiho. At Azèrèkè, in the north, kdr occurred in both the M and S forms but at a higher frequency in the S form (75%) than in the M form (25%) ( P < 0.05). The ace-1 R mutation was also found at all sites surveyed, but at a very low frequency.
It was 0.02 at Houeyiho, 0.01 at Acron and 0.04 at Azèrèkè (Table 5 ).
Discussion The findings of the present study provide clear evidence of the dynamics of malaria transmission in urban and sub-urban areas of Benin where vegetable farming activities have grown extensively. The abundance and fluctuations in the larval and adult densities of the samples collected were inherent to the environmental and ecological characteristics of each site assessed [ 20 ]. Indeed, the diversity of anopheline species found in this study in both the southern and northern parts of Benin showed that, apart from Anopheles gambiae s.s., the major vector of malaria parasites in West Africa, An. arabiensis should also be considered a secondary transmitter in northern Benin, though to a lesser extent. Yadouleton et al . [ 10 ] reported the presence of An. arabiensis in the northern part of Benin at low frequency. Furthermore, within the An. gambiae complex, the S form was only found at the Azèrèkè site in the north (Sudano-guinean ecotype), while the M form was identified at Houeyiho and Acron in the coastal area of Cotonou (Guinean ecotype) as well as in the northern sub-urban areas. Corbel et al . [ 21 ] in Benin and Awolola et al . [ 22 ] in Nigeria reported that the geographical distribution correlates with ecological and climatic factors, as the M form is better adapted to dry environments and breeds along irrigated fields, whereas the S form is commonly found in humid forest areas and temporary pools. The predominance of An. gambiae s.l. in the study area is consistent with its distribution throughout Africa. The presence of higher biting rates (Figure 2 ) and of sporozoite-infective Anopheles (Table 4 ) in households close to vegetable farms than in those far from the farms shows that malaria parasite transmission was permanent during the year and was reinforced by the presence of breeding sites in urban vegetable farming.
In fact, the results from this study showed that at the three study sites, people living near the vegetable farms received one to five times more bites during the dry season than those living farther away, whereas during the rainy season there was no significant difference between HCVF and HFVF. The increased HBRs and EIRs recorded during the dry seasons in houses close to the vegetable farms, compared with those far from the farms, could be explained by the presence of permanent pools and puddles maintained by the watering of vegetable crops. These findings showed that communities living close to vegetable farms are permanently exposed to malaria throughout the year, whereas the risk for those living far from such agricultural practices is limited and only critical during the rainy seasons. The main mechanism conferring resistance to pyrethroids in An. gambiae in West Africa, the Leu-Phe kdr mutation, was found in mosquito samples collected at the different sites. The allelic frequency of this mutation among populations collected near or far from the vegetable farms was high (> 0.90). This could be a direct consequence of the extensive use of pesticides for cotton crop protection in southern and northern Benin [ 23 , 24 ] or of the use of the same pesticides by local farmers against vegetable pests [ 10 ]. Because the gene frequency was already very high in both situations, it appeared difficult to separate the effect of local farming practices from that of the long history of cotton production and pesticide use on resistance selection in the areas surveyed. Distinguishing the impact of the two agricultural practices will require sampling and testing of mosquitoes in areas of West Africa where the kdr frequency is still moderate.
Background Urban agricultural practices are expanding in several cities of the Republic of Benin. This study aims to assess the impact of such practices on transmission of the malaria parasite in major cities of Benin. Method A cross-sectional entomological study was carried out from January to December 2009 in two vegetable farming sites in southern Benin (Houeyiho and Acron) and one in the northern area (Azèrèkè). The study was based on sampling of mosquitoes by Human Landing Catches (HLC) in households close to the vegetable farms and in others located far from the farms. Results During the year of study, 71,678 female mosquitoes were caught by HLC, of which 25% (17,920/71,678) were Anopheles species. In the areas surveyed, the main malaria parasite, Plasmodium falciparum , was transmitted in the south by Anopheles gambiae s.s. Transmission was high during the two rainy seasons (April to July and October to November) but declined in the two dry seasons (December to March and August to September). In the north, transmission occurred from June to October, during the rainy season, and was mediated by two members of the An. gambiae complex: Anopheles gambiae s.s . (98%) and Anopheles arabiensis (2%). At Houeyiho, Acron and Azèrèkè, the Entomological Inoculation Rates (EIRs) and the Human Biting Rates (HBRs) were significantly higher during the dry season in Households Close to Vegetable Farms (HCVF) than in those located far from the vegetable areas (HFVF) (p < 0.05). However, there were no significant differences in HBRs or EIRs between HCVF and HFVF during the rainy seasons at these sites (p > 0.05). The knock-down resistance ( kdr ) mutation was the main resistance mechanism detected, at high frequency (0.86 to 0.91), in An. gambiae s.l. at all sites. The ace-1 R mutation was also found, but at a very low frequency (< 0.1).
Conclusion These findings showed that communities living close to vegetable farms are permanently exposed to malaria throughout the year, whereas the risk in those living far from such agricultural practices is limited and only critical during the rainy seasons. Measures must be taken by African governments to create awareness among farmers and ultimately decentralize farming activities from urban to rural areas where human-vector contact is limited.
Competing interests The authors declare that they have no competing interests. Authors' contributions AY carried out field experiments, collected, analysed and interpreted data, and wrote the manuscript. NR and AA contributed to the design of the study and revised the manuscript for intellectual content; BM and KD contributed to the design of the study; GP, RO and HA helped with the field activities. MA conceived and designed the study, and revised the manuscript for intellectual content. All authors read and approved the final manuscript.
Acknowledgements This work was supported by the ADDRF and WHO through its TDR/RCS re-entry grant. We are grateful to the villagers of Houeyiho, Acron and Azèrèke who assisted in the implementation of this study.
CC BY
no
2022-01-12 15:21:37
Parasit Vectors. 2010 Dec 12; 3:118
oa_package/7f/92/PMC3014909.tar.gz
PMC3014910
21134260
Background Cardiovascular diseases are among the most common causes of death in industrialized countries. Risk factors include increased age, male sex, diabetes, hypertension, and lipoprotein abnormalities. The aorta provides at least 60 to 70% of systemic compliance [ 1 ]. Reduced elasticity and compliance of the aorta are etiologic in cardiovascular diseases such as atherosclerosis and therefore serve as early indicators of asymptomatic atherosclerotic lesions [ 2 , 3 ]. The velocity of pressure and flow pulses travelling down an elastic vessel, termed the pulse wave velocity (PWV), increases with arterial stiffness [ 4 ]. The PWV is a direct measure of arterial stiffness [ 3 ] and serves as an independent predictor of cardiovascular risk and mortality [ 5 - 8 ] in many cases of CVD, including atherosclerosis [ 9 ]. In this study, a high-field cardiovascular magnetic resonance (CMR) protocol using the transit time (TT) method [ 10 ] was developed and tested for its capability to distinguish groups of healthy and atherosclerotic mice by means of the PWVs in the descending aortas. Originally, the transit time method was used to determine the PWV in humans, with several invasive methods [ 11 , 12 ] and non-invasive methods such as ultrasound [ 13 , 14 ] and CMR [ 15 ], before it was applied to smaller mammals such as mice. New CMR methods have since been refined by many groups [ 16 , 17 ], and they constitute a comprehensive technique for the characterization of morphologic and functional arterial systemic parameters in humans. More recently, the need arose to determine morphologic and functional parameters of the murine arterial system. Genetically engineered phenotypes of mice that are deficient in apolipoprotein E (ApoE (-/-) mice) spontaneously develop severe hyperlipidemia and atherosclerotic lesions of the arterial wall [ 18 ]. ApoE (-/-) mice develop all stages of lesions observed during atherogenesis that resemble lesions found in humans [ 19 ].
For this reason, and because of its short generation times, the ApoE (-/-) mouse became an important model for human atherosclerosis [ 20 ]. Physical dimensions in mice are approximately 20 times smaller than in humans. Isoflurane-anesthetized mice have high heart rates of 8 to 10 beats per second, whereas healthy humans at rest have heart rates of 0.8 to 1.7 beats per second. The resulting need for very high spatial and temporal resolution constitutes the challenge in determining the PWV. In CMR of the corresponding animal model, the achievable spatial resolution is limited by the signal-to-noise ratio (SNR). In the past, CMR systems did not provide a sufficient SNR to allow for accurate PWV measurements in mice. New high-field CMR systems provide a very high SNR and allow for examinations of morphologic and also functional parameters of the arterial system of mice [ 21 , 22 ]. As an alternative to CMR, ultrasound methods have been utilized to non-invasively measure PWVs in mice [ 23 - 25 ]. However, these ultrasound methods turned out to be highly observer dependent and in general were limited by the acoustic windows of the thorax and by angular offsets in the alignment of the ultrasound transducer and the aorta [ 26 ]. The limited spatial resolution of ultrasound is a substantial impediment for morphological studies [ 27 ]. The primary objective of this study was to test the applicability of CMR and the TT method to assess the regional PWV of the descending murine aorta. The MR measurements were performed on a CMR system with a main magnetic field of 17.6 T. The accuracy and reproducibility of the CMR method were validated on a vessel phantom made of poly(vinyl alcohol) cryogel (PVA-C). In addition, it was tested whether the precision of the CMR method is sufficient to differentiate the mean PWVs of groups of atherosclerotic ApoE (-/-) and healthy wild-type mice.
Methods The Transit Time Method During each systole the left ventricle of the heart ejects one stroke volume of blood into the aortic root, thereby generating a pulse beat. The pulse beat, which is in fact a local increase of blood pressure and flow velocity, propagates down the aorta with a velocity that is named the pulse wave velocity (PWV). The principle of measuring arterial PWV with the TT method is as follows. At least two measurement sites, I and II in Figure 1a , are selected on the aorta at a mutual distance, Δz. The transit time, Δt, which the pulse needs to travel from location I to II, is measured (see Figure 1b ). The PWV is calculated as PWV = Δz/Δt. This procedure is called the two-point TT method. The generalization of the two-point TT method is the multi-point TT method [ 28 - 30 ]. It measures the transit times of the pulse waves at multiple locations along the path of wave propagation. The PWV is the constant of proportionality between the distances and the corresponding transit times. Figures 1b and 2a reveal that flow and pressure pulse curves show a gradual rather than an instantaneous increase. However, a distinctive feature, the foot of the pulse wave, is necessary to determine the transit time. The foot of the pulse wave is defined as the intersection point of a regression line fitted to the velocity or pressure values before the pulse and another line fitted to the early increasing values. Phantom Experiments The accuracy of the MR method was tested on an elastic vessel phantom that was connected to a custom-built flow and pressure pulse generator. The phantom allowed for the determination of PWVs by CMR measurements of flow waveforms and for reference measurements of pressure waveforms utilizing a pressure catheter that was inserted into the outlet end of the vessel (Figure 3 ).
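The two-point TT calculation and the foot-of-the-wave definition described above can be sketched as follows. The waveforms are synthetic, and the threshold used to locate the upstroke window is an assumption of this sketch, not part of the published method.

```python
import numpy as np

def foot_time(t, v, baseline_pts=20, fit_pts=8):
    """Foot of the pulse wave: intersection of a regression line fitted to
    the baseline with a line fitted to the early upstroke."""
    i0 = int(np.argmax(v > 0.15 * v.max()))          # assumed upstroke detector
    b1, a1 = np.polyfit(t[:baseline_pts], v[:baseline_pts], 1)
    b2, a2 = np.polyfit(t[i0:i0 + fit_pts], v[i0:i0 + fit_pts], 1)
    return (a1 - a2) / (b2 - b1)                     # where the two lines cross

t = np.arange(0.0, 100.0)                            # ms, 1 ms resolution
wave = lambda t0: np.clip((t - t0) / 10.0, 0.0, 1.0)
v_proximal = wave(25.0)                              # flow wave at slice I
v_distal = wave(29.0)                                # same wave 4 ms later at slice II

dt = foot_time(t, v_distal) - foot_time(t, v_proximal)   # transit time, ms
dz = 16.0                                                 # slice separation, mm
pwv = dz / dt                                             # mm/ms == m/s
print(round(pwv, 3))                                      # 4.0
```

In the multi-point variant, the foot times from several slices are regressed against the slice positions, and the PWV is the slope of that regression line rather than a single Δz/Δt ratio.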
The PWVs obtained by the two different measurement methods were compared to each other in order to assess the accuracy and precision of the MR method. The phantom material had to have appropriate physical properties, e.g., mechanical strength, uniformity, long-term stability, and a modulus of elasticity resembling animal tissue. Poly(vinyl alcohol) cryogel (PVA-C), discovered independently by Peppas et al. and Nambu et al. [ 31 , 32 ], has a high breaking strength and physiologic elastic moduli ranging from 0.1 to 1.0 MPa [ 33 ] and was, therefore, used to build the phantom vessel. Aqueous poly(vinyl alcohol) solution (15% w/w) was injected into an acrylic mold and crosslinked to form the gel by repeated freezing and thawing. The elastic modulus of the gel depends, among other factors, on the number of freeze-thaw cycles, on the concentration of the aqueous solution, and essentially on the thaw rate [ 33 , 34 ]. The vessel phantom had an inner diameter of 6 mm and a wall thickness of 0.25 mm. Its length was approximately 8 cm. The vessel was dimensioned according to Eq.[ 1 ], the Moens-Korteweg equation [ 35 ], PWV = √(Eh/(ρd)), to have a PWV in the physiological range of 2.0 to 6.5 m/s. E is Young's modulus of elasticity, h and d are the vessel wall thickness and the diameter, and ρ is the fluid density. The phantom was stored in a water bath to stabilize for at least 14 days prior to measurements, because PVA-C is subject to an aging process that can affect its elasticity [ 34 ]. A homemade pressure and flow pulse generator was connected to the inlet end of the vessel phantom. A function generator (F34, Interstate Electronics Corporation, Anaheim, CA, USA) controlled the pulse generator. The function generator was set to generate pressure pulses every 1.8 s, so that residual waves reflected at impedance mismatches inside the vessel phantom setup could decay completely between pulses.
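As a consistency check of the dimensioning, the Moens-Korteweg relation can be evaluated for the phantom geometry. Assuming a water-like fluid density of 1000 kg/m³, the quoted elastic moduli of 0.1 to 1.0 MPa reproduce the stated 2.0 to 6.5 m/s range:

```python
from math import sqrt

def moens_korteweg(E, h, d, rho=1000.0):
    """PWV in m/s from Young's modulus E (Pa), wall thickness h (m),
    inner diameter d (m) and fluid density rho (kg/m^3)."""
    return sqrt(E * h / (rho * d))

h, d = 0.25e-3, 6.0e-3        # phantom wall thickness and inner diameter
print(round(moens_korteweg(0.1e6, h, d), 2))   # 2.04 m/s at E = 0.1 MPa
print(round(moens_korteweg(1.0e6, h, d), 2))   # 6.45 m/s at E = 1.0 MPa
```

The thin wall (h/d ≈ 0.04) keeps the phantom within the thin-walled-tube assumption underlying the Moens-Korteweg equation.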
The function generator also triggered the data acquisitions of the MR system and of the setup used to record the pressure waves inside the vessel phantom. The setup to record pressure waves comprised a pressure catheter, a signal amplifier, and a storage oscilloscope. The catheter was composed of a blunted cannula (TSK-Supra, Ebhardt-Söhne GmbH, Geislingen, Germany) and a Statham P23XL pressure transducer (Viggo-Spectramed, Inc., Oxnard, CA, USA). The signal amplifier was an MBS STAT ZAK (ZAK Psychologische und Physiologische Instrumente GmbH, Simbach, Germany). The storage oscilloscope was a TDS 3032 (Tektronix Inc., Beaverton, OR, USA). The outlet end of the vessel was connected to a reservoir filled with an aqueous solution of 1.75 mM CuSO 4 . The height of the reservoir was adjusted so that the vessel did not bloat or collapse. Two plastic rods supported the vessel on one side to prevent it from swinging in the transversal direction but allowed it to expand during the transit of the pulse. The multi-point TT method was used to determine the PWV of the vessel phantom from the pressure measurements. The measurement positions of the pressure catheter were determined at its protruding end with a calliper gauge. The pressure data were analyzed in MATLAB (The Mathworks, Inc., Natick, MA, USA) to determine the onset times of the pressure waves for each measurement position of the pressure catheter. The measurement positions were plotted versus the corresponding pressure pulse onset times. The slope of the linear regression yielded the PWV (see Figure 2b ). The multi-point TT measurements were performed at six different installation heights of the vessel phantom, ranging from -13 mm to 12 mm from the MR iso-center. The PWV at the MR iso-center was determined by linear regression through the PWV and installation height values.
The MR system used for the phantom and in vivo measurements was a Bruker AVANCE 750 spectrometer (Bruker Biospin, Rheinstetten, Germany) with a vertical main magnetic field of 17.6 T and a bore size of 89 mm. The self-shielded gradient insert, a Bruker Micro 2.5, had a maximum gradient strength of 993 mT/m and an inner diameter of 40 mm. The homebuilt transversal electromagnetic mode radio frequency (RF) resonator had an accessible inner diameter of 25 mm. The MR sequence applied phase velocity encoding to record the time courses of the flow velocities. The sequence was based on a two-dimensional FLASH sequence with incorporated velocity-compensating gradients in all three gradient directions (Figure 4 ). To encode through-slice flow velocities, bipolar velocity-encoding gradients were superposed onto the velocity-compensating gradients in the slice direction. Three flow encoding steps were applied to each scan. The velocity-encoding window of the MR sequence was set to ± 0.20 m/s to accommodate the maximal flow velocities. The flow encoding gradients required the maximal available gradient power. Transit times of the pulse waves are in the range of a few milliseconds; therefore, a temporal resolution of 1 ms was necessary to obtain the PWV from the time courses of arterial blood flow velocities. The intrinsic temporal resolution of the MR sequence, equivalent to its repetition time, was 5 ms. Hence, the time courses of the flow velocities had to be sampled in an interleaved fashion. The sequence was initiated five times, each time with an additional delay of 1 ms between the trigger signal and the initiation of the sequence. The resulting five data streams were interleaved in the post-processing to generate a recording of flow velocities with a temporal resolution of 1 ms. The detailed timing and loop structure of the MR sequence are visualized in Figure 4 . The spatial in-plane resolution was 147 × 147 μm 2 and the slice thickness was 1 mm.
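The interleaving scheme can be sketched as follows; this is a toy reconstruction of the timing logic, not the vendor sequence code, and the number of frames per run is an illustrative assumption.

```python
import numpy as np

TR = 5            # repetition time of the sequence, ms
N_RUNS = 5        # runs, with trigger delays of 0..4 ms
FRAMES = 6        # frames acquired per run (illustrative)

# run k samples the waveform at times k, k + TR, k + 2*TR, ...
runs = [np.arange(FRAMES) * TR + delay for delay in range(N_RUNS)]

# interleave the five 5-ms-spaced streams into one 1-ms-spaced record
interleaved = np.stack(runs, axis=1).reshape(-1)
print(interleaved[:10])   # [0 1 2 3 4 5 6 7 8 9]
```

Because every run is triggered by the same cardiac phase, the five coarse recordings sample successive 1 ms offsets of one reproducible waveform, which is what makes the interleaving valid.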
The field of view (FOV) was 22 × 22 mm 2 . The echo time was 1.6 ms. The Gauss RF excitation pulse had a length of 200 μs and an excitation angle of 20°. Signal averaging was not applied. The two-point and multi-point TT methods were used to determine the PWV of the vessel phantom by MR measurements. A set of two-dimensional Fast Low Angle Shot (FLASH) experiments at the beginning of the measurement protocol served to localize the measurement positions on the vessel phantom. The MR imaging slices were positioned perpendicularly to the vessel at positions ± 2.5 mm, ± 5.0 mm, ± 7.5 mm, and ± 10.0 mm from the magnet's iso-center. A perpendicular slice orientation was crucial to avoid displacement artifacts caused by fluid flowing out of the imaging slice. MR data were processed with a custom-written MATLAB routine. Velocity information was computed for pixels inside the vessel lumen by fitting a line to the phase values as a function of the first moments of the velocity encoding gradients (phase difference method). For every time frame the velocities were averaged over the luminal area. In total, 64 pressure measurements were performed before the 80 MR measurements. Differences in the mean PWV values obtained on the vessel phantom by the two measurement methods were tested by a two-sided t-test for unpaired data points for the hypothesis of equality against the alternative of differing values. The data points were not paired because many parameters, such as static pressure, temperature, and acoustic noise, could not be held constant during the transition from pressure to MR measurements due to limited access and the strong magnetic field inside the magnet. The hypothesis was accepted for p-values ≥ 0.05. PWV values are given as mean ± standard error (SE).
In Vivo Experiments
The proposed MR method was designed to utilize the two-point TT method in vivo in the descending murine aorta.
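The phase difference method rests on the linear relation φ = γ·M1·v between pixel phase and the first gradient moment. A minimal sketch (NumPy instead of the custom MATLAB routine; the moments and test velocity below are invented for illustration):

```python
import numpy as np

GAMMA = 267.513e6  # 1H gyromagnetic ratio in rad/(s*T)

# Hypothetical first moments of the three flow encoding steps (T*s^2/m)
# and the phases they would produce in a pixel moving at 0.15 m/s.
m1 = np.array([-2.0e-8, 0.0, 2.0e-8])
phases = GAMMA * 0.15 * m1          # phi = gamma * M1 * v

# Fit a line to phase versus first moment; slope/gamma is the velocity.
slope = np.polyfit(m1, phases, 1)[0]
velocity = slope / GAMMA            # m/s
```

In practice the fit is done per luminal pixel and the resulting velocities are then averaged over the luminal area for each time frame, as described above.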
Nine MR measurements were performed on a group of five female eight-month-old ApoE (-/-) mice and eight measurements on a group of four age- and sex-matched C57Bl/6J mice. The ApoE (-/-) mice were fed a western-type diet (TD 88137, Harlan Laboratories, Inc., Indianapolis, IN, USA) for 10 weeks prior to the MR measurements. During the MR examinations, the mice were anesthetized with isoflurane inhalation (1.5 - 2.0 Vol.%) in O 2 (2 L/min) applied by means of a nose cone. Mice were placed vertically (head up) in the RF resonator. Due to the small diameters of the gradient insert and the RF resonator, the body temperature of the mice could be kept constant at 37°C by adjusting the temperature of the gradient insert temperature control unit. A pressure-sensitive pneumatic balloon (Graseby Medical Limited, Watford, United Kingdom) was placed between the inner RF resonator wall and the thoraces of the mice to detect cardiac trigger and respiratory gating signals. Outside of the gradient insert, a pressure transducer (24PCEFA6 D, Honeywell S&C, Golden Valley, MN, USA) transformed the pressure signal from the balloon into an electrical signal that was amplified and processed in real-time by a homebuilt unit. Thus, electrical interference between the trigger signal and gradient fields oscillating in the same frequency domain was avoided. All experimental procedures were in accordance with institutional and internationally recognized guidelines and were approved by the Regierung von Unterfranken (Government of Lower Franconia, Germany). The reference number of the permit of the animal experiments is 55.2-2531.01-19/07. Two imaging slices were positioned perpendicularly to the thoracic and the abdominal aorta (as illustrated in Figure 1a ). Reasonable in vivo experiment times allowed only the two-point TT method to be applied. Slice separations were maximized in order to minimize the errors in the measured PWVs. Again, a perpendicular slice orientation was of crucial importance.
To localize the descending aorta, a set of two-dimensional FLASH experiments preceded the velocity measurements. The velocity-encoding window was set to ± 1.66 m/s to accommodate the maximal blood flow velocities. The RF excitation pulse had an excitation angle of 40°. The mice had a heart period of approximately 115 ms. A time window of 40 ms was sufficient to sample the late diastole and early systole. All other parameters of the measurement protocol were set as in the measurements on the vessel phantom. The total image acquisition times ranged from 15 to 20 min. MR data were processed using the MATLAB routine that was also used for the analysis of the data acquired on the vessel phantom. The curve progression in the velocity-time diagrams showed sharp bends between the sections before and during the pulse. This allowed for a semi-automatic selection of the fit ranges that defined the onset times of the flow pulse. The fit range before the pulse was selected manually. It included data points on an approximately horizontal line. The beginning of the pulse fit region was set automatically as a series of three consecutive data points with velocity values at least two standard deviations above the extrapolated fit line of the section before the pulse. The end point was selected manually as the last data point on the first straight portion of the flow pulse section. The program AMIRA (Visage Imaging, Inc., San Diego, CA, USA) was used to measure the distance between the two measurement locations. To this end, straight-line segments were drawn along the luminal midline in a longitudinal reference scan of the aorta. Isoflurane causes respiratory depression and, therefore, anesthetized mice develop a gasping breathing pattern with a period of approximately 1 s to 1.5 s. The duration of each respiratory movement is approximately 0.4 s. The MR data were acquired in the intermediate movement-free time intervals.
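The automatic part of the onset selection (three consecutive samples more than two standard deviations above the extrapolated baseline fit) can be sketched like this; the trace and the manually chosen baseline range are synthetic:

```python
import numpy as np

def pulse_onset(t, v, baseline_end):
    """Index of the first of three consecutive samples lying more than
    two standard deviations above the extrapolated baseline fit."""
    coeffs = np.polyfit(t[:baseline_end], v[:baseline_end], 1)
    sd = np.std(v[:baseline_end] - np.polyval(coeffs, t[:baseline_end]))
    above = v > np.polyval(coeffs, t) + 2 * sd
    for i in range(baseline_end, len(v) - 2):
        if above[i] and above[i + 1] and above[i + 2]:
            return i
    return None

# Synthetic trace: slightly wavy baseline, then a linear upstroke at 25 ms.
t = np.arange(40.0)                      # ms
v = 0.001 * np.sin(t)                    # m/s, baseline ripple
v[25:] += 0.05 * (t[25:] - 24)           # flow pulse
onset = pulse_onset(t, v, baseline_end=20)
```

The manual steps in the described procedure (choosing the baseline range and the end of the pulse fit region) are represented here only by the fixed `baseline_end` argument.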
From the periodicity of previous respirations, the homebuilt heart-triggering/breath-gating unit anticipates the next phase of respiratory motion and stops the MR data acquisition before the respiratory motion sets in. Data acquisition is continued approximately 0.4 s after the detected onset of breathing motion. Occasionally, the breathing pattern was not precisely periodic. In these cases, data were acquired during the respiratory movement, resulting in motion artifacts in the MR images. Sporadic non-periodic respiration was observed during all measurements, but caused no adverse motion artifacts in most of the measurements. In some of the affected images, the signal amplitude and phase information of extra-luminal pixels were shifted into the vessel lumen. Those pixels were excluded from the calculation of the flow velocities when the anatomical structures that those pixels belonged to could be identified visually. Differences in the mean PWV values of the two animal groups were tested by a two-sided t-test for the hypothesis of equality against the alternative of a higher value for the ApoE (-/-) group. The hypothesis was rejected in favor of the alternative for p-values < 0.05. In this work all in vivo PWV values are given as mean ± standard deviation (SD).
Results
Phantom Experiments
On the vessel phantom, a reference PWV of 3.31 ± 0.18 m/s was obtained by the pressure catheter measurements. The multi-point MR measurements yielded a PWV of 3.32 ± 0.18 m/s. The difference between the PWV values of the pressure and MR measurements is not statistically significant (p = 0.999). The parameters and results of the validation measurements are summarized in Table 1 . The PWV values of the two-point MR measurements agree within their standard errors. A representative pressure wave in the vessel phantom and the multi-point TT method (explained in the methods section) are shown in Figure 2 .
In Vivo Experiments
Figure 1 shows representative time courses of flow velocities, which were recorded on an ApoE (-/-) mouse. The measured PWV values (Figure 5 ) ranged from 1.9 m/s in a C57Bl/6J mouse to 3.8 m/s in an ApoE (-/-) mouse. The average values of each animal group are PWV = 3.0 ± 0.6 m/s for ApoE (-/-) mice and PWV = 2.4 ± 0.4 m/s for C57Bl/6J mice. The mean value of the ApoE (-/-) group was higher than that of the C57Bl/6J group with statistical significance (p = 0.014). The heart periods of the examined mice changed by less than 5 ms during the experiments. The experiment parameters and the results for both animal groups are summarized in Table 2 .
Discussion
A pulsatile elastic vessel phantom with a physiological PWV was developed. PWV values of the vessel phantom were investigated by MR and pressure measurements. The mean PWV determined by the MR multi-point TT method showed no statistically significant difference to the reference value; the developed MR TT method is therefore accurate. The agreement of the MR two-point TT measurements at the different measurement positions on the vessel phantom indicates that the MR method delivers reproducible PWV values. These measurements also indicate that the separation of the imaging slices should be maximized in order to reduce the uncertainty in the measured PWV. A statement about the in vivo precision of the MR method in a best-case scenario cannot be made from the results of the phantom measurements, because the standard deviation of the phantom measurements is higher than that of the in vivo measurements. Different error mechanisms, such as residual transverse vessel wall oscillations [ 36 ], water droplets forming on the outside of the phantom vessel wall, and a jitter in the pressure pulses reduced the precision of the phantom measurements. The results of this study show that the PWV is measurable in vivo in the descending murine aorta using a two-point TT CMR method and a CMR system with a main magnetic field strength of 17.6 T. The measurements are made possible by the high SNR intrinsic to high-field CMR and the large filling factor of the used RF resonator. The interleaved acquisition scheme and the pneumatic triggering technique (explained in the methods section) provide the necessary temporal resolution. Heart periods are sufficiently stable throughout the experiments, due to reliable anaesthesia and body temperature control. Flow compensation and the short echo time of the MR sequence greatly reduce motion artifacts.
Additionally, the short echo time alleviates susceptibility artifacts, which are usually caused by boundaries of tissues with differing magnetic susceptibilities in high-field CMR. Wave reflections, mainly generated by the micro-vascular bed, i.e. the arterioles, change the shape of the flow wave and affect the estimation of PWVs [ 37 , 38 ]. It is critical for the determination of the PWV of the forward travelling pulse wave that only the reflection-free part of the wave is analyzed. Assuming a hypothetical PWV of 5 m/s and a distance of 20 mm between the abdominal measurement position and the microvascular bed, the reflected pulse wave needs 8 ms to return to the abdominal aorta. In this study, at most the early 7 ms of the flow velocity upstroke were used for the calculation of the PWV; wave reflections therefore did not impede the analysis. The PWV measured on the group of eight-month-old ApoE (-/-) mice was 25% higher than the PWV in the age-matched wild-type group. The in vivo experiments showed that the precision of the measurement method suffices to distinguish between the PWVs of different animal groups. The difference is statistically significant (p = 0.014). The PWV values measured in mice in this study are lower than some values previously reported in the literature but agree in general with most publications [ 25 , 39 , 40 ]. The deviations between the findings of this study and the literature might be caused by systematic deviations inherent to the different methods of measurement, as the differing results of ultrasound and pressure measurements on comparable animal groups by Hartley and Wang suggest. Deviations can also be caused by differences in the age of the animals, by the use of different animal strains, or by the vertical positioning of the animals in the MR system.
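The 8 ms estimate follows directly from the round-trip distance; as a quick check of the hypothetical scenario described above:

```python
# Round-trip reflection time for the hypothetical worst case above:
# PWV of 5 m/s and 20 mm from the abdominal slice to the reflection site.
pwv = 5.0          # m/s
distance = 0.020   # m
return_time = 2 * distance / pwv   # s; twice the distance, there and back
# return_time is 0.008 s, i.e. 8 ms, so analyzing only the first 7 ms
# of the upstroke stays clear of the reflected wave.
```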
The PWVs measured in vivo in this study deviate considerably from values found in an MR study that analyzed the part of the pulse wave that contains wave reflections [ 38 ]. Wang et al. found no significant differences in PWV values between ApoE (-/-) and C57Bl/6J mice that were younger than 13 months [ 24 ]. In this study, a significant difference was observed already at eight months of age. In the current work, the ApoE (-/-) mice received a different, western-type diet, which accelerates aortic wall remodelling and elastic fiber destruction and thus increases the PWV [ 41 ]. As a general remark about the field of application of the TT method, it has to be considered that the TT method delivers a regional PWV value, which is averaged over the propagation pathway between the measurement locations. In early stages of atherosclerosis, small vascular lesions are scattered along the aorta, which affect the PWV locally. Therefore, research interest also focuses on the local elastic properties of the murine aorta. Our workgroup successfully applied the QA method [ 17 ] to mice in another study [ 22 ] to determine local PWVs. The QA method uses the volume flow (Q) to cross-sectional area (A) relation during the early systole to calculate the PWV. The QA method is unrivalled for local estimation of the PWV, but since it employs cross-sectional vessel areas, it involves an additional source of error. The TT method is more robust and requires less user interpretation because it operates without the determination of cross-sectional areas [ 42 ].
Conclusions
This study demonstrates the feasibility of the non-invasive determination of the pulse wave velocity by the transit time method and high-field CMR. The measured PWVs are accurate and reproducible in comparison to reference pressure measurements on a vessel phantom and sufficiently precise to distinguish between animal groups. Therefore, the measured PWV can be used as a marker to classify the state of arterial dysfunction in living mice. Due to the short in vivo acquisition times of 15 to 20 minutes, the presented MR method can enhance studies of morphologic parameters characterizing the murine arterial system [ 21 , 43 ] by adding the PWV as a functional parameter, forming a comprehensive set of parameters.
Background
Transgenic mouse models are increasingly used to study the pathophysiology of human cardiovascular diseases. The aortic pulse wave velocity (PWV) is an indirect measure for vascular stiffness and a marker for cardiovascular risk.
Results
This study presents a cardiovascular magnetic resonance (CMR) transit time (TT) method that allows the determination of the PWV in the descending murine aorta by analyzing blood flow waveforms. Systolic flow pulses were recorded with a temporal resolution of 1 ms applying phase velocity encoding. In a first step, the CMR method was validated by pressure waveform measurements on a pulsatile elastic vessel phantom. In a second step, the CMR method was applied to measure PWVs in a group of five eight-month-old apolipoprotein E deficient (ApoE (-/-) ) mice and an age-matched group of four C57Bl/6J mice. The ApoE (-/-) group had a higher mean PWV (PWV = 3.0 ± 0.6 m/s) than the C57Bl/6J group (PWV = 2.4 ± 0.4 m/s). The difference was statistically significant (p = 0.014).
Conclusions
The findings of this study demonstrate that high-field CMR is applicable to non-invasively determine and distinguish PWVs in the arterial system of healthy and diseased groups of mice.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MP designed and manufactured the vessel phantom, designed and programmed the MR sequence and data analysis software, performed the phantom and in vivo measurements and the data evaluation, and drafted the manuscript. VH participated in the design and development of the MR sequence and data analysis software, performed the in vivo measurements, and participated in the phantom design. GK participated in the design of the study, helped coordinating it, and performed animal handling and supervision before and during in vivo experiments. WRB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. ER designed and manufactured the heart triggering and breath-gating unit, participated in the manufacturing of the vessel phantom, and coordinated the study. PMJ participated in the conception of the study, provided the MR system and custom-made MR hardware, and critically revised the manuscript for important intellectual content. All authors read and approved the final manuscript.
Acknowledgements
This work was funded by the Deutsche Forschungsgemeinschaft in the scope of Sonderforschungsbereich 688 'Mechanisms and imaging of cardiovascular cell-cell interactions' (grant number: SFB 688).
J Cardiovasc Magn Reson. 2010 Dec 6; 12(1):72
Background
One of the most fundamental aspects of biological control is the regulation of size, on the level of the individual cell, an organ, and the whole organism. Studies in yeast have yielded scores of genes controlling size, many associated with ribosomal protein synthesis [ 1 ]. In metazoan organisms, growth and size control are usually studied on the level of either whole organs or even whole organisms, and several genetic mechanisms involved in organism and organ size control have been elucidated [ 1 , 2 ]. For example, signaling pathways triggered by insulin and TGFβ are known to control organismal size [ 1 - 4 ]. Moreover, intriguing links between size control and tumor formation and suppression have been found in the form of genes such as Myc , Brat , and TFG [ 1 , 2 , 5 , 6 ]. In spite of these advances, size regulation in the nervous system is poorly understood, even though the size differences of neurons are particularly astonishing. Cross-sectional cell soma size of neurons ranges widely from 0.005 mm to 0.1 mm in mammals. Size in terms of length of axon and dendrites can also differ hugely from neuron type to neuron type, from several microns to several meters within one given mammalian species. Two different nematode species, Caenorhabditis elegans and Ascaris suum , have the same number and types of neurons (their axonal projection patterns are identical as well), yet they differ in soma size and neuronal process length by several orders of magnitude [ 7 ]. Even though the astounding range of neuron sizes in the nervous system has been known for a long time, few genes have been found that specifically control neuronal soma size. One striking case is the gene encoding the phosphatase PTEN, which, when knocked out, results in a significant increase in neuron soma size, an effect mediated by the kinase mammalian target of rapamycin (mTOR) [ 8 - 10 ].
The importance of the PTEN-mediated neuron-size regulation is illustrated by Lhermitte-Duclos disease, which is characterized by overgrowth of neuronal soma [ 8 , 9 ]. Neuron size regulation is particularly enigmatic when considering size difference between otherwise quite similar neuronal cell types. Such differential size regulation is strikingly apparent in one intriguing and poorly understood context in the nervous system, that of neuronal laterality. In general, nervous systems are morphologically bilaterally symmetric, yet they often are lateralized (left/right asymmetric) in specific functions [ 11 ]. That is, groups of neurons located on one side of the brain perform different tasks than their mirror-symmetric neurons on the contralateral side of the brain. This lateralization is evident in many nervous systems across phylogeny, from worms to humans [ 11 - 14 ]. Yet how such asymmetry is genetically programmed is poorly understood. Curiously, in spite of the strong functional lateralization of many brain areas, there are very few genetic correlates to this asymmetry, that is, very few genes are known to be expressed in a left/right asymmetric manner in the adult nervous system of any species [ 12 - 14 ]. However, there is another quite striking correlate to functional asymmetry that has been described in several systems: a difference in soma size of contralateral neuronal ensembles. For example, within several subfields of the human hippocampus, there are regional differences in soma size in the left versus right hemisphere [ 15 ]. Intriguingly, these hemispheric soma size differences are abrogated in schizophrenic patients [ 15 ]. Left/right asymmetric soma size differences have also been observed within auditory and language-associated regions of the temporal lobe [ 16 ]. Similarly, the optic tectum of birds, which is strongly functionally lateralized, displays soma size differences in contralateral neuron types [ 17 , 18 ]. 
It is, however, not clear how widespread the coupling of functional lateralization and size regulation is. Also, virtually nothing is known about the underlying molecular pathways that control cell size in these left/right asymmetric, neuronal contexts. The nematode C. elegans contains an exquisitely well-characterized, largely bilateral nervous system that also displays functional lateralization [ 12 , 13 ] and therefore serves as a good model to investigate the problem of neuronal left/right asymmetry. We investigate here a pair of chemosensory neurons, the ASE neurons (Figure 1A ). These two neurons, a left and a right one (ASEL and ASER) are symmetrically positioned in one of the main head ganglia of C. elegans and are bilaterally symmetric in many morphological (dendritic morphology, synaptic connectivity) and molecular (gene expression) regards [ 12 , 19 , 20 ]. However, each neuron senses a distinct spectrum of chemosensory cues and expresses a distinct spectrum of putative chemoreceptors (Figure 1A ) [ 12 , 21 ]. Moreover, one neuron (ASEL) responds to upshifts in the concentration of a chemosensory cue, inducing runs in the locomotory behavior of the animal, while the other neuron (ASER) responds to downshifts, inducing reversals of the animal [ 22 ]. This lateralization is controlled through a complex bistable system composed of several gene regulatory factors, including regulatory RNAs and transcription factors [ 23 ]. Even though its neuronal anatomy has been described in detail, neuronal size has, somewhat curiously, not been studied at any great depth in C. elegans . Moreover, it has not been addressed whether functionally lateralized neuron pairs display soma size differences. If this were indeed the case, it may be possible to link genetic mechanisms that control functional lateralization to lateralized size control. We investigate this issue in this paper.
Materials and methods
Transgenic reporter strains
The following transgenes were used to measure neuron soma sizes: ASEL/R, otIs125 = flp-6 prom ::gfp; otIs242 = che-1 prom ::gfp ; AWCL/R, otIs151 = ceh-36::dsRed2 ; AWC on/off , otEx9961 = srsx-3::TagRFP ; AWCL/R, oyIs28 = odr-1::gfp ; ADFL/R, zdIs13 = tph-1::gfp ; AWBL/R, kyIs104 = str-1::gfp ; ASKL/R, otEx4302 = sra-9::gfp ; AIYL/R, otIs173 = ttx-3 prom ::gfp . ASE nuclear size was measured with otIs188 ( che-1 fosmid ::yfp ).
Measurements of ASE features
For the soma or nuclear size measurements, transgenic worms harboring neuron-type-specifically expressed reporter constructs were picked at the desired stage (either L1 or adult) and examined using an Axioplan 2 microscope and a Sensicam QE camera controlled by Micro-Manager software [ 51 ]. Worms were rolled on the cover slip such that ASEL and ASER were in the same plane (dorso-ventral view), and stacks were made with a 63 × oil-immersion objective at 1 μm depth. The stacks were analyzed using ImageJ software, where the contrast of the cell was chosen such that the fluorescence intensity did not impinge on neighboring cells, and the ImageJ plugin Voxel Counter was used to count the number of pixels for each cell. GFP intensity was normalized by cropping stacks around each cell separately and adjusting the brightness levels of the two stacks such that the maximum intensity level of each stack was reset to one standard value. Statistical analysis of the relative sizes within a given strain was also performed using a paired two-tailed t -test; significance was determined using the Bonferroni correction. For sets of experiments where n ≥ 3, we employed the Bonferroni correction: instead of using thresholds of P < 0.05 or P < 0.01, we used stricter P -value thresholds of P < 1-((1-0.05) 1/n ) and P < 1-((1-0.01) 1/n ), respectively, where n is the number of experiments in a given set.
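The corrected thresholds quoted above can be computed directly; for example (a trivial helper, not part of the authors' pipeline):

```python
def corrected_threshold(alpha, n):
    """Per-experiment P-value threshold 1 - (1 - alpha)**(1/n),
    as defined in the text for a set of n experiments."""
    return 1.0 - (1.0 - alpha) ** (1.0 / n)

# For n = 3 experiments, the 0.05 threshold tightens to about 0.017
# and the 0.01 threshold to about 0.0033.
t05 = corrected_threshold(0.05, 3)
t01 = corrected_threshold(0.01, 3)
```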
We measured cross-sectional diameters in the electron micrographs by tracing each dendrite in ImageJ and using the Measure tool. We measured ploidy by ethanol fixation followed by DAPI staining, using either otIs151 ( ceh-36 prom ::rfp ) or otIs232 ( che-1::mChopti ) for ASE cell identification. Image stacks of DAPI-stained worms were taken using the method described above. We measured DAPI intensity as a proxy for DNA amount and report the data as relative DAPI intensities. We used freeze fracture followed by methanol/acetone fixation for immunostaining. To determine nucleoli size and number, we used cguIs001 ( fib-1::gfp ) [ 52 ] and an antibody against Nop1p (FIB-1) from EnCor BioTechnology (#MCA-38F3, Gainesville, FL, USA) at a 1:200 dilution, detected with a 1:200 dilution of an anti-mouse (Invitrogen #A-21202, Carlsbad, CA, USA) secondary antibody.
Results
The pair of ASE neurons displays size asymmetries
We visualized the ASEL and ASER gustatory neurons in live animals using chromosomally integrated gfp reporter gene constructs in which ASE-expressed cis -regulatory sequences drive non-localized green fluorescent protein (GFP), which diffuses throughout the entire cell and its processes (Figure 1A ). Using two different transgenes ( otIs242 = che-1 prom ::gfp and otIs125 = flp-6 prom :: gfp ), we find that the two neuron soma show consistent and highly stereotyped size differences in adult animals (see Materials and methods for details on size measurements). The volume of the soma of ASER is more than 30% larger than the soma of ASEL (Figure 1 ). We next examined the size of specific structures in the soma. Using a gfp reporter that is targeted to the nucleus of ASEL and ASER, we find that the volume of the nucleus of ASER is not significantly different from that of the ASEL neuron (Figure 2A ). We estimated DNA content (that is, ploidy) of the ASEL versus ASER cell using the standard DAPI stain and observed no significant difference either (Figure 2B ). We then visualized the number and size of nucleoli. We find that the ASER neuron contains, on average, more nucleoli (Figure 2C,D ). Using a set of available electron microscopical sections of the head regions of two different worms, we found that these size differences are not restricted to soma volume, but extend to the relative cross-sectional areas of these neurons. They show an almost twofold difference in cross-sectional area, which translates into a two-fold difference in the volume per unit length (Figure 3A ). These results were confirmed with confocal imaging of dendritic diameter using gfp reporters (Figure 3B ). The axonal projections of ASEL/R into the nerve ring also show lateralities in diameter (Figure 3C ). The overall length of the axonal projections and dendrites are the same on the left and right [ 19 ].
We also examined a panel of additional neuron pairs in the head ganglia. We examined four additional sensory neuron pairs (AWCL/R, ADFL/R, AWBL/R, ASKL/R) and one interneuron pair (AIYL/R; the main postsynaptic target of ASEL/R). We found that even though there was some variation in individual animals, none of these neurons showed, on average, any indication of a consistent laterality in soma size (Figure 1C,D ). This notion was corroborated by an analysis of sensory dendrite diameter, in which we also found no significant sidedness (Figure 3A ), again in contrast to the situation with ASEL/R. We examined the AWCL/R case in more detail. Like the ASEL/R gustatory neuron pair, this olfactory neuron pair is known to be functionally lateralized. The left versus right neurons sense different sensory cues and process information differentially [ 13 , 24 , 25 ]. However, in contrast to ASEL/R laterality, which is deterministic (that is, 100% invariant; a phenomenon called 'directional asymmetry') [ 26 ], AWCL/R asymmetry is stochastic (a phenomenon called 'antisymmetry') [ 26 ]. This lateralization can be visualized with two distinct putative odorant receptors, str-2 and srsx-3 [ 27 ]. In 50% of animals str-2 is expressed in the AWCL, while in the other 50% it is expressed in AWCR. srsx-3 shows the complementary pattern. The str-2 -expressing cell has traditionally been called the AWC on cell [ 24 ]. Even though, on average, AWC soma showed no laterality, we tested whether the AWC on or AWC off cell may correlate with a specific relative size. However, this is not the case (Figure 1C,D ). Taken together, the functionally lateralized ASEL/R neuron pair shows a consistent soma size laterality that is paralleled by axonal, dendritic, and nucleolar lateralities, but not by lateralities in nuclear size or DNA content. The neuron pairs that we examined for lateralities included neuron pairs in physical proximity to ASEL/R and/or related by common ancestry (that is, lineage). 
A lack of directional asymmetry in these related neuron pairs illustrates that it is not simply the case that one side ('hemisphere') of the worm is larger than the other, but rather that neuron size is regulated in a neuron-type-specific manner. We also note that absolute size measurements of other neuron pairs differ from neuron type to neuron type, with the larger ASER not being larger than other neuron pairs and the smaller ASEL not being smaller than yet other neuron pairs. It is therefore not obvious whether the size difference between ASEL and ASER is due to 'overgrowth' of ASER or 'growth inhibition' of ASEL.
Size differences translate into distinct electrophysiological properties
One of the most likely functional consequences of a difference in size is a difference in the passive spread of voltage from one end of a neuron to the other. To assess whether the observed left-right differences in neurite diameters are theoretically sufficient to produce a significant difference in voltage spread, we modeled ASE neurons as a pair of cylindrical cables representing the dendrite and axon. The cables were joined at one end and sealed at the other. The soma was omitted because it is too small to affect the extent of voltage spread [ 28 ]. Voltage spread is a function of the ratio R of membrane and axial resistivity as well as the anatomical dimensions. R was set to the value obtained in a previous analysis of ASER neurons [ 28 ]. Here we assume that the effective passive electrical properties of ASEL and ASER, including the value of R , are the same for small depolarizations in the likely operating range of the neurons. Partial support for this assumption is provided by the fact that the steady-state current-voltage relationships of these neurons are nearly identical in their operating range. Dendrite and axon lengths were measured in confocal reconstructions from GFP-labeled ASE neurons in unfixed animals (dendrite, 116 μm, n = 28; axon, 80 μm, n = 18).
The diameters of the dendrites and axons of ASEL and ASER neurons were measured separately in each of 13 worms (Figure 3B,C ). For each worm, we used standard cable theory [ 28 , 29 ] to compute the steady-state voltage at the beginning or end of the axon in response to a unit depolarization of the distal tip of the dendrite (representing the sensory cilium, where sensory transduction is believed to occur in real ASE neurons). We found small but significant differences in the extent of voltage spread at both locations (Figure 4 ). As output synapses from ASEL and ASER neurons reside along the entire length of their axons, we conclude that differences in process diameters could result in stronger outputs from ASER neurons.
Size laterality does not depend on sensory activity, but is embryonically programmed by the che-1 transcription factor
The soma size lateralities in the optic tectum of birds correlate with loci of functional lateralities, and those functional lateralities are dependent on visual input, that is, neuronal activity [ 11 , 17 , 18 ]. We therefore tested whether activity of the ASE neurons has an impact on their size differences. We examined soma size lateralities in a number of mutants in which the ASE neurons are not able to sense or transduce sensory stimuli. We observed no effect on soma size laterality (Figure 5A ). Keeping animals in a sensory-deprived environment by hatching them in water also does not affect soma size lateralities (Figure 5A ). These findings suggest that rather than being activity-dependent, size lateralities may be developmentally programmed. To test this notion, we examined ASEL/R size laterality not just in the adult, but also at earlier stages. We indeed find that already at the first larval stage, right after hatching, the difference in size between the two neurons is already as apparent as in the adult (Figure 5B ).
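For intuition about why diameter matters, the standard sealed-end cable solution gives the far-end voltage as V(L)/V(0) = 1/cosh(L/λ) with space constant λ = sqrt(R·d/4). A sketch of this relation follows; only the dendrite length comes from the text, while the diameters and the value of R are illustrative assumptions, not the fitted values of [ 28 ]:

```python
import numpy as np

def end_attenuation(L, d, R):
    """Steady-state voltage at the sealed far end of a uniform cable,
    relative to a unit voltage at the near end:
    V(L)/V(0) = 1 / cosh(L / lambda), lambda = sqrt(R * d / 4)."""
    lam = np.sqrt(R * d / 4.0)   # space constant, m
    return 1.0 / np.cosh(L / lam)

L = 116e-6   # dendrite length from the text, m
R = 0.05     # assumed membrane/axial resistivity ratio, m
a_thin = end_attenuation(L, 0.3e-6, R)   # thinner, ASEL-like diameter (assumed)
a_thick = end_attenuation(L, 0.6e-6, R)  # thicker, ASER-like diameter (assumed)
# The thicker process attenuates the signal less (a_thick > a_thin),
# consistent with the stronger predicted output of ASER.
```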
Going back to the 450-minute stage of embryogenesis (100 minutes after the ASE neurons are formed), we already observe size differences. The observation of differential size regulation occurring in the C. elegans embryo is somewhat unexpected as, in contrast to the enormous size increase of all cell types after hatching, there is in general little overall cell growth in embryos. Rather, as the overall volume of the embryo is constant, every cell division results in smaller daughter cell sizes. To begin analyzing the genetic mechanisms that underlie these size differences, we first used a genetic background in which the ASEL/R neurons fail to be appropriately specified. The ASEL/R-specific che-1 Zn finger transcription factor is required for the correct development of ASEL/R neurons; in che-1 mutants, ASEL/R neurons are not functional (that is, animals are not able to chemotax to water-soluble attractants, hence the name che ), and fail to express scores of genes that are normally expressed in ASE, yet the ASE neurons are still generated [ 20 , 30 , 31 ]. Measuring the size of ASE neurons in che-1 mutants, we find that the soma size differences of ASEL and ASER are eliminated (Figure 5C ). Left/right size differences are therefore programmed through the activity of the che-1 transcription factor. Gene regulatory factors that control functional laterality also control size asymmetry We next turned to a set of genes that we have previously identified as controlling the functional left/right asymmetry of the ASE neurons [ 23 ]. A complex regulatory system, composed of transcription factors and regulatory RNAs, controls the left/right asymmetric expression of distinct putative chemoreceptors of the gcy gene family in ASEL versus ASER (Figure 6A ). The activity of what we termed 'class I' regulatory genes promotes ASER fate, and their loss leads to a conversion of ASER to ASEL.
'Class II' regulatory genes have the opposite activity; they promote ASEL fate and their loss leads to a conversion of ASEL to ASER. Class I and class II genes cross-inhibit each other's activities (Figure 6A ). We first analyzed ASE soma size lateralities in three different genetic contexts in which both neurons are transformed to the ASER fate ('2 ASER'; as assessed by gcy chemoreceptor gene expression). We used animals carrying loss-of-function mutations in the ASEL inducers die-1 (a Zn finger transcription factor) and lsy-6 (a miRNA), and transgenic animals in which the ASER-inducer cog-1 (a homeobox gene) is ectopically expressed in both ASE neurons. We find that in all three genetic backgrounds, both ASE neurons now adopt the larger size that is normally characteristic of ASER (Figure 6B ). Similarly, we analyzed ASE soma size lateralities in two different genetic contexts in which both neurons are transformed to the ASEL fate ('2 ASEL'; as assessed by gcy chemoreceptor gene expression), namely in animals carrying a loss-of-function mutation in the ASER inducer cog-1 and in transgenic animals that ectopically express the ASEL-inducer lsy-6 bilaterally in both ASE neurons. In both genetic backgrounds, both ASE neurons now adopt the smaller size that is normally characteristic of ASEL (Figure 6B ). The effect of die-1 manifests itself not only on the soma size difference of ASEL/R, but also on the difference in the number of nucleoli; they become bilaterally symmetric in the die-1 mutant (Figure 6C ). ASEL and ASER inducers act in a feedback loop [ 32 ]. We sought to determine which genes provide the output from this loop to size control. For the determination of left/right asymmetric chemoreceptor expression, die-1 is the output, as the effect of die-1 on all previously known lateralities is epistatic to any genetic manipulations in the loop [ 32 ]. We performed a similar epistasis experiment, scoring asymmetric soma size.
We find that die-1 is epistatic to both manipulations of cog-1 and lsy-6 activity (Figure 6B ). That is, the '2 ASEL size' phenotype of either cog-1(-) or lsy-6 misexpression is reverted to the '2 ASER size' phenotype in a die-1(-) background. The two transcription factors lim-6 (a LIM homeobox gene) and fozi-1 (a Zn finger transcription factor) act downstream of die-1 as effector genes, regulating a subset of left/right asymmetric features of ASEL and ASER (Figure 6A ) [ 32 , 33 ]. We find that these regulators have no impact on the ASEL/R soma size differential (Figure 6B ). Taken together, these findings show that size laterality is tightly controlled by a genetic regulatory mechanism that defines other aspects of laterality of the ASEL and ASER neurons as well. The control of left/right asymmetric size and chemoreceptor expression does, however, branch out downstream of die-1 (Figure 6A ), as lim-6 and fozi-1 affect chemoreceptor expression but not size. We hypothesize that die-1 regulates either directly or indirectly the expression of effector genes that control size. A candidate gene approach identifies the nucleolar protein FIB-1 as a size regulator The impact of the DIE-1 and CHE-1 transcription factors on lateralization of soma size is presumably mediated by gene(s) that are under control of these factors and possibly expressed in a left/right asymmetric manner. In an attempt to identify these effector genes, we tested a large number of candidate genes for an effect on ASEL/R soma size differences. These candidates encode proteins that have, in various different systems, been implicated in controlling cell size. The candidate genes that we tested (a total of 24 loci, some tested with both gain- and loss-of-function alleles) are listed in Table 1 and results are shown in Figure 7 . Among the tested strains are animals mutant for components of the insulin receptor-like signaling system, the C.
elegans Myc homolog mml-1 [ 34 ], regulators of ribosomal RNA synthesis like Brat/ ncl-1 [ 1 ], sma and lon genes [ 4 ], the C. elegans homolog of the nucleolar protein Fibrillarin, FIB-1, and a recently discovered set of genes involved in body size control in worms (CREB-like gene crh-1 , nucleostemin/ nst-1 , translational initiation factor eIF2B/ iftb-1 , tumor suppressor gene TFG/tfg-1 ) [ 6 ]. We also tested the impact of a calcium-dependent pathway that in other systems is involved in cell swelling in response to external/environmental challenges ('regulatory volume decrease') [ 35 ]. We found that reduction or elimination of only some of the candidate size regulators affected overall ASEL and ASER size (Figure 7A,B ). These include the phosphatase PTEN, the kinase AKT, the tumor suppressor Brat/Ncl-1 and the small GTPase Rheb-1, but surprisingly, not canonical size regulators, such as the insulin/IGF-1 receptor (Figure 7A,B ). Of all the mutant animals tested, only one mutation eliminated the difference in soma size between ASEL and ASER (Figure 7B ). These animals carry a deletion allele, ok2527 (kindly provided by the Oklahoma C. elegans knockout consortium; Figure 7C ), that eliminates the nucleolar protein Fibrillarin/FIB-1, an RNA methyltransferase involved in ribosome biogenesis [ 36 ]. This finding is in accordance with the observation that ASER contains more FIB-1 positive nucleoli than ASEL (Figure 2 ). Linking FIB-1 accumulation to the upstream gene regulatory factors, we find that in die-1 mutants, the number of FIB-1(+) nucleoli increases in ASEL (Figure 6C ). Even though fib-1 is required for the manifestation of the size differences, it is not sufficient, as we did not observe any effect on the size differential in transgenic animals that overexpress fib-1 bilaterally in both ASEL and ASER using the ceh-36 promoter (four transgenic lines tested; data not shown).
We also note that loss of fib-1 has no effect on left/right asymmetric chemoreceptor expression ( gcy-5 and gcy-7; data not shown), corroborating the notion that size control can be decoupled from other aspects of ASEL/R laterality. In conclusion, our candidate gene analysis has uncovered a protein with a function in nucleolar biogenesis required for left/right differential size laterality in the nervous system.
Discussion We describe here a developmentally programmed size laterality of a functionally lateralized neuron pair. It is striking that the theme of lateralized soma sizes in functionally lateralized brain regions is conserved from higher vertebrates (for example, the optic tectum in chick [ 17 , 18 ]) to a simple invertebrate like C. elegans . The theoretical differences in passive voltage spread presented here (Figure 4 ) could have significant functional consequences. Other things being equal, one would expect stronger synaptic outputs from ASER in response to the same level of depolarization in the cilia of the two neurons. Notably, it can be shown from first principles that for chemotaxis in a radial gradient, "off cells" like ASER (i.e. neurons responding to a decrease of a signal) are sufficient, whereas "on cells" like ASEL (i.e. neurons responding to an increase of a signal) are not [ 37 ]. Thus, worms with stronger ASER outputs would enjoy a selective advantage, which may have resulted in an increase in ASER size. If validated experimentally, differential voltage spread would join a growing list of distinct properties of the ASEL versus ASER neurons, including differential sensation of taste cues, differential chemoreceptor expression, differential response to upsteps (ASEL) versus downsteps (ASER) of chemosensory cues and differential contributions to spatial orientation behaviors [ 36 , 38 ]. These features are layered upon otherwise largely symmetric characteristics of ASE [ 20 ]. However, in contrast to the invariant left/right asymmetric expression of chemoreceptors, we note that the ASER > ASEL size differences are only observed when averaged over a population. That is, there are individuals in which either no differences in size are observed or in which the size asymmetry is reversed.
Whether this is due to experimental error or is an indication of distinct chemosensory capacities of individual animals within a population remains to be determined. We provide here three mechanistic insights into how differential size regulation is achieved. First, we find that size asymmetries are not activity-dependent, but developmentally controlled. Second, we have identified a transcriptional regulator, the Zn finger transcription factor DIE-1 (as well as its upstream regulators), which controls size laterality. The involvement of die-1 in controlling size parallels its involvement in controlling lateralized chemoreceptor expression. However, transcription factors acting downstream of die-1 , namely the lim-6 LIM homeobox gene and the fozi-1 Zn finger factor, which also affect chemoreceptor expression, do not affect differential size regulation. Regulatory pathways controlling size and chemoreceptor expression therefore branch downstream of die-1 (summarized in Figure 8 ). Third, we have identified the functionally as yet uncharacterized C. elegans fibrillarin gene fib-1 as a gene required for ASEL/R size laterality. fib-1 encodes a phylogenetically conserved RNA methyltransferase involved in ribosome biogenesis whose human homolog is a nucleolar autoantigen for the non-hereditary immune disease scleroderma [ 39 ]. Our demonstration that loss of fib-1 results in alterations in cell size may not be unexpected, given that yeast fibrillarin has been found to control pre-rRNA processing, pre-rRNA methylation and ribosome assembly [ 40 ] and that nucleolar size and ribosomal biogenesis have been previously linked to cell size control [ 1 ], but our results nevertheless provide the first direct implication of fibrillarin in cell size control and they also place fibrillarin activity and nucleolar size into a previously unknown cellular and functional context.
fib-1 acts downstream of, and is therefore a target of, the die-1 Zn finger transcription factor, a conclusion based on our observation that the number of FIB-1(+) nucleoli increases, together with overall size, if normal die-1 expression in ASEL is lost. At this point, we cannot tell whether the fib-1 locus is a direct transcriptional target of DIE-1 or whether differential FIB-1 accumulation in ASEL versus ASER is an indirect consequence of DIE-1 function in ASEL (or absence thereof in ASER). fib-1 is unlikely to be the sole (direct or indirect) target of DIE-1 in the context of size control since fib-1 , unlike die-1 , is not sufficient to impose ASER size. Work in yeast and flies has amply demonstrated that the genes encoding nucleolar proteins involved in ribosome biogenesis, such as fibrillarin, are co-regulated through common transcriptional control mechanisms ('Ribi regulon') [ 41 - 44 ]. Several distinct types of transcription factors are involved in controlling the Ribi regulon, such as the yeast Forkhead-like protein Fhl1 or, in metazoans, the Myc transcription factor [ 42 - 44 ]. DIE-1 may either be directly involved in such a co-regulatory mechanism or may be involved in indirectly triggering such a mechanism via intermediary regulators (Figure 8 ). DIE-1 therefore joins the ever-growing list of transcriptional regulators of cell size; however, the role of DIE-1 in size regulation may be highly context dependent, as die-1 mutants do not display any gross defects in animal size. Our analysis of candidate size regulators has also identified a series of genes that control overall neuron size in a bilaterally symmetric manner (that is, both ASEL and ASER are affected). Given the paucity of known size regulators in the nervous system, some of our partially unexpected results raise questions and provide a starting point for future analysis. As expected from work in other systems [ 8 - 10 ], daf-18 /PTEN mutants show increased neuron size.
However, a null mutation in the insulin/IGF-like receptor in worms, daf-2 , does not affect neuron size, even though the same signaling system does have profound effects on size and growth in other organisms [ 45 ]. Yet, loss of another gene in the daf-2 pathway, the Ser/Thr kinase akt-1 , does significantly affect the size of both ASEL and ASER, suggesting that AKT may be coupled to a distinct upstream input. However, unlike in other systems, in which AKT positively regulates size [ 46 ], ASEL and ASER size is increased in akt-1 mutants. A similar, unexpected 'sign reversal' is observed in animals lacking the size regulators rheb-1 , a small GTPase, or the nucleolar protein nucleostemin/ nst-1 , both known to be required to promote growth in other systems [ 47 , 48 ], but apparently inhibiting growth of both ASE neurons. Other known size regulators, such as cdk-4 [ 49 ], do not affect ASEL/R neuron size at all. We also found no effect of removing the canonical size regulator let-363/ TOR; however, these animals could only be scored at the first larval stage due to later larval lethality. The maternal load of TOR may have rescued any potential size regulatory effect. The same caveat holds for the interpretation of the lack of effect of removing let-60/ Ras and tfg-1 /TFG. Lastly, we note that a transforming growth factor-β signaling pathway previously reported to control overall animal size in C. elegans [ 4 ] does not affect ASE neuron size, demonstrating that overall animal size is decoupled from neuron size. In conclusion, we have provided some of the first mechanistic insights into how lateralized neuron size is controlled and we have set a theoretical framework for the type of impact such a size difference may have on neuron function.
It is conceivable that lateralized neuron size differences in vertebrates may also be controlled via nucleolar mechanisms [ 50 ], a notion that is not self-evident, since known cell size control pathways do not necessarily work through regulation of ribosomal and hence nucleolar mechanisms [ 43 ]. Our findings also raise the possibility that lateralized neuron size control may be uncoupled from more canonical mechanisms of size control in other cell and tissue types. This is because asymmetric neuron size control is established at a stage (the embryo) when no other tissues undergo the generic growth characteristic of late embryonic and larval stages, and because it does not involve many of the canonical body size regulators. The identification of direct target genes of the die-1 transcription factor, the regulator we found to impinge on the ASEL/R size differential, will provide more insights into this pathway in the future.
Background Nervous systems are generally bilaterally symmetric on a gross structural and organizational level but are strongly lateralized (left/right asymmetric) on a functional level. It has been previously noted that in vertebrate nervous systems, symmetrically positioned, bilateral groups of neurons in functionally lateralized brain regions differ in the size of their soma. The genetic mechanisms that control these left/right asymmetric soma size differences are unknown. The nematode Caenorhabditis elegans offers the opportunity to study this question with single neuron resolution. A pair of chemosensory neurons (ASEL and ASER), which are bilaterally symmetric on several levels (projections, synaptic connectivity, gene expression patterns), is functionally lateralized in that the two neurons express distinct chemoreceptors and sense distinct chemosensory cues. Results We describe here that ASEL and ASER also differ substantially in size (soma volume, axonal and dendritic diameter), a feature that is predicted to change the voltage conduction properties of the two sensory neurons. This difference in size is not dependent on sensory input or neuronal activity but developmentally programmed by a pathway of gene regulatory factors that also control left/right asymmetric chemoreceptor expression of the two ASE neurons. This regulatory pathway funnels via the DIE-1 Zn finger transcription factor into the left/right asymmetric distribution of nucleoli that contain the rRNA regulator Fibrillarin/FIB-1, an RNA methyltransferase implicated in the non-hereditary immune disease scleroderma, which we find to be essential to establish the size differences between ASEL and ASER. Conclusions Taken together, our findings reveal a remarkable conservation of the linkage of functional lateralization with size differences across phylogeny and provide the first insights into the developmentally programmed regulatory mechanisms that control neuron size lateralities.
Abbreviations GFP: green fluorescent protein; IGF: insulin-like growth factor. Competing interests The authors declare that they have no competing interests. Authors' contributions AG conducted all experiments shown in this paper, SS conducted some initial size experiments, SL guided the voltage analysis, OH initiated and supervised this study, and AG and OH wrote the paper.
Acknowledgements We thank Q Chen for expert assistance with generating transgenic lines, D Hall (Albert Einstein College of Medicine) for providing access to the archive of electron microscopical sections and help in data collection, the C. elegans knockout consortia for providing strains, SJ Lo for the fib-1 reporter, B Tursun for ASE reporters and members of the Hobert lab for comments on the manuscript. This work was funded by the NIH (5R03NS067451-02). OH is an Investigator of the Howard Hughes Medical Institute.
CC BY
no
2022-01-12 15:21:37
Neural Dev. 2010 Dec 1; 5:33
oa_package/d1/41/PMC3014911.tar.gz
PMC3014913
21172008
Background The prevalence of end-stage renal disease (ESRD) in Singapore is high and projected to increase sharply due to the nation's aging population and the high prevalence of diabetes. The proportion of new kidney failure cases caused by diabetes rose from 47% in 1998 to 56% in 2003, a relative increase of about 20%. Over the same five years, the number of patients on dialysis with diabetes-induced kidney failure doubled [ 1 ]. The total number of patients on dialysis in Singapore (either haemodialysis or peritoneal dialysis) increased from 2465 at the end of 1999 [ 2 ] to 3403 at the end of 2004 [ 3 ]. The impact of ESRD on a patient's quality of life (QOL) has become increasingly recognized as an important outcome measure [ 4 ]. Health-related quality of life (HRQOL) is the impact of a chronic disease and its related treatment on patients' perceptions of their own physical and mental function. The assessment of HRQOL can be challenging due to its subjective nature; HRQOL reflects how patients feel about, and how satisfied they are with, matters relating to their condition and treatment. Some generic measures such as the 36-item Short Form health survey (SF-36) [ 5 ] are used to assess HRQOL. However, generic instruments are broad and produce scores for all domains of quality of life; they do not cover any single area in depth and may not even address the primary symptoms of a given disease. Disease-specific instruments have been developed to assess aspects of HRQOL in relation to a disease of interest, which are not adequately assessed by generic measures. They focus on concerns that are more relevant to a specific illness and treatment. Each instrument assesses a distinct and significant portion of the total HRQOL. Disease-specific instruments tend to be more effective in detecting treatment effects and are more responsive to changes in specific conditions [ 6 ]. The Kidney Disease Quality of Life-Short Form (KDQOL-SFTM) [ 7 ] is a disease-specific quality of life measure for ESRD patients.
It includes both generic and disease-specific components for the assessment of HRQOL. The instrument has been validated and is widely used. It has been used in the Netherlands with adult ESRD patients [ 8 ], in Greece with ESRD patients [ 9 ], in Italy with severe renal failure patients [ 10 ], and in Turkey with patients who were on dialysis [ 11 ]. In Asia, the KDQOL-SFTM has been validated in Korea [ 12 ] with 164 patients on haemodialysis or peritoneal dialysis. Few studies have validated specific sub-scales of the KDQOL-SFTM. A study carried out with chronic kidney disease and ESRD patients in California has validated the cognitive function subscale of the KDQOL-SFTM [ 13 ], which consists of three kidney-specific items of the questionnaire. The KDQOL-SFTM has been used for patients on dialysis where QOL evaluations have focused on comparative approaches between treatment modalities, on longitudinal trends within a specific treatment modality, and on the impact on QOL of the introduction of new therapies. The instrument was used to analyze data from the Dialysis Outcomes and Practice Patterns Study, which was carried out on haemodialysis patients in the US, Japan and five countries in Europe. The results showed that on all three continents, ESRD and haemodialysis have profound effects on HRQOL [ 14 ]. The study reported differences in the burden of kidney disease between patients from different countries, including a greater burden reported by Japanese patients. These differences could be due to patient characteristics, co-morbidities or even cultural mediation. However, because there are no national norm data for disease-targeted scales, it was not possible to determine which of the many potential explanations could explain the greater burden reported by patients from Japan. The study did show that cultural differences may play a role in the variations observed across continents or ethnic groups.
To date, the psychometric properties of the KDQOL-SFTM have not been evaluated, and the instrument has not been validated, in the Singapore population. The aim of this study was to determine the reliability and validity of the KDQOL-SFTM among haemodialysis patients in Singapore. We also aimed to increase our understanding of how our patient population perceives quality of life, and to determine whether the KDQOL-SFTM instrument is applicable to the Singapore ESRD population.
Methods Sample In this cross-sectional study, our target population consisted of 1980 patients undergoing haemodialysis at 22 dialysis centers run by the National Kidney Foundation in Singapore (NKFS). The National Kidney Foundation provides subsidized haemodialysis to needy patients. Subsidy is offered to those who are unable to afford haemodialysis, as determined by financial assessment through a means test. Most of the patients of NKFS are of a lower socio-economic status. The patients were of Chinese, Malay and Indian ethnicity. Participation in this study was voluntary and data was gathered from December 2006 through January 2007. For inclusion, patients had to be at least 21 years of age, have ESRD, and have been receiving haemodialysis (not peritoneal dialysis) at the National Kidney Foundation dialysis center for more than three months. In Singapore, the majority of patients are on haemodialysis (79% haemodialysis vs. 21% peritoneal dialysis) [ 3 ]. Trained nurses explained the study to the patients. Patients who volunteered to participate were recruited into the study. Written consent was obtained from participants and confidentiality of data was assured before the data was gathered. This study was approved by the Institutional Review Board of Singapore General Hospital. Survey Instrument The disease-specific instrument used in this study was the Kidney Disease Quality Of Life-Short Form (KDQOL-SFTM) version 1.3, a self-report measure developed for individuals who have kidney disease and are on dialysis [ 7 ]. The KDQOL-SFTM is available in English and was translated into Mandarin Chinese and Malay (Singapore version) by the KDQOL-SFTM group and RAND [ 15 , 16 ]. The English version of the KDQOL-SFTM was used in surveying the Indian population, who mostly understood English. In this survey, very few participants (less than 10) completed the Chinese or Malay versions of the survey forms. 
In addition to providing translated versions of the KDQOL-SFTM, the study provided trained nurses conversant in Chinese, Malay, Tamil and English to answer any queries from the participants. The KDQOL-SFTM includes multi-item scales targeted at the particular health-related concerns of individuals who have kidney disease and are on dialysis. The instrument is composed of 36 general health items and 43 kidney-specific items. The items on general health are divided mainly between physical and mental health across eight sub-scales, with one item on overall health. The eight sub-scales are: Physical functioning, Role physical, Pain, General health, Emotional well-being, Role emotional, Social function and Energy/fatigue. Scoring algorithms given in the user manual [ 7 ] were used to calculate scores ranging from 0 to 100. The scores represent the percentage of total possible score achieved, with 100 representing the highest quality of life. The items ask about the patient's health and how the patient feels about his care. Items gather information regarding the patient's background such as gender, ethnicity, education, income, the number of days in their hospital stay, and the number of different prescription medications they were taking. This information is used to evaluate the care delivered and to enable a greater understanding of the effects of medical care on the health of patients [ 7 ]. The KDQOL-SFTM was self-administered. Treatment of Missing Data Of the 1180 participants who completed the survey, 980 provided age, gender and race information, and this data was used in the analyses. Of these, 1.6% missed marking one item and 1.4% missed marking two items. Missing data for an item was substituted with a figure calculated by averaging the scores of the other items in the particular scale to which the missing item belonged. Statistical/Psychometric Analysis The analysis was carried out using SPSS version 15 software. 
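The missing-item substitution and 0-100 scoring described above can be sketched as follows. The four-item scale and the 1-5 item range used here are illustrative placeholders; the actual KDQOL-SFTM algorithms, including reverse-coded items, are those given in the user manual [ 7 ].

```python
import numpy as np

def impute_missing(items):
    """Substitute a missing item (NaN) with the average of the other
    items in the same scale, as described in the Methods."""
    items = np.asarray(items, dtype=float)
    return np.where(np.isnan(items), np.nanmean(items), items)

def scale_score(items, item_min=1, item_max=5):
    """Transform a scale's items to 0-100: the percentage of the possible
    range achieved, with 100 the highest quality of life. The item range
    here is a generic placeholder, not the exact KDQOL-SF coding."""
    items = impute_missing(items)
    return 100.0 * (items.mean() - item_min) / (item_max - item_min)

# A respondent who skipped one item on a hypothetical 4-item scale:
print(scale_score([3, np.nan, 4, 5]))  # → 75.0 (missing item imputed as 4.0)
```
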
We first compared the sample demographic data with demographic data from the dialysis population listed in the Singapore Renal Registry, 2004 [ 3 ] to determine whether the sample was representative of the full dialysis population in Singapore. We used Analysis of Variance (ANOVA) and the t-test to examine the differences. We then used exploratory factor analysis to determine the basic structure of the KDQOL-SFTM. This technique can be used to group latent variables (those that cannot be measured directly) into categories based on similar characteristics or behavior. We explored the unknown domains of the KDQOL-SFTM scores by dividing the characteristics/items into independent sources of variation (factors). Here we used a deductive approach by hypothesizing the existence of particular dimensions and assessing whether our data fit a factor structure identical to the structure found by previous researchers [ 7 ] (i.e., how well the measure represented the construct of interest [construct validity]). For selecting the number of factors, we used the criterion that a factor must have an eigenvalue (which measures the amount of variation it explains) greater than one. Varimax rotation (an orthogonal rotation of the factor axes) was used to simplify the pattern of item loadings on the sub-scales. The rotated factors delineate a distinct cluster of relationships, while unrotated factors successively define the most general patterns of relationships in the data. We used Cronbach's coefficient α to assess internal consistency reliability for the overall scale, and within individual sub-scales. Correlation coefficients were calculated to assess the strength of relationship between items within and outside each sub-scale. We also determined the mean and median of each sub-scale. We used Pearson correlation (two-tailed) to assess stronger relationships of items within scales and weaker relationships with items outside of the scale.
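The internal consistency statistic used here, Cronbach's coefficient α, is computed from the item variances and the variance of the summed scale score; a minimal sketch with synthetic responses (the study's actual item data are not reproduced):

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's coefficient alpha for an (n respondents x k items)
    matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of
    the summed scale score)."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Synthetic items driven by one shared latent trait cohere strongly:
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
items = trait + 0.5 * rng.normal(size=(500, 4))
print(cronbach_alpha(items) > 0.7)  # exceeds the usual 0.7 benchmark
```

Items that all track the same underlying construct push α toward 1; unrelated items push it toward 0, which is why the Social function sub-scale's 0.66 falls just short of the conventional 0.7 threshold.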
We looked at the correlations between the overall health score and the Kidney disease-targeted scales of Symptoms, Effect of kidney disease, Burden of kidney disease, Work status, Cognitive function, Quality of social interaction, Sexual function, Sleep, Social support, Dialysis staff encouragement, and Patient satisfaction. We also looked at two-tailed significance of correlation coefficients between scores on the eight sub-scales and age, income, and education to determine convergent and divergent validity. Considering that higher scores on the SF-36 scales indicate good quality of life, we hypothesized that the KDQOL-SFTM total score would be positively correlated with measures of self-rated health, and of socioeconomic status - represented by educational status. We expected the duration of dialysis to be positively correlated with health.
Results Demographic Characteristics Of 1980 patients receiving haemodialysis in Singapore dialysis centers, 1180 (59%) agreed to participate in the study and completed the KDQOL-SFTM. Of these, full information regarding age, gender and race was available for 980 participants. Table 1 shows age, gender and race data for the evaluable sample (N = 980) and for the total ESRD (dialysis) population in Singapore. The study sample was representative of the total dialysis population in Singapore with the exception that patients of Indian ethnicity were over-represented in our sample. Our sample had more males than females (56% vs. 44%), about two-thirds of the participants were Chinese, and nearly 70% of the participants were over 50 years of age, with a mean participant age of 56.6 ± 21 years. About half (48%) of the participants were earning less than S$1500/month (Table 2 ). About 41% of participants had up to primary level of education while 37% had received above primary to secondary level of education. Sixty-three percent of the participants reported the cause of their kidney disease: of these, 50% had hypertension, 26% had diabetes, 2% had IgA nephropathy, 1% had polycystic kidney disease and 21% had another cause. Four patients had failed renal transplantation and resumed dialysis. This information was self-reported by the patients. ANOVA was used to find differences between the ethnicities across the eight domains of quality of life described by the KDQOL-SFTM. A significant difference (p < .0001) was found only for General health. A post hoc (Tukey) comparison test showed that this significant difference was between the Malay and the Indian ethnicities. The mean ± standard deviation scores for Chinese, Malay and Indian were 50.38 ± 18.8, 56.07 ± 18.2, and 49.41 ± 20.0, respectively. In the age category (≤45 years, 46 to 65 years, and >65 years), significant differences were observed only for Emotional well-being (p = .001) and Physical function (p < .0001).
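A one-way ANOVA of the kind reported here can be illustrated as follows. The group means and standard deviations are those quoted for General health; the group sizes and the deterministic 'samples' are hypothetical stand-ins, since the raw scores are not published.

```python
import numpy as np
from scipy.stats import f_oneway

def synth_group(mean, sd, n):
    """Deterministic synthetic sample with exactly the given mean and SD
    (an illustrative stand-in for the unpublished raw scores)."""
    z = np.linspace(-2.0, 2.0, n)
    z = (z - z.mean()) / z.std(ddof=1)
    return mean + sd * z

# General health mean/SD per ethnic group from the text; n is hypothetical.
chinese = synth_group(50.38, 18.8, 300)
malay = synth_group(56.07, 18.2, 300)
indian = synth_group(49.41, 20.0, 300)

F, p = f_oneway(chinese, malay, indian)  # omnibus test across the groups
print(p < 0.05)  # a significant omnibus result; the paper then ran Tukey's HSD
```

A significant omnibus F only says that at least one group mean differs; the Tukey post hoc comparison is what localized the difference to the Malay versus Indian groups.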
Participants over 45 years of age showed higher scores on Emotional well-being compared to those ≤45 years of age (72.79 ± 14.7 vs. 68.58 ± 17.8). Younger patients showed higher scores on Physical function compared to older patients (68.9 ± 24 for age ≤45 years, 59.66 ± 24.7 for 46 to 65 years of age, and 50.43 ± 20.69 for >65 years of age). Regarding gender, men scored significantly higher (p = .003) than women on Physical function (62.52 ± 55.43 vs. 56.17 ± 25.38). Construct Validity: Exploratory Factor Analysis Factor analysis with varimax rotation of the KDQOL-SFTM items revealed that the 36 general health items encompassed the eight factors/sub-scales proposed by the developers of the instrument. The number of factors indicates the number of substantively meaningful independent patterns of relationships among items. Varimax rotation gave higher factor loadings than the unrotated factor method. Table 3 shows that the Role physical, Role emotional, Pain and General health sub-scales in particular exhibited a strong relationship (>0.7) between items and sub-scales. Other factor loadings ranged from 0.215 to 0.807 (on a scale of 0 to 1). Low factor loadings were observed especially for the items "Bathing and dressing yourself", "Have you been a happy person?" and "Have you been calm and peaceful?" We found that "Bathing and dressing yourself" showed a very weak correlation with "Vigorous activities", an item from the same sub-scale (Physical functioning), compared to its correlation with items from Role emotional (<0.1 vs. >0.1 for all items of Role emotional). We also observed that "Have you felt calm and peaceful?" and "Have you been a happy person?" showed a stronger correlation with "During the past 4 weeks, how much of the time have your physical health or emotional problems interfered with your social activities and activities with your family members?"
as compared to their correlation with other items from Emotional well-being (>0.3 vs. <0.3). Factor loadings describe the pattern of relationships and the association of each item with each pattern, and are interpretable as correlation coefficients. The scree plot derived from the factor analysis supported the presence of eight sub-scales with eigenvalues greater than one. The comprehensiveness and strength of the eight sub-scales were assessed using the percent of variance explained. The percents of variance individually explained by each of the eight sub-scales were as follows: Physical functioning: 31.39%, Role physical: 9.52%, Pain: 8.40%, General health: 4.81%, Emotional well-being: 4.40%, Role emotional: 3.75%, Social function: 3.1% and Energy/fatigue: 2.95%. Thus, the total variance explained by all eight sub-scales was 68.35% (not shown). Measures of Central Tendency and Reliability Table 4 presents the central tendency (mean, standard deviation and median) and reliability of the KDQOL-SFTM scales. Internal consistency reliability estimates (Cronbach's α) for the KDQOL-SFTM and its component sub-scales exceeded 0.7, the recommended threshold for good reliability [ 17 ] (except for Social function: 0.66). This indicates high internal consistency of items within all eight sub-scales (Table 4 , last row). Mean ± standard deviation scores for the eight sub-scales ranged from 50.2 ± 19.1 to 78.6 ± 38.2. Role emotional, Physical functioning, Emotional well-being and Pain scored above 70, while General health perception was the lowest at 50.2. The percent of floor effects (participants with the lowest possible score for a scale) ranged from 0.2 to 31.4, and the percent of ceiling effects (participants with the highest possible score for a scale) ranged from 1.7 to 59.8 (not shown). We found that item discrimination indices for each sub-scale ranged from .5 to 1.0.
Item discrimination indices indicate the mean percent of times an item in a particular sub-scale correlated significantly higher with its own sub-scale total than with any other sub-scale total. For this study, the correlation of items within sub-scales was higher than that of items outside sub-scales in 90% of cases. The validity of the KDQOL-SFTM was also confirmed by the correlations of kidney disease-targeted scales with the overall health scale, which was calculated from the eight sub-scales of the general health portion of the KDQOL-SFTM (Table 5 ). We observed a high correlation of overall health with the scales Symptoms, Effect of kidney disease, Quality of social interaction, Sleep, Social support, and Patient satisfaction (p < .01). Other kidney-targeted scales, such as Burden of kidney disease, Work status, Cognitive function, Sexual function and Dialysis staff encouragement, were correlated with overall health at p < .05. We also observed significant associations of general health sub-scales with demographic variables (not shown). Age showed an association with Physical function (-.264, p < .01) and General health (-.102, p < .05). Income showed an association with Physical function (.119, p < .05) and Energy/fatigue (.076, p < .05), while education showed an association with Sleep (.373, p < .05), Physical function (.107, p < .05), Role physical (.09, p < .05), General health (.085, p < .05), Emotional well-being (.074, p < .05), Role emotional (.104, p = .01) and Social function (.145, p < .01). Duration of dialysis was significantly correlated with overall health (.079, p < .05).
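Internal-consistency estimates like those in Table 4 can be reproduced from raw item scores with Cronbach's α. A sketch using the standard formula, with hypothetical responses (not study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of items.

    items: list of item-score lists, one list per item, aligned by respondent.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    using population variances throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical sub-scale: 3 items answered by 5 respondents.
items = [[4, 3, 5, 2, 4],
         [4, 2, 5, 3, 4],
         [3, 3, 4, 2, 5]]
alpha = cronbach_alpha(items)  # above the 0.7 threshold for this toy data
```

Values above the conventional 0.7 threshold, as in the study, indicate that the items of a sub-scale move together across respondents.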
Discussion Most of the earlier studies that assessed the validity of the KDQOL-SFTM did so in Western populations; few countries in Southeast Asia have used the KDQOL-SFTM. Our findings suggest that the KDQOL-SFTM demonstrates an acceptable level of reliability (as indicated by Cronbach's α values) and validity for use in understanding quality of life among haemodialysis patients in Singapore. The results of this cross-sectional study provide valuable information for the understanding of HRQOL among patients on haemodialysis in Singapore. Exploratory factor analysis supported the presence of the eight sub-scales proposed by the developers of the instrument. The sub-scales of Role physical, Role emotional, Pain and General health showed high factor loadings (>0.7), while the other domains showed a reasonably good relationship, indicating strong correlation within the items of these sub-scales. Internal consistency reliability estimates for the KDQOL-SFTM and its eight sub-scales exceeded the threshold for good reliability (with one exception), and items generally correlated more strongly with other items on their own sub-scale than with items on other sub-scales. The validity of the KDQOL-SFTM was also confirmed by the positive correlations of the overall health rating with the kidney disease-targeted scales. This result was consistent with a study of 665 Greek ESRD patients [ 12 ]. As expected, we found that increased age was associated with a corresponding decrease in Physical function and General health. Education and income were associated with a number of KDQOL-SFTM sub-scales, which indicates that these sub-scales could prove very useful in socioeconomically diverse populations with chronic kidney disease or ESRD. This result was consistent with a study of 164 haemodialysis patients in Korea [ 9 ]. All of these results support the use of the KDQOL-SFTM with haemodialysis patients in Singapore.
However, more attention should be given to the three items that showed lower factor loadings: "Bathing and dressing yourself", "Have you been a happy person?" and "Have you been calm and peaceful?" These lower factor loadings may be due to cultural differences: Singaporeans may have perceived these three items more as Role emotional than as Physical functioning or Emotional well-being. Limitations of Study The main limitation of this study was the lack of patient measures on clinical parameters. The data collection was blinded, which made it impossible to correlate scores with clinical parameters, co-morbidities and intermediate outcomes of dialysis such as dose of dialysis, nutritional parameters and hemoglobin. As such, it was not possible to examine the associations of the KDQOL-SFTM with clinical parameters or clinical outcomes. The second limitation was that practical considerations confined the study to a single interview with each respondent, so patients were not approached for a second interview or tracked longitudinally. However, since this instrument has been tested and retested in different populations and proven reliable and valid, we decided to conduct only a cross-sectional study to establish its internal consistency, reliability and validity for Singapore ESRD patients. Thirdly, the KDQOL-SFTM was not administered to patients receiving peritoneal dialysis, so no new data were gathered on these patients. The cross-sectional nature of the study also precluded us from determining additional measures of reliability, such as test-retest reliability. Future studies should check the test-retest reliability of the KDQOL-SFTM and examine the associations of QOL with demographic characteristics. Clinical information should also be collected to analyze the effect of clinical parameters on QOL, and to gain a greater understanding of the possible associations between QOL scores and clinical outcomes.
The KDQOL-SFTM should also be applied to peritoneal dialysis patients to examine whether their QOL differs from that of patients on haemodialysis. The response rate of 59% was reasonable given that many dialysis patients suffer from psychological and emotional exhaustion; patients often use their dialysis sessions to rest, or bring reading or work to the centers. The response rate could have been enhanced by having the renal physicians personally contact each patient to encourage participation. To our knowledge, this is the first study to evaluate the reliability and validity of the KDQOL-SFTM among ESRD patients in a Southeast Asian population. Not only was the sample size large (about one fourth of the total ESRD population of Singapore), but it was also representative of the ESRD patient population of Singapore and showed a good response rate.
Conclusions In summary, this is the first time the shorter KDQOL-SFTM has been used in a large sample of ESRD patients in Singapore. The results demonstrate acceptable reliability, construct validity and discriminatory ability in representative ESRD patients in Singapore. We conclude that the KDQOL-SFTM can be used for assessing the quality of life of dialysis patients in Singapore.
Background In Singapore, the prevalence of end-stage renal disease (ESRD) and the number of people on dialysis are increasing. The impact of ESRD on patient quality of life has been recognized as an important outcome measure. The Kidney Disease Quality Of Life-Short Form (KDQOL-SFTM) has been validated and is widely used as a measure of quality of life in dialysis patients in many countries, but not in Singapore. We aimed to determine the reliability and validity of the KDQOL-SFTM for haemodialysis patients in Singapore. Methods From December 2006 through January 2007, this cross-sectional study gathered data on patients ≥21 years old who were undergoing haemodialysis at the National Kidney Foundation in Singapore. We used exploratory factor analysis to determine the construct validity of the eight KDQOL-SFTM sub-scales, Cronbach's alpha coefficient to determine internal consistency reliability, correlation of the overall health rating with kidney disease-targeted scales to confirm validity, and correlation of the eight sub-scales with age, income and education to determine convergent and divergent validity. Results Of 1980 haemodialysis patients, 1180 (59%) completed the KDQOL-SFTM. Full information was available for 980 participants, with a mean age of 56 years. The sample was representative of the total dialysis population in Singapore, except that patients of Indian ethnicity were over-represented. The eight sub-scales proposed by the instrument's designers were confirmed, together accounting for 68.4% of the variance. All sub-scales had a Cronbach's α above the recommended minimum value of 0.7 for good reliability (range: 0.72 to 0.95), except for Social function (0.66). The correlation of items within sub-scales was higher than the correlation of items outside sub-scales in 90% of the cases. The overall health rating correlated positively with the kidney disease-targeted scales, confirming validity.
General health subscales were found to have significant associations with age, income and education, confirming convergent and divergent validity. Conclusions The psychometric properties of the KDQOL-SFTM resulting from this first-time administration of the instrument support the validity and reliability of the KDQOL-SFTM as a measure of quality of life of haemodialysis patients in Singapore. It is, however, necessary to determine the test-retest reliability of the KDQOL-SFTM among the haemodialysis population of Singapore.
Competing interests The authors declare that they have no competing interests. Authors' contributions This study was conducted by SingHealth, Centre for Health Services Research. The data was collected at National Kidney Foundation, Singapore with Dr. NM's approval. Dr. VDJ was involved with the design of survey questionnaire, performed the statistical analysis, interpretation and drafted the manuscript. Dr. JFYL supervised the complete project, critically reviewed the manuscript and encouraged the decision to submit the manuscript for publication. Dr. NM critically reviewed the manuscript. All the authors have read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2369/11/36/prepub
Acknowledgements The authors wish to acknowledge the sincere efforts of Mr. Andrew Lee, a Fulbright scholar who was an intern for one year at Singapore Health Services Pte. Ltd. during 2006 and 2007. He was involved with the study design, the design of the survey questionnaire and the complete data collection. The authors are thankful for the efforts of Stephen Challinor, a former employee of The National Kidney Foundation, Singapore, in obtaining permission to implement this study and helping with the administration of the project. The authors are grateful to the nurses employed by National Kidney Foundation, Singapore for helping with the recruitment of patients for data collection. The authors appreciate the support of the Singapore Clinical Research Institute for the editorial assistance provided by Jon Kilner, MS, MA (Pittsburgh, PA, USA). This study was supported by SingHealth Centre for Health Services Research, Singapore and the U.S. Department of State Fulbright Program.
CC BY
no
2022-01-12 15:21:37
BMC Nephrol. 2010 Dec 20; 11:36
oa_package/37/b5/PMC3014913.tar.gz
PMC3014914
21126330
Introduction Porcine parvovirus (PPV) is an autonomous parvovirus belonging to the genus Parvovirus , subfamily Parvovirinae , family Parvoviridae . It is one of the major etiological agents of reproductive failure in pigs. Reproductive failure caused by PPV is characterized by embryonic and foetal death, mummification, stillbirth, and delayed return to oestrus [ 1 ]. In addition, PPV has been implicated as the causative agent of diarrhea, skin disease, and arthritis in swine [ 2 ]. PPV has been reported from many different countries [ 3 - 5 ]. PPV is composed of a linear single-stranded segment of DNA approximately 5 kb long (Molitor, T.W., 1983), and its genome has more than two open reading frames (ORFs) [ 6 ]. The 3' end of ORF1 encodes nonstructural proteins (NS proteins), and the 5' end of ORF2 encodes structural proteins (VP proteins). For diagnostic purposes, PPV can be rapidly and sensitively detected with polymerase chain reaction (PCR) assays [ 7 , 8 ]. However, current PCR assays for PPV often require multiple steps and do not provide quantitative data. In contrast, real-time PCR using SYBR Green or TaqMan chemistry is rapid, specific, and efficient for large-scale screening, strain identification, and quantification of PPV [ 9 ]. NS1, encoded by the NS1 gene, is the main nonstructural protein of PPV and is associated with the early and late transcription of the virus. Given that the inactivated virus used in current vaccines contains very little NS1 protein and therefore does not elicit anti-NS1 antibody, the presence or absence of antibody against NS1 could be used in an NS1-based diagnostic kit to determine in clinical settings whether pigs have been vaccinated with inactivated PPV or infected with wild-type PPV; such a test would give a negative result for vaccinated/noninfected pigs.
NS proteins are also important in virus research because they play an important regulatory role in viral replication even though they do not directly participate in the assembly of virus particles. In this study, a TaqMan-based real-time PCR assay was developed for the rapid and quantitative detection of PPV with a probe specific for the PPV NS1 gene. The results of the real-time PCR assays were compared with those of previously established, conventional PCR assays.
Materials and methods Primers and probes PCR primers and a TaqMan probe, designed with the program DNAStar and synthesized by Saituo Matrix Biotechnology (Haerbin) Co., Ltd, were used to amplify a 123-bp fragment of the NS1 gene. The sequences of the primers and probe were: NS1-FP (forward primer): 5'-GAAGACTGGATGATGACAGATCCA-3'; NS1-RP (reverse primer): 5'-TGCTGTTTTTGTTCTTGCTAGAGTAA-3'; NS1-P (probe): FAM-AATGATGGCTCAAACCGGAGGAGA-BHQ1. The probe was labeled with 6-carboxyfluorescein (FAM) at the 5'-end and with BHQ1 at the 3'-end. Preparation of standard plasmid DNA PCR amplification of the NS1 gene was carried out in a 25-μL reaction mix: 16.0 μL sterilized water, 2.5 μL of 10× buffer, 3.0 μL of dNTPs, 1 μL of each primer (NS1-FP and NS1-RP), 1 μL of BQ strain DNA, and 0.5 μL of Ex Taq™ DNA polymerase. The thermal conditions were as follows: one cycle at 94°C for 5 min; followed by 30 cycles at 94°C for 30 s, 58°C for 45 s, and 72°C for 30 s; with a final extension at 72°C for 7 min. The PCR product was inserted into the vector pMD18-T (TaKaRa Biotechnology (Dalian) Co., Ltd.). After propagation in DH5α host bacteria (TaKaRa Biotechnology Co., Ltd), the recombinant plasmid was purified using a commercial kit (Watson Biotechnologies, Inc.). The products were kept at -20°C for later use. Establishment of real-time PCR The real-time PCR amplifications of the NS1 gene used 25-μL reaction mixtures containing 2.5 μL of 10× buffer, 3.5 μL of dNTPs (TaKaRa Biotechnology Co., Ltd), 3 μL of MgCl₂, 1 μL of each primer (10 pM/μL of NS1-FP and NS1-RP), 1 μL of the recombinant plasmid, 0.5 μL of the probe (NS1-P), 0.5 μL of HotStar Taq (TaKaRa Biotechnology Co., Ltd), and 12.0 μL sterile water. The reactions were carried out in a Rotor-Gene thermocycler (Corbett Research Co. Ltd.). The conditions were as follows: one cycle at 95°C for 30 s, followed by 40 cycles at 95°C for 10 s, 58°C for 20 s, and 72°C for 20 s.
The data were analyzed with the Rotor-Gene software. Sensitivity of the real-time PCR To determine the detection limit and efficiency of the assay, the recombinant standard plasmid DNA was used as template and 10-fold serially diluted with sterile water, giving 6.00 × 10⁹ to 6.00 × 10¹ copies/μL. The sensitivity of the real-time PCR was compared with that of conventional PCR (Yue et al., 2009). Specificity of the real-time PCR To determine the specificity of the real-time PCR, the standard plasmid-positive template, different strains of PPV, five other viruses (PRRSV, CSFV, PRV, JEV, and PCV-2), and a control (sterile water) were processed with the real-time PCR. Reproducibility of real-time PCR To determine the reproducibility of the real-time PCR, the standard plasmid was diluted to 6.00 × 10⁸, 6.00 × 10⁶, 6.00 × 10⁴, and 6.00 × 10² copies/μL. To estimate variation within and among assays, each dilution was processed at four different times, i.e., in four blocks, with the real-time PCR assay. Each block contained four repeated determinations for each dilution, giving 16 total determinations per dilution. Coefficients of variation (CVs) for Ct values within each block and among blocks (using the mean values from each block) were determined. Detection of the clinical samples Forty-one clinical samples (20% organ suspensions stored at -70°C) suspected of being infected with PPV were subjected to the real-time PCR and conventional PCR.
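The reproducibility metric used here, the coefficient of variation of Ct values, is simply the standard deviation expressed as a percent of the mean. A sketch with hypothetical Ct replicates (the values are illustrative, not from Table 1):

```python
def cv_percent(values):
    """Coefficient of variation: 100 * SD / mean (population SD)."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return 100 * sd / m

# Hypothetical Ct values for four replicates of one standard dilution
# within a single block.
ct_replicates = [18.2, 18.4, 18.1, 18.3]
within_block_cv = cv_percent(ct_replicates)  # well under 2% here
```

The among-block CV is obtained the same way, applied to the four block means rather than to the replicates within one block.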
Results Establishment of the standard plasmid-positive template The amplified NS1 gene fragment was about 123 bp long. The plasmid DNA concentration was 0.189 μg/μL before dilution, equivalent to 6.00 × 10¹⁰ copies/μL. Establishment of a standard curve for the real-time PCR The standard curve was generated over a range of 6.00 × 10⁹ to 6.00 × 10² copies/μL (Figure 1A ). The assay was linear over a dilution range of template DNA from 6.00 × 10² to 6.00 × 10⁹ copies/μL, with an R² value of 0.996 and a reaction efficiency of 100% for NS1. Quantitative data for Cycling A. FAM are shown in Figure 1B . The amplification product was about 123 bp long, and no false amplification was observed. Sensitivity of the real-time PCR The detection limit of the real-time PCR for the NS1 gene of PPV was 1.00 × 10² copies/μL (Figure 1B ). The conventional PCR assay gave a negative result when the solution was diluted to 1.00 × 10⁴ copies/μL. These results indicate that, based on direct observation, the sensitivity of the real-time PCR is 100 times greater than that of the conventional PCR (Figure 1C ). Specificity of the real-time PCR The real-time PCR gave positive results for the standard plasmid of PPV strains (6.00 × 10⁸ copies/μL) and negative results for the other porcine viruses involved in reproductive disorders (PCV2, PRV, PRRSV, CSFV, and JEV) and for the sterile water control (Figure 1D ). Reproducibility of the real-time PCR When the serially diluted standard plasmid (6.00 × 10⁸, 6.00 × 10⁶, 6.00 × 10⁴, and 6.00 × 10² copies/μL) was subjected to the real-time PCR, the within-block CV values (four replicate assays for each dilution performed at one time) and the among-block CV values (the means of four replicates from each of four times) were relatively small (Table 1 ).
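The conversion from plasmid mass concentration to copy number, and from standard-curve slope to amplification efficiency, can be sketched as below. The vector size used here (~2692 bp for pMD18-T) and the average of 660 g/mol per base pair are assumptions of the illustration, not values stated in the paper:

```python
AVOGADRO = 6.022e23   # molecules per mole
BP_MW = 660.0         # assumed average g/mol per bp of double-stranded DNA

def copies_per_ul(conc_ug_per_ul, plasmid_bp):
    """Convert a plasmid DNA concentration (ug/uL) to copies/uL."""
    grams_per_ul = conc_ug_per_ul * 1e-6
    mol_weight = plasmid_bp * BP_MW  # g/mol for the whole plasmid
    return grams_per_ul * AVOGADRO / mol_weight

def amplification_efficiency(slope):
    """Efficiency from the slope of a Ct vs. log10(copies) standard curve;
    a slope of about -3.32 corresponds to ~100% efficiency (E = 1.0)."""
    return 10 ** (-1.0 / slope) - 1.0

# Assumed pMD18-T backbone (~2692 bp) plus the 123-bp NS1 amplicon:
copies = copies_per_ul(0.189, 2692 + 123)  # ~6e10, matching the reported value
```

Under these assumptions, 0.189 μg/μL works out to roughly 6 × 10¹⁰ copies/μL, consistent with the figure reported in the Results.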
Detection of PPV in clinical samples by real-time PCR and conventional PCR Real-time PCR and conventional PCR were used in parallel on 41 samples that had been collected from several swine herds for diagnostic purposes. Real-time PCR detected PPV in 32 samples and conventional PCR in 11 samples; every sample positive by conventional PCR was also positive by real-time PCR, consistent with 100% sensitivity and 100% specificity of the real-time assay relative to conventional PCR (Table 2 ).
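Taking the real-time assay as the reference, the relative detection rate of the conventional PCR on these 41 samples works out as follows. This assumes, as Table 2 implies, that all conventional-PCR positives were among the real-time positives:

```python
def relative_detection_rate(comparator_pos, reference_pos):
    """Fraction of reference-positive samples the comparator also detects,
    assuming the comparator's positives are a subset of the reference's."""
    return comparator_pos / reference_pos

# 11 conventional-PCR positives out of 32 real-time-PCR positives
rate = relative_detection_rate(11, 32)  # about one third
```

In other words, conventional PCR recovered only about a third of the real-time-PCR-positive samples, which mirrors the 100-fold difference in analytical sensitivity seen with the diluted standard.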
Discussion This study describes a real-time PCR assay for PPV based on detection of the NS1 gene. The real-time PCR assay was 100 times more sensitive than conventional PCR. The assay was specific in that it provided positive results with different strains of PPV but negative results with other viruses (PCV2, PRV, PRRSV, CSFV, JEV) associated with reproductive disorders of swine. The real-time PCR assay was also highly reproducible. The real-time PCR method has several advantages for detection of PPV. First, the real-time PCR is more sensitive than conventional PCR, and high sensitivity is required for early diagnosis of PPV in the clinic. Second, the real-time PCR is faster than conventional PCR because it does not require gel electrophoresis. Third, the TaqMan real-time PCR described here is less likely to produce a false positive than a conventional PCR assay or a SYBR Green I-based real-time PCR. TaqMan-based real-time PCR may be more specific than SYBR Green I-based real-time PCR because the former requires a specific probe that hybridizes with the PPV DNA template, whereas the latter uses a dye that binds to any double-stranded DNA, including non-specific amplicons. Design of the primers and probe was based on the NS1 region of the PPV genome because this region is highly conserved. NS1 protein is believed to be a bifunctional protein with ATPase and helicase activities. In all parvoviral DNAs, both the 5'- and 3'-terminal palindromic sequences act as primers during replication [ 10 ].
Conclusion In conclusion, the TaqMan real-time PCR assay has been shown to be rapid, sensitive, specific, and reproducible for the detection and quantification of PPV. It should be an excellent tool for laboratory detection of PPV in tissue-culture samples as well as in field samples. The TaqMan real-time PCR assay will be useful for studying the molecular epidemiology of PPV infections in swine populations. The assay will also be very useful for early diagnosis of PPV and therefore for management of PPV.
A TaqMan-based real-time polymerase chain reaction (PCR) assay was devised for the detection of porcine parvovirus (PPV). Two primers and a TaqMan probe for the non-structural protein NS1 gene were designed. The detection limit was 1 × 10² DNA copies/μL, and the assay was linear in the range of 1 × 10² to 1 × 10⁹ copies/μL. There was no cross-reaction with porcine circovirus 2 (PCV2), porcine reproductive and respiratory syndrome virus (PRRSV), pseudorabies virus (PRV), classical swine fever virus (CSFV), or Japanese encephalitis virus (JEV). The assay was specific and reproducible. In 41 clinical samples, PPV was detected in 32 samples with the real-time PCR assay and in only 11 samples with a conventional PCR assay. The real-time assay using the TaqMan system can therefore be practically used for studying the epidemiology and management of PPV.
Acknowledgements The study was supported in part by funding from the National High-tech R&D Program (863 Program-2007AA100606) and the Chinese National Key Laboratory of Veterinary Biotechnology Fund (NKLVBP201002). Competing interests The authors declare that they have no competing interests. Authors' contributions CPS and CZhu carried out the molecular probe studies, and drafted the manuscript. CZha participated in the design of the study and performed the statistical analysis. SC conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.
CC BY
no
2022-01-12 15:21:37
Virol J. 2010 Dec 2; 7:353
oa_package/e8/25/PMC3014914.tar.gz
PMC3014915
21129205
Introduction The temporal ordering of bacteriophage T4 development is assured, in great part, by the cascade activation of three different classes of promoters (see [ 1 , 2 ] in this series). However, control of phage development is also exercised at the post-transcriptional level, in particular by mechanisms of mRNA destabilization and translation inhibition [see earlier reviews [ 3 - 6 ]]. In this review we detail advances in understanding these processes, and summarize some of the other posttranscriptional processes that occur in T4-infected cells.
Conclusions Although post-transcriptional control in T4 development and gene expression has been appreciated and studied for decades, many of the molecular details, especially for specific RNA-protein interactions, have yet to be resolved. For most, crystal or solution structures of bound mRNA-repressor or RNA-nuclease complexes would significantly advance our understanding of complex formation and substrate interactions in catalysis. While clearly germane to T4 and the large diversity of T4-related bacteriophages in the biosphere, continued study of post-transcriptional processes directed by these phages will provide new advances in the biochemistry pertinent to all cellular systems. Undoubtedly new anti-virals and anti-microbials targeting these and related systems in pathogens can be anticipated.
Over 50 years of biological research with bacteriophage T4 includes notable discoveries in post-transcriptional control, including the genetic code, mRNA, and tRNA; the very foundations of molecular biology. In this review we compile the past 10 - 15 year literature on RNA-protein interactions with T4 and some of its related phages, with particular focus on advances in mRNA decay and processing, and on translational repression. Binding of T4 proteins RegB, RegA, gp32 and gp43 to their cognate target RNAs has been characterized. For several of these, further study is needed for an atomic-level perspective, where resolved structures of RNA-protein complexes are awaiting investigation. Other features of post-transcriptional control are also summarized. These include: RNA structure at translation initiation regions that either inhibit or promote translation initiation; programmed translational bypassing, where T4 orchestrates ribosome bypass of a 50 nucleotide mRNA sequence; phage exclusion systems that involve T4-mediated activation of a latent endoribonuclease (PrrC) and cofactor-assisted activation of EF-Tu proteolysis (Gol-Lit); and potentially important findings on ADP-ribosylation (by Alt and Mod enzymes) of ribosome-associated proteins that might broadly impact protein synthesis in the infected cell. Many of these problems can continue to be addressed with T4, whereas the growing database of T4-related phage genome sequences provides new resources and potentially new phage-host systems to extend the work into a broader biological, evolutionary context.
Posttranscriptional control by mRNA decay Endoribonuclease RegB and its role in inactivating phage early mRNAs The end of the early period, 5 minutes after infection at 30°C, is marked by a strong decline in the synthesis of many early proteins. This inhibition is due to the abrupt shut-down of the early promoters by a mechanism that is not completely understood [ 7 , 8 ]. In addition, the phage-encoded RegB endoribonuclease (T4 regB gene) functionally inactivates many early transcripts and expedites their degradation. As described below, this role of RegB is accomplished, in part, with the cooperation of the host endoribonucleases RNase E and RNase G and the T4 polynucleotide kinase, PNK. The T4 RegB RNase exhibits unique properties. It generates cuts in the middle of GGAG/U sequences located in the intergenic regions of early genes, mostly in translation initiation regions. In fact, the GGAG motif is one of the most frequent Shine-Dalgarno sequences encountered in T4. Some efficient RegB cuts have also been detected at GGAG/U within coding sequences. RegB cleavages can be detected very soon after infection, in less than 45 seconds at 30°C [ 5 , 9 - 14 ]. The RegB endonuclease requires a co-factor to act efficiently. When assayed in vitro , RegB activity is extremely low but can be stimulated up to 100-fold by the ribosomal protein S1, depending on the RNA substrate [ 9 , 15 , 16 ]. Functional inactivation of mRNA by RegB The consequence of RegB cleavage within translation initiation regions is the functional inactivation of the transcripts. The synthesis of a number of early proteins starts immediately after infection and reaches a maximum at four minutes before declining abruptly. In regB mutant infections, several of these early proteins continue to be synthesized for a longer time, resulting in twice the accumulation compared to when RegB is functional.
The abrupt arrest of synthesis of these proteins at ~4 min postinfection with wild-type phage results both from the sudden inhibition of early transcription and the functional inactivation of mRNA targets by RegB. However, in addition to down-regulating the translation of many early T4 genes RegB-mediated mRNA processing stimulates the synthesis of a few middle proteins, such as the phage-induced DNA polymerase, encoded by T4 gene 43 [ 11 , 12 ]. RegB accelerates early mRNA breakdown RegB accelerates the degradation of most early, but not middle or late mRNAs. Indeed, bulk early mRNA is stabilized about 3-fold in a regB mutant compared to wild-type infection. After ~3 min post-infection, mRNAs decay with a constant half-life of about 8 minutes for the remainder of the growth period at 30°C, irrespective of the presence or the absence of a functional RegB nuclease [ 11 ]. The host RNase E plays an important role in T4 mRNA degradation throughout phage development [ 17 ]. Total T4 RNA synthesized during the first two minutes of infection of the temperature-sensitive rne host mutant is stabilized 3-fold at non-permissive temperatures. When both genes, regB and rne , are mutationally inactivated, bulk early T4 mRNA is stabilized 8 to 10-fold (half-life of 50 min at 43°C), showing that both T4 RegB and host RNase E endonucleases are major actors in T4 early mRNA turnover (B. Sanson & M. Uzan, unpublished results). RegB could accelerate mRNA decay by increasing the number of entry sites for one or the other of the two host 3' exoribonucleases, RNase II and RNase R, which can attack the mRNA from the 3'-phosphate terminus left after RegB cleavage. An alternative pathway was suggested by the finding that some endonucleolytic cleavages within A-rich sequences depend upon RegB primary cuts a short distance upstream. This was interpreted as meaning that RegB triggers a degradation pathway that involves a cascade of endonucleolytic cuts in the 5' to 3' orientation [ 12 ]. 
The host endoribonucleases RNase G and RNase E are responsible for cutting at secondary sites, with RNase G playing a major role [14]. This finding appeared paradoxical, since these two endonucleases have a marked preference for RNA substrates bearing a monophosphate at their 5' extremities [18-20], while RegB produces 5'-hydroxyl RNA termini. Therefore, we suspected that T4 infection induced an activity able to phosphorylate the 5'-OH left by RegB, and the best candidate for this function is the phage-encoded 5' polynucleotide kinase/3' phosphatase (PNK). This enzyme catalyzes both the phosphorylation of 5'-hydroxyl polynucleotide termini and the hydrolysis of 3'-phosphomonoesters and 2':3'-cyclic phosphodiesters. Indeed, Durand et al. (2008, unpublished data) showed that the secondary cleavages are abolished in an infection with a phage that carries a deletion of the pseT gene, encoding PNK. In addition, many cleavages detected over a distance of 200 nucleotides downstream of the initial RegB cut (mostly generated by RNase E and a few by RNase G) disappear or are strongly weakened in the PNK mutant infection. The availability of a mutant affected only in the phosphatase activity (pseT1) made it possible to show that the phosphatase activity of PNK also contributes to mRNA destabilization from the 3' terminus. This presumably occurs through the conversion of 3'-phosphate into 3'-hydroxyl termini, making RNAs better substrates for polynucleotide phosphorylase, the only host 3' exoribonuclease that requires a 3'-hydroxyl terminus to act efficiently. The total inactivation of PNK increases the stability of some RegB-processed transcripts (Durand et al. 2008, unpublished data). Thus, the kinase and phosphatase activities of PNK control the degradation of some RegB-processed transcripts from the 5' and the 3' extremities, respectively.
This shows that the status of the 5' and 3' RNA extremities plays a major role in mRNA degradation (see also [21]). This was the first time a direct role was ascribed to T4 PNK in the utilization of phage mRNAs. In bacteriophage T4, as in other phages and bacteria where this enzyme is found, PNK is involved in tRNA repair, together with the RNA ligase, in response to cleavage catalyzed by host enzymes [22,23] (and see below). Durand's finding should prompt one to consider that, in addition to a role in RNA repair, prokaryotic PNKs might participate in the regulation of mRNA degradation. The data presented above show that RNase G, a paralogue of RNase E in E. coli, participates in the processing and decay of several phage transcripts [14] (Durand et al. 2008, unpublished data). Nevertheless, it clearly does not have the same general effect on phage mRNA as RNase E: the plating efficiency of T4 is reduced by only 30% on a strain deficient in RNase G (rng::Tn5) relative to a wild-type strain (Durand et al. 2008, unpublished data).

The RegB/S1 target site

It has been obvious since the initial discovery of RegB activity that not all intergenic GGAG sequences are cleaved by this RNase [13,24], suggesting that the motif is necessary but not sufficient for cleavage. RNA secondary structure protects against cleavage, and several phage mRNAs that carry an intergenic GGAG/U motif are resistant to the nuclease, including a few early, most middle and all late transcripts [11]. These GGAG-containing mRNAs are not substrates of the enzyme either in vitro or in vivo [11]. A SELEX (systematic evolution of ligands by exponential enrichment; [25]) experiment, based on the selection of RNA molecules cleaved by RegB in the presence of the ribosomal protein S1, yielded RNA molecules that all contained the GGAG tetranucleotide [26] and no other conserved sequence or structural motif.
However, in most cases the GGAG sequence was found in the 5' portion of the randomized region, suggesting that the nucleotide composition 3' to this conserved motif plays a role. More recently, using classical molecular genetic techniques, Durand et al. [9] showed that this is indeed the case. The strong intergenic RegB cleavage sites share the following consensus: GG*AGRAYARAA, where R is a purine (often an A, leading to an A-rich sequence 3' to the highly conserved GGAG motif), Y is a pyrimidine, and the star indicates the site of cleavage [9]. This unusually long nuclease recognition motif is reminiscent of cleavage sites for some mammalian endoribonucleases that function with auxiliary factors. One possible model assumes that the auxiliary factors bind the long nucleotide sequence and recruit the endonuclease [27]. Durand et al. [9] provided evidence that RegB alone recognizes the trinucleotide GGA, which it cleaves very inefficiently irrespective of its nucleotide sequence context, and that stimulation of the cleavage activity by S1 depends on the base composition immediately 3' to -GGA-.

RegB catalysis and structure

The bacteriophage T4 RegB endoribonuclease is a basic, 153-residue protein. Although its amino acid sequence is unrelated to that of any other known RNase, it was shown to be a cyclizing ribonuclease of the Barnase family, producing 5'-hydroxyl and 2',3'-cyclic phosphodiester termini, with two histidines (at positions 48 and 68) as the catalytic residues [28]. NMR was used to solve the structure of RegB and to map its interactions with two RNA substrates. Despite the absence of any sequence homology and a different organization of the active-site residues, RegB shares structural similarities with two E. coli ribonucleases of the toxin/antitoxin family, YoeB and RelE [29]. YoeB and RelE are involved in the inactivation of mRNAs translated under nutritional stress conditions [30,31].
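As an illustrative aside (not from the source): the cleavage-site consensus given above, GG*AGRAYARAA, can be scanned for computationally by expanding the IUPAC ambiguity codes R (purine) and Y (pyrimidine) into character classes. A minimal Python sketch:

```python
import re

# Expand IUPAC ambiguity codes used in the consensus into regex classes.
IUPAC = {"R": "[AG]", "Y": "[CU]"}
CONSENSUS = "GGAGRAYARAA"  # the cut (*) falls after the GG dinucleotide

def consensus_to_regex(consensus: str):
    return re.compile("".join(IUPAC.get(base, base) for base in consensus))

def find_regb_sites(rna: str):
    """Yield (cut_position, matched_site) for each consensus match."""
    for m in consensus_to_regex(CONSENSUS).finditer(rna):
        # RegB cleaves in the middle of GGAG, i.e. between GG and AG,
        # so the cut index is two bases into the match.
        yield m.start() + 2, m.group()

# Hypothetical demo sequence containing one consensus-matching site:
sites = list(find_regb_sites("UUCAGGAGAACAAAAUGG"))
```

This only locates candidate sites; as the text stresses, secondary structure and the S1 co-factor decide whether a given GGAG is actually cleaved.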
Interestingly, like RegB, RelE and, in some cases, YoeB recognize triplets on mRNAs, which they cleave between the second and third nucleotides. It has been proposed that RegB, RelE and YoeB are members of a newly recognized structural and functional family of ribonucleases specialized in mRNA inactivation within the ribosome [29] (Figure 1).

How does S1 activate the RegB cleavage reaction?

The E. coli S1 ribosomal protein is an RNA-binding protein required for the translation of virtually all cellular mRNAs [32]. It contains six homologous regions of about 70 amino acids each, called S1 modules (or domains), connected by short linkers. S1 binds to ribosomes through its two N-terminal domains (modules 1-2), while mRNAs interact with the C-terminal domain made up of the four other modules (3-4-5-6) [33]. S1-like modules are found in many proteins involved in RNA metabolism throughout evolution. The structures of these modules (based on studies of the E. coli S1 protein itself as well as RNase E and PNPase) are predicted to belong to the OB-fold family [34-38]. The modules required for RegB activation have been identified. The C-terminal domain of S1 (modules 3-4-5-6) stimulates the RegB reaction to the same extent as the full-length protein. Depending on the substrate, domain 6 can be removed without affecting the efficiency of the reaction. The smallest domain combination able to stimulate the cleavage reaction significantly is the bi-module 4-5 [9,39]. Interestingly, small-angle X-ray scattering studies performed on the tri-module 3-4-5 showed that the two adjacent domains 4 and 5 are tightly associated, forming a rigid rod, while domain 3 has no or only a weak interaction with the others. This suggests that S1 domains 4 and 5 cooperate to form an RNA-binding surface able to interact with the nucleotides of RegB target sites, while module 3 could help stabilize the interaction with the RNA [34].
The 3' A-rich sequence that characterizes strong RegB sites (see above) plays a role in the mechanism of stimulation by S1. Indeed, directed mutagenesis experiments showed that the stimulation of RegB cleavage by S1 depends on the nucleotides immediately 3' to the totally conserved GGA triplet: the closer the sequence is to the consensus shown above, the greater the stimulation by S1 [9]. The affinity of S1 for the A-rich sequence is no better than for any other RNA sequence (S. Durand and M. Uzan, unpublished data), suggesting that the function of this sequence is not simply to recruit S1 locally. Rather, specific interactions of S1 with the conserved sequence might make the G-A covalent bond more accessible to RegB. In support of this view, RegB alone (without S1) is able to perform efficient and specific cleavage in a small RNA carrying the GGAG sequence, provided that the GGA triplet is unpaired and the fourth G nucleotide of the motif is partly constrained [15]. The RegB protein shows very weak affinity for its substrates [26,28], and in fact no RegB-RNA complex can be visualized by gel-shift experiments. However, in the presence of S1, RegB-RNA-S1 ternary complexes can form, suggesting that the first step in the S1 activation pathway involves S1 interaction with the RNA (S. Durand and M. Uzan, unpublished observations). Taken together, these observations suggest that, through its interaction with the A-rich sequence 3' to the cleavage site, the S1 protein imposes a local constraint on the RNA, facilitating the association or reactivity of RegB. As RegB is easily inhibited by RNA secondary structures, one possibility was that S1 stimulates RegB through its RNA-unwinding ability [40,41]. However, Lebars et al. [15] provided evidence that does not support this hypothesis. Whether S1 participates in the RegB reaction as a free protein or in association with the ribosome or other partners in vivo remains to be determined.
However, the structural and mechanistic analogy of RegB to the two E. coli RNase toxins YoeB and RelE [29], which depend on translating ribosomes for activity [30], and the efficiency of RegB cleavage in vivo very shortly after infection [13], favor the likelihood that ribosomes participate in RegB processing of mRNAs in vivo.

Regulation and distribution of the regB gene

The regB gene is transcribed from a typical early promoter that is turned off two to three minutes after infection. The regB gene is also regulated at the post-transcriptional level, suggesting that the production of this nuclease must be tightly controlled. Indeed, RegB efficiently cleaves its own transcript in the SD sequence, indicating that RegB controls its own synthesis. Three other cleavages of weaker efficiency occur in the regB coding sequence, which probably contribute to regB mRNA breakdown [10]. Despite the fact that the RegB nuclease seems dispensable for T4 growth, the regB gene is widely distributed among T4-related phages. The regB sequence was determined from 35 different T4-related phages. Thirty-two of these showed striking sequence conservation, while three other sequences (from RB69, TuIa and RB49) diverged significantly. As in T4, the SD sequence of these regB genes is GGAG, with only one case (RB49) of GGAU. When experimentally tested, this sequence was always found to be cleaved by RegB in vivo, suggesting that translational autocontrol of regB is conserved in T4-related phages [42]. Mutants of regB are viable on laboratory E. coli strains, although their plaques are slightly smaller in minimal medium than those of the wild-type phage. Also, T4 regB mutants form minute plaques on the hospital E. coli strain CTr5x, with a plating efficiency one third of that on classical laboratory strains (M. Uzan, unpublished data).

What is the role of RegB in T4 development?
Early transcripts are synthesized in abundance immediately after infection, reflecting the exceptional strength of most T4 early promoters. In fact, effective promoter competition for RNA polymerase can be considered one of the first mechanisms leading to the shut-off of host gene transcription. Abundant and stable phage early transcripts would compete for translation with the subsequently made middle and late transcripts. Therefore, a specific mechanism leading to early mRNA inactivation and an increased rate of degradation should free the translation apparatus more rapidly and facilitate the transition between the early and later phases of T4 gene expression [5]. Functional endonucleolytic inactivation of mRNA is certainly a faster means to arrest ongoing translation and rapidly re-orient gene expression in response to changes in growth conditions or the stage of development. In this regard, it is striking that the two toxin endoribonucleases RelE and YoeB, to which RegB shows strong structural similarities (Figure 1) [29], also allow swift inactivation of translated mRNAs in response to nutritional stress. The finding that RegB shares structural and functional similarities with other toxin RNases that have antitoxin partners raises the possibility that an anti-RegB partner might be encoded by T4. On the other hand, RegB might not require an antitoxin to block its activity, since its in vivo targets disappear through mRNA decay shortly after it acts in the infected cell.

T4 Dmd and E. coli RNase LS antagonism

T4 Dmd controls the stability of middle and late mRNAs

The T4 early dmd gene (discrimination of messages for degradation) encodes a protein that controls middle and late mRNA stability. Indeed, an amber mutation in dmd leads to strong inhibition of phage development: protein synthesis is normal until the beginning of the middle period and collapses thereafter.
A number of endonucleolytic cleavages can be detected in middle and late transcripts that are not present in wild-type phage infection. Consistent with this observation, the accumulation of these RNA species drops dramatically, and the chemical and functional half-lives of several middle and late transcripts were shown to be shortened [43-46]. The host RNA chaperone Hfq seems to enhance the deleterious effect of the dmd mutation [47]. These data strongly suggest that the arrest of protein synthesis in T4 dmd mutants is the consequence of mRNA destabilization and that the function of the Dmd protein is to inhibit an endoribonuclease that targets middle and late transcripts. The endoribonuclease responsible for middle and late mRNA destabilization in the dmd mutant is of host origin, as shown by the fact that a late mRNA (soc) produced from a plasmid in uninfected bacteria undergoes the same cleavages as those observed after infection by a dmd mutant phage [43,48]. Yonesaki's group further showed that this RNase activity depends on a new endonuclease, RNase LS, for late gene silencing in T4. Several E. coli mutants able to support the growth of a dmd mutant phage were isolated, among which two very efficiently reversed the dmd phenotype. Both mutations were mapped within the ORF yfjN, which was renamed rnlA [44,45,48].

Biochemical characterization of RNase LS

Purified His-tagged RnlA protein cleaves the late soc transcript in vitro at only one site among the three usually observed in vivo after infection with a dmd mutant phage. This cleavage is inhibited by purified Dmd protein [49]. Thus, RnlA has an RNase activity that responds directly to Dmd. Whether RnlA has targets in other T4 mRNAs remains to be determined. Biochemical experiments showed that RNase LS activity is associated with a large complex whose molecular weight was estimated at more than 1,000 kDa. More than 10 proteins participate in the complex.
Two of them have been identified: RnlA and triose phosphate isomerase. The latter is present in stoichiometric amounts relative to RnlA and binds very tightly to it [45,49]. Interestingly, a mutation in the gene for triose phosphate isomerase partially restores the growth of a T4 dmd mutant, suggesting that RnlA and triose phosphate isomerase interact functionally. It is unclear whether RNase LS carries only one RNase activity (presumably that of the RnlA protein) or more, and whether the activity of RnlA is modulated by other components of the complex. The multi-protein complex that constitutes RNase LS is not simply a modification of the host degradosome to contain the RnlA protein during T4 infection, since the dmd phenotype is not reversed in infection of an RNase E host mutant (rneΔ131) unable to assemble the degradosome [48].

The specificity of RNase LS and coupling with translation

The specificity and mode of action of RNase LS are not yet understood. Most of the ~30 cleavages analyzed in various middle and late transcripts occur 3' to a pyrimidine in single-stranded RNA. Nucleotides 3' to the cleavage site might also play a role. Apart from these observations, no sequence or structural motif seems to be shared by the RNase LS target sites [43,44,50,51]. The presence of ribosomes loaded on the mRNA seems to be required for some RNase LS sites to be efficiently cut. The ribosomes may be either translating or pausing at a nonsense codon. In the latter case, new RNase LS cleavage sites appear at some distance (20-25 nucleotides) downstream of the stop codon [44,48,51]. It has been suggested that ribosomes act through their RNA-unwinding property, maintaining the RNA in a locally single-stranded conformation; in the absence of translation, a number of potential RNase LS sites would be masked by secondary structure [51]. Whether this is the only role of the ribosome in RNase LS activation is an open question.

The role of RNase LS in E. coli

A mutation in the E. coli rnlA gene, whether a point mutation or an insertion, leads to a reduction in colony size on minimal medium but has no effect on growth in rich medium. Growth of rnlA mutants is, however, dramatically affected in rich medium supplemented with high sodium chloride concentrations, thus providing a phenotype for rnlA mutants. RNA is stabilized by 30% on average in an rnlA mutant. RNase LS was shown to participate in the degradation of specific mRNAs, as reflected by the prolonged functional lifetime of several mRNAs in the rnlA mutant: the rpsO, bla and cya mRNAs are stabilized 2- to 3-fold in the rnlA mutant, while other transcripts are unaffected. The greater stability of the cya mRNA (adenylate cyclase) in an rnlA mutant might indirectly account for the sensitivity of rnlA cells to NaCl [45,52]. In addition to moderately controlling the decay of some bacterial transcripts, it is possible that the primary function of RNase LS is host defense against phage propagation, and that Dmd is the phage's response to overcome this defense.

Other activities implicated in RNA decay during T4 infection

The E. coli poly(A) polymerase (PAP), encoded by the pcnB gene, adds poly(A) tails to the 3' ends of E. coli mRNAs and contributes to the destabilization of transcripts [53]. T4 mRNAs are probably not polyadenylated. Indeed, it has been found that after infection with the closely related bacteriophage T2, host poly(A) polymerase activity is inhibited [54]. Also, no poly(A) extension could be detected at the 3' ends of the soc and uvsY transcripts after infection with T4 [55], suggesting that bacteriophage T4 infection also leads to PAP inhibition. This could, for example, occur through ADP-ribosylation of the protein. Growth of bacteriophage T4 on an E. coli strain carrying the rneΔ131 mutation, which is unable to assemble the RNA degradosome, is unchanged relative to infection of a wild-type strain [48] (also, S. Durand and M.
Uzan, unpublished data). However, the rneΔ131 mutation has no effect on the growth of E. coli either, despite affecting the stability of several individual transcripts [56-59]. Therefore, the question of whether the degradosome plays a role in the turnover of some T4 mRNAs, or is modified after infection, remains open. Similarly, whether the host RNA pyrophosphohydrolase RppH [21,60] is implicated in T4 mRNA turnover has not yet been determined. Infection with bacteriophage T4 expedites host mRNA degradation. The two long-lived E. coli mRNAs, lpp and ompA, are dramatically destabilized after infection with T4. The host endonucleases RNases E and G are responsible for this increased rate of degradation [61]. Phage-induced host mRNA destabilization requires the degradosome: indeed, the lpp mRNA is not destabilized after infection of a strain that carries a nonsense mutation in the middle of the E. coli rne gene (encoding RNase E), leading to a protein unable to assemble the degradosome. A viral factor is also involved, since a phage carrying the Δtk2 deletion, which removes an 11.3 kbp region of the T4 genome from the tk gene to ORF nrdC.2, loses the ability to destabilize host transcripts. The gene implicated has not yet been identified [61]. There is certainly an advantage for a virulent phage in accelerating host mRNA degradation immediately after infection, as this provides ribonucleotides for nucleic acid synthesis, frees the translation apparatus for viral mRNAs, and facilitates the transition from host to phage gene expression. The endoribonucleases and other enzymes involved in mRNA degradation and modification during T4 infection are listed in Table 1.

Inhibition of translation initiation

RegA translational repression

Inhibition of middle transcription, some 12-15 minutes post-infection at 30°C, is concomitant with the strong activation of late transcription [62].
This is the consequence of competition among sigma factors and of the changed promoter specificity of the modified host RNA polymerase. Indeed, transcription initiation at T4 late promoters requires the phage-encoded late σ-factor, gp55, which replaces the major host σ70, and the T4-encoded gp33, which ensures coupling of late transcription with ongoing viral DNA replication [1,62-64]. Superimposed on this transcriptional regulation, the translation of a number of transcripts is inhibited by the RegA translational repressor. This small, 122-amino-acid protein competes with the ribosome for binding to the translation initiation regions of approximately 30 mRNAs [65].

RegA protein

The crystal structure of T4 RegA is a homodimer, with symmetrical pairs of salt bridges between Arg-91 and Glu-68 and pairs of hydrogen bonds between the Thr-92 residues of both subunits [66] (Figure 2). The monomer subunit has an alpha-helical core and two anti-parallel beta-sheet regions. Two of the beta strands in the four-stranded beta-sheet region B were identified by Kang et al. [65] as having amino acid sequences similar to RNP-1 and RNP-2, which are well-characterized RNA-binding motifs. In addition, two pairs of lysines, K7-K8 and K41-K42, occupy the same positions in the proposed RegA RNP-1 domain [66] as in the U1A RNA-binding protein, where they comprise basic "jaws" that straddle the RNA. However, none of the regA mutations identified in either T4 or phage RB69 prior to the availability of the RegA structure affected these lysine residues [65]. Structure-guided mutagenesis, summarized below, also did not implicate the lysines or the RNP-like domains in direct RNA binding by RegA. Concurrent with the T4 RegA structure determination, E. Spicer's group reported a terminal deletion mutant comprising residues 1-109 that bound RNA with reduced affinity, with 28% of the free energy of binding attributed to the terminal 10% of the protein [67].
It was also shown, by proteolytic cleavage of free RegA and of RegA bound to an RNA oligonucleotide (the gene 44 operator), that a conformational change in RegA upon RNA binding affects access to the C-terminal region. The C-terminal region is part of beta-sheet region A of RegA [66], appears to be solvent-exposed, and thus could potentially interact with RNA in some manner. However, with the RegA structure available, targeted substitutions in the protein would reveal that specific RNA recognition likely occurs in an entirely different region of the protein. Structure-guided mutagenesis of RegA was undertaken to evaluate some of these findings and to understand the specific interactions involved in RNA binding. Binding stoichiometry of RegA:gene 44 RNA complexes, glutaraldehyde cross-linking of RegA, and mutagenesis of amino acids in the inter-subunit interface showed that T4 RegA is a dimer in solution (as also revealed in the crystal structure) but binds RNA as a monomer [68]. A 1:1 RNA:RegA monomer stoichiometry was independently shown using electrospray ionization mass spectrometry [69]. Mutagenesis of Arg91 again suggested that at least some residues in the C-terminal region are involved in subunit interactions and in RNA recognition [66-68]; Arg91 appears more relevant for RNA binding, whereas Thr92 is more relevant for dimerization. Spicer and colleagues further demonstrated that 19 mutations substituting amino acids at T4 RegA surface residues of both beta structures, including residues similar to the RNP-1 and RNP-2 motifs proposed by Kang et al. [66], as well as the two paired lysines, had essentially no effect on RNA binding affinity or on RegA structure [70].
Together with mutations in helix A, and interpretation of mutations in T4 and RB69 regA that were isolated prior to the structure determination [71], a somewhat unique RNA-binding helix-loop groove (or "pocket") of RegA was proposed to provide the primary RNA recognition element of the protein. Modeling of the 78%-conserved phage RB69 RegA protein showed that it, too, likely contains this unique RNA-binding structure [72]. Exposed residues on helix A (i.e., Lys14, Thr18, Arg21) are conserved, and substitutions reduce RNA binding substantially. Additionally, a conserved-loop Trp81-to-Ala81 substitution in both proteins abolishes RNA binding [72]. Phe106, earlier shown to crosslink with bound RNA, is positioned in a loop bordering the other end of the helix and further defines the apparent binding pocket [67,70,72]. Figure 2 summarizes these findings. In summary, biochemical and structural studies of T4 and RB69 RegA have led from inferences of possible RNA-binding motifs to structure-guided mutagenesis revealing a unique protein pocket or groove that, in the monomer form, accommodates the many different mRNAs to which RegA proteins bind to cause translational repression. The apparent binding domain and exposed amino acids are largely conserved in RegA proteins from the diverse phages sequenced to date (Figure 3). As for gp32 and gp43, a RegA-RNA complex has not been structurally resolved, and additional analysis of RegA-RNA interactions in the helix-loop groove would be of interest.

RegA RNA operators

Early genetic and translational repression assays confirmed that RegA binding sites on mRNA overlap the AUG translation initiation codon, or are located immediately 5' to the AUG, and that occluding the site reduces formation of the ternary translation initiation complex; decay of the repressed messages is then enhanced [65].
The lack of clear sequence conservation or secondary structure defining the RegA binding sites in the ~30 repressed mRNAs prompted the use of RNA SELEX with T4 RegA to capture high-affinity RNA ligands. This RNA binding-site selection was thus performed in the absence of the constraints imposed on the sequence by 30S ribosomal subunits, which bind the same region of mRNA for translation initiation [73]. Emerging from multiple rounds of SELEX was an RNA consensus sequence, 5'-aaAAUUGUUAUGUAA-3', that bound RegA with an apparent Kd of 5 nM (the lower-case 5' bases were already present in the non-variable starting regions of the RNA). The sequence showed no apparent structure when probed with nucleases or base-modifying chemicals, consistent with earlier observations that biologically relevant RegA binding sites lack clear RNA secondary structure. Although the T4 RegA SELEX sequence is similar to mRNA sequences repressed by RegA (i.e., T4 gene rIIB, AAAAUUAUGUAC; gene 44, AAAUUAUGAUU; dexA, AAAAUUUAAUGUUU), there was no exact match between it and the repressed T4 messages [73]. These findings emphasize that T4 RegA binding sites are A+U rich, include an AUG and a 5' poly(A) tract, and lack apparent structure; in general, they illustrate how an RNA-binding determinant has evolved to occur on many different mRNAs where fMet-tRNA and the 30S ribosomal subunit also bind. RNA sequences bound by phage RB69 RegA have also been examined [65,72,74,75]. Translational repression occurs on mRNAs from both phages, although the binding affinities displayed by the two proteins differ in vivo and in vitro; a hierarchy of early and middle genes repressed by T4 RegA is also seen with RB69 RegA. For RB69 RegA, the protein protected a region between the Shine-Dalgarno sequence and the AUG at the gene 44/gene 45 junction, but not the initiator AUG itself [72]. The protein would still compete for the same binding site as the ribosome.
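The shared features of these sites (A+U richness, an internal AUG) can be checked directly from the sequences quoted above. A small illustrative sketch (not an analysis from the source):

```python
# Sequences quoted in the text: the T4 RegA SELEX consensus and three
# RegA-repressed translation initiation regions.
SITES = {
    "SELEX":  "AAAAUUGUUAUGUAA",
    "rIIB":   "AAAAUUAUGUAC",
    "gene44": "AAAUUAUGAUU",
    "dexA":   "AAAAUUUAAUGUUU",
}

def au_fraction(seq: str) -> float:
    """Fraction of A and U residues in an RNA sequence."""
    return sum(base in "AU" for base in seq) / len(seq)

def has_aug(seq: str) -> bool:
    return "AUG" in seq

summary = {name: (round(au_fraction(s), 2), has_aug(s))
           for name, s in SITES.items()}
```

Running this confirms the text's characterization: every listed site is more than 80% A+U and contains an AUG, while no two sequences are identical.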
Using a stringent but reduced number of selection cycles, RNA SELEX was performed with immobilized RB69 RegA and a variable sequence of 14 bases [75]. The selected RB69 RegA RNAs were predominately 5'-AAUAAUAAUAAnA-3', which did not contain a conserved AUG but were clearly A+U rich. As discussed by Dean et al. [75], a stop codon (i.e., UAA) of an upstream gene falling within the ribosome-binding-site region of the adjacent downstream gene may contribute a relevant sequence for RNA recognition by RegA proteins. All of these findings emphasize the range of RegA repression efficiencies at different sites, the lack of RNA structure in the binding sites, and the variable mRNA sequences to which the protein binds.

Specific autocontrol of translation: gp32 and gp43

Besides the two general post-transcriptional regulators, RegA and RegB, the T4 DNA-unwinding protein gp32 and the DNA polymerase gp43, both involved in DNA replication, recombination and repair, autogenously regulate their own translation.

Control of gene 32 translation and mRNA degradation

Gene 32 encodes a single-stranded DNA-binding protein (gp32) essential for replication, recombination and repair of T4 DNA. It appears after a few minutes of infection, reaches a maximum around the 12-14th minute, and declines thereafter. In addition to being temporally regulated at the transcriptional level, gp32 inhibits its own translation when the protein accumulates in excess over its primary ligand, single-stranded DNA. This regulation is achieved through binding of gp32 to a pseudoknot RNA structure located in the 5' region, 67 nucleotides upstream of the gene 32 translation initiation codon. This binding is thought to nucleate cooperative binding through an unstructured A+U-rich sequence (including several UUAA(A) repeats 3' to the pseudoknot) that overlaps the ribosome binding site [3,6,65]. Gp32 is a Zn(II) metalloprotein with three distinct binding domains [76].
To date, the structure of full-length gp32 has not been determined, nor has the protein in complex with RNA been structurally examined. It has been presumed that DNA and RNA are alternative ligands that bind in the same cleft. Although gp32 interactions with ssDNA and with proteins of the DNA replication apparatus have been studied extensively, few studies have investigated either the RNA pseudoknot in the mRNA autoregulatory site or the molecular details of gp32-RNA interactions. NMR analysis of the phage T2 gene 32 pseudoknot revealed two A-form helices coaxially stacked, with two loops separating the two helical structures [77] (Figure 4). A related translational regulatory structure is present in the gene 32 leader mRNA of the phylogenetically related T4-type phage RB69 [78]. In this case, sequence alignment, chemical and RNase sensitivity, and gp32-RNA footprinting revealed mRNA operator similarities and differences that explain the overlapping yet distinct RNA-binding properties of the two gene 32 proteins [78]. However, the genome sequence of the T4-type coliphage RB49 revealed no conserved pseudoknot or A+U-rich sequence near the predicted ribosome binding site of its gene 32 mRNA [79]. More thorough study of translational autocontrol by gp32 in diverse T4-related phages is needed. To date, the T4-type phage gene 32 RNA pseudoknot may still be the only viral example of this structure being used in autoregulation of translation. The various biological roles of viral RNA pseudoknots have been well reviewed by Brierley et al. [80]. The gene 32 transcripts are more stable than any other T4 mRNAs: a half-life of 15 minutes was measured at 30°C and, under derepression conditions (in a T4 gene 32 mutant infection unable to achieve translational repression), the half-life can reach 30 minutes [81,82], indicating that translation of the gene 32 mRNA positively affects its stability.
All the gene 32 mRNA species are processed by RNase E, 71 nucleotides upstream of the translation initiation codon of the gene [ 83 , 84 ]. In addition to the cleavage at -71, two other major cleavages were identified, one far upstream in the polycistronic transcripts (-1340) and the other at the end of the coding sequence of gene 32 (+831) [ 85 , 86 ]. The conservation of all three RNase E processing sites in 5 different T4-related phages, in spite of significant changes in the organization of the upstream regions, suggests that these cleavages play an important role in controlling expression of gene 32 and/or its upstream genes [ 86 ]. The new 3' ends created by RNase E processing are potential entry sites for the host 3'-5' exoribonucleases. In fact, portions of the transcript upstream of the -71 and -1340 cleavage sites were shown to be rapidly degraded [ 84 , 85 ]. The RNase E cleavage at +831 has no consequence for the functional decay of the gene 32 mRNA, although it affects the chemical decay [ 17 ]. It is noteworthy that this RNase E site is very close to the translation termination codon of gene 32. The E. coli ribosomal protein S15, encoded by the rpsO gene, autogenously regulates its own translation. The rpsO transcript carries a pseudoknot in its translational operator [ 87 ], like the T4 gene 32 mRNA. Also, a strong RNase E cleavage site, involved in rpsO mRNA decay, lies at the end of the structural gene, in close proximity to the translation termination codon. Interestingly, ribosomes were shown to inhibit this distal RNase E cleavage [ 88 ]. On this basis, it is tempting to suggest that a ribosome that reaches the end of the gene 32 transcript would hinder RNase E access to the distal cleavage site. Thus, gene 32 transcripts that undergo RNase E processing at this site might be only those that have already been translationally inactivated, e.g., under repression conditions (excess of gp32 over single-stranded DNA).
This situation would promote rapid elimination of the untranslated gene 32 transcripts. Autocontrol of gene 43 translation Like gp32, T4 DNA polymerase (gp43) is an autoregulatory translational repressor protein; it binds an RNA operator sequence that includes a hairpin about 40 bases upstream of its translation initiation codon and sequence that overlaps the ribosome binding site [ 89 ]. Most T4 gene 43 transcripts are synthesized early during infection and have a half-life of approximately 3 minutes, yet it is these transcripts on which the polymerase exerts translational repression when not engaged in DNA replication [ 65 ]. gp43 RNA-binding determinants The structure of the closely related gp43 DNA polymerase of phage RB69 serves as an excellent model for α DNA polymerases that are conserved across phylogenetic domains [ 90 , 91 ]. Owing to the availability of the RB69 gp43 structure, more recent RNA binding studies have been conducted using this protein and its RNA operator. RB69 operator RNA chemically crosslinks with gp43 in the DNA binding "palm" domain, but other crosslinking sites, and residues protected from protease when the protein is bound to specific RNA, were distributed across domains of the polymerase. These numerous effects were attributable either to direct interactions or to conformational changes induced by RNA binding [ 92 ]. As for the gp32-RNA interactions, full appreciation of the contacts and conformational changes during binding of gp43 to its specific RNA target will require a solution or crystal structure of gp43-RNA complexes. Gene 43 mRNA autoregulatory site The gene 43 RNA operator includes an upstream hairpin, but there is no evidence that it forms a pseudoknot structure like that of the gene 32 binding site. While the T4 hairpin-loop operator is 18 bases and that of RB69 is 16 bases, the top 10 bases are identical, including the nucleotides in the loop [ 93 ].
The -UAAC- loop sequence of the T4 and RB69 operators was also predominant among the bases selected in the first RNA SELEX experiment that used gp43 for RNA binding site characterization [ 25 ]; it will be interesting to see whether any phage gp43 proteins closely related to the T4 protein have the SELEX major variant loop sequence (-CAAC-) in their native, autoregulatory RNA hairpins. Phage RB49 contains -UAAA- in its RNA loop, and various repression and RNA-protein interaction assays point to the 3' AC and AA loop bases as especially relevant for binding by these three phage proteins; however, some T4-related phages encode gp43 DNA polymerases that do not autoregulate translation [ 92 - 94 ]. Other T4 post-transcriptional control systems RNA structure at translation initiation regions RNA structure influences translation initiation of T4 mRNAs, especially where the structures serve as protein-binding targets in translational repression (i.e., gp32 and gp43 above; [ 65 , 95 ]). In addition, some T4 mRNAs form intramolecular RNA structures that directly contribute to the translation initiation efficiency of the respective mRNAs. Only a few advances have been made in the last decade on these cis-acting RNAs, which are briefly summarized here. We should note that no riboswitch system [ 96 , 97 ] or small, trans-acting regulatory RNA has been functionally characterized from T4; perhaps some of the genome sequences of T4-related phages will suggest good candidates for these types of RNAs. Two small RNAs, RNAC and RNAD, are transcribed from the T4 tRNA region, but their biological roles are unknown [ 95 ]. Examples of inhibitory RNA structures at translation initiation regions include mRNAs encoded by T4 genes e , soc , 49 , and I-TevI [ 65 ]. In each case, the Shine-Dalgarno sequence and/or the AUG start codon are sequestered in an RNA helix that reduces 30S subunit binding in forming the ternary translation initiation complex [ 98 ].
The well-documented case for gene e (T4 lysozyme) is that early during infection longer transcripts are made that extend into e and, if translated, could potentially lead to premature cell lysis. However, these longer transcripts clearly form the inhibitory RNA structure [ 98 ], reducing synthesis of lysozyme 100-fold [ 99 ] relative to transcripts lacking the RNA structure. Transcripts initiated from either of two T4 late promoters immediately upstream of the ribosome binding site lack the 5' portion of the gene e mRNA inhibitory structure and are well translated. Although there is no additional analysis of the e mRNA structures, similar leader RNA sequences are predicted from genome sequences of closely related T4-type phages, and each has a T4-type late promoter in the upstream region that encodes the 5' strand of the RNA structure. Therefore, early translation of these lysis genes may also be inhibited by intramolecular RNA structures (Figure 5 ). The T4 thymidylate synthase gene ( td ) contains an intron that encodes a homing endonuclease, I-TevI [ 100 ]. Similar to gene e , early and middle period transcripts that extend through the td 5' exon and into the intron do not yield translated I-TevI because of the tightly regulated, sequestered ribosome binding site [ 101 - 103 ]. Recently, Edgell and colleagues [ 104 ] showed that deletion of the nucleotides that comprise the late-promoter-proximal 5' portion of the RNA structure leads to increased levels of I-TevI throughout infection, translated from transcripts initiated at upstream early and middle promoters. In the presence of added thymidine, the mutant phage (ΔHP) showed no reduction in T4 viability or burst size attributable to increased translation initiation of I-TevI mRNA. However, a series of phage growth, RT-PCR, and tRNA suppressor assays led Gibb and Edgell [ 104 ] to conclude that tight regulation of I-TevI translation initiation by the RNA secondary structure increases intron splicing.
That is, the structure reduces ribosome loading and movement through the intron RNA, thereby promoting structure formation (P6, P6a and P7) in the intron. Loss of translation inhibition disrupts intron RNA folding and splicing, and prevents proper accumulation of thymidylate synthase [ 104 ]. Similar structures predicted to cause negative translational regulation of I-TevI RNAs have been identified in T4-related phages, and also in the translation initiation regions of phage I-TevII and I-TevIII homing endonuclease genes [ 105 , 106 ]. Stand-alone homing endonucleases (not located within an intron) also have RNA structures that have been shown ( Aeromonas phage Aeh1 mobE ; [ 106 ]) or implicated (T4 segB ; [ 107 ]) to reduce translation initiation. Intramolecular RNA structures in T4 mRNAs have also been shown to improve translation initiation. Of particular note are T4 genes 38 and 25 [ 108 ]. In these cases, suboptimal, extended spacing between the Shine-Dalgarno sequence and the AUG start codon is brought to a functional distance by an RNA secondary structure between the SD and AUG. For gene 38 mRNA, the spacing of 22 nucleotides is reduced to 5 nucleotides by the structure; for gene 25 the structure reduces the spacing from 27 to 11 nucleotides [ 108 ]. Mutations in the intervening sequence that destabilize the structure reduce translation initiation efficiency. More recently, Malys and Nivinskas [ 109 ] used reporter assays of the gene 25 TIR region fused to lacZ , in conjunction with DMS probing of the intervening RNA structure, to confirm the "split" TIR arrangement and its use for effective expression of gene 25 . Phylogenetic evaluations of 38 T4-related phages revealed that the close T-even phages all have the intervening RNA structure in the split TIR configuration, but more distant, non-coliform T4-related phages lack this arrangement (Figure 5 ).
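Under the simplifying assumption that nucleotides folded into the intervening hairpin drop out of the linear SD-to-AUG distance entirely, the structure's footprint can be back-calculated from the spacings quoted above. This is a toy calculation of ours, not an analysis from [ 108 ] or [ 109 ]:

```python
def effective_spacing(total_nt, structure_nt):
    """Effective SD-to-AUG spacing once structure_nt nucleotides of the
    intervening region are sequestered in a hairpin (assumes the folded
    nucleotides contribute nothing to the linear spacing)."""
    if structure_nt > total_nt:
        raise ValueError("structure cannot exceed the total spacing")
    return total_nt - structure_nt

# Footprints implied by the reported spacings:
# gene 38: 22 nt reduced to 5 nt  -> 17 nt folded
# gene 25: 27 nt reduced to 11 nt -> 16 nt folded
print(effective_spacing(22, 17), effective_spacing(27, 16))  # -> 5 11
```

Both effective spacings fall within the range typical of efficiently translated E. coli messages, which is the point of the "split" TIR arrangement.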
This suggested an evolutionary history for the gene 25 split TIR, along with the enhancing, intervening RNA structure, in which the arrangement arose after the close T-even phages diverged from other members of the phage group [ 109 ]. T4 exclusion and the mechanism of bacterial PrrC anticodon nuclease T4 mutants defective in polynucleotide kinase (Pnk) or RNA ligase 1 (Rnl1) grow normally on E. coli laboratory strains, but are restricted on some E. coli hospital strains. The restrictive hosts are referred to as prr + for T4 pnk or rli mutants. T4 intergenic suppressors of the restriction of pnk or rli mutants on prr + hosts define the T4 stp locus (see early reports cited in [ 23 , 110 ]). This system of growth restriction results from activation of a host anticodon nuclease (ACNase), PrrC, by the phage-encoded Stp protein. The bacterial PrrC RNase cuts within the tRNA Lys anticodon loop, upstream of the wobble nucleotide, and causes the arrest of phage protein synthesis and phage growth. T4 has evolved a tRNA repair mechanism to escape this restriction by way of the phage-induced polynucleotide kinase/3'-phosphatase ( pnk gene), which converts the tRNA 5'-hydroxyl and 3'-phosphate termini left by PrrC into 5'-phosphate and 3'-hydroxyl ends. Subsequently, T4 RNA ligase 1 rejoins the tRNA ends. Stp, Pnk and Rnl1 are all expressed in the delayed early mode [ 23 ], meaning that restoration of the cleaved tRNA Lys takes place early during infection. The E. coli prrC gene is located within a group of genes that encode type Ic restriction-modification (R-M) proteins, EcoprrI, in the order hsdM-hsdS-prrC-hsdR (or prrABCD ). The Hsd enzymes are assembled in a multimeric complex, HsdR 2 M 2 S [ 23 , 110 - 112 ]. Stp alleviates type Ic restriction and activates the tRNA Lys ACNase Although only 26 residues long, the Stp polypeptide is necessary and sufficient to elicit the tRNA Lys ACNase activity, and mutations in the stp gene abolish activation of the ACNase.
Expression of the Stp protein from a plasmid also elicits ACNase activity in an uninfected prr + strain [ 113 ]. Stp alleviates EcoprrI-mediated DNA restriction, indicating that this protein targets the EcoprrI complex rather than PrrC directly. Several observations support this explanation: a) growth of the lambdoid phage HK022, propagated on prr 0 cells, is heavily restricted upon plating on a prr + strain; b) expression of Stp from a plasmid in prr + cells alleviates this restriction; c) prr + cells do not restrict growth of phage HK022 prepared on a prr + host expressing Stp. This strongly suggests that Stp inhibits the EcoprrI restriction enzyme but does not affect the modification activity. Also, the fact that EcoR124I, another type Ic R-M system that does not include an ACNase, is inhibited by Stp strongly supports this conclusion. Stp is specific for type Ic R-M systems; it has no effect on the type Ia R-M systems EcoKI and EcoBI [ 113 ]. The N-proximal 18 amino acids of the Stp protein are probably involved in the interaction with EcoprrI, since a number of missense stp mutants deficient in ACNase activation have been detected among revertants of T4 pnk or rli mutants that are able to grow on the E. coli prr + host. The majority of these suppressors cluster between residues 4 and 14 in the N-terminal part of the Stp polypeptide. In contrast, a deletion of 8 codons from the C-terminus only moderately decreases the two activities of Stp. Alignment of Stp sequences from eight T4-related phages with that of T4 reveals an almost absolute conservation of the 18 N-terminal residues, whereas polymorphism is evident in the remainder of the polypeptide. In most cases, the amino acids important for ACNase activation are also implicated in EcoprrI inhibition, suggesting some shared features [ 113 ].
We direct the reader to the primary literature by Kaufmann and colleagues [ 113 ], which hypothesizes on the evolutionary history of Stp in counteracting host DNA restriction enzymes while also activating the host ACNase. Interaction of PrrC with EcoprrI and mechanism of ACNase activation It appears that PrrC is maintained in a latent, inactive form due to its association with the EcoprrI proteins. Antibodies against the closely related EcoR124I R-M system co-immunoprecipitate the PrrC protein. Conversely, antiserum against PrrC precipitates the HsdR (PrrD) protein [ 114 , 115 ]. Activation of latent ACNase in prr + cell extracts requires both Stp and GTP and is likely accompanied by GTP hydrolysis, since addition of the non-hydrolysable analogue, GTPγS, is inhibitory. DNA is another positive effector of the ACNase. Indeed, if the cell extract is treated with DNase I, activation by Stp is abolished. The activating DNA must carry cleavable (unmodified) EcoprrI restriction sites to be effective in ACNase activation. This led to the proposal that Stp activates the latent ACNase when its EcoprrI partner is tethered to EcoprrI DNA substrates [ 114 - 116 ]. Induction of prrC from a multicopy plasmid elicits ACNase activity in uninfected E. coli cells or in cultured mammalian cells [ 116 , 117 ]. This occurs in the absence of any other prr genes. In E. coli , this core ACNase is highly labile (t1/2 < 1 minute at 30°C), while the ACNase found in extracts of prr + cells is rather stable, indicating that the association with the Hsd proteins stabilizes PrrC [ 115 ]. In crude cell extracts, as well as with a partially purified leaky mutant form of PrrC (more stable than the wild-type enzyme; see [ 22 ]), the core ACNase is not affected by Stp and is indifferent to the presence of DNA. This suggests that the role of these two effectors is to alleviate the Hsd masking effect on PrrC [ 116 ].
dTTP and other pyrimidine nucleotides, but not GTP or ATP, stimulate core ACNase activity at physiological concentrations, most probably by stabilizing the protein. ATP, GTP and dTTP bind to the NTP-binding domain of PrrC (see below) [ 22 , 116 ]. Unexpectedly, GTP is inhibitory. The reason why the core ACNase does not respond to GTP like the holoenzyme is unclear. Although this nucleotide binds PrrC (see below), it is possible that the GTPase catalytic site becomes active only when PrrC is associated with the Hsd component. However, it must be noted that the purified PrrC used in this study bears a leaky mutation, D222E, which confers higher stability on the protein and permits its purification. Unfortunately, this mutation lies in the Walker B motif, which might affect the GTPase activity [ 22 , 116 ]. Several pyrimidine nucleotides are able to activate the latent ACNase in the absence of Stp, but at concentrations far above those required to protect the core ACNase or to UV-crosslink with PrrC (see below); dTTP is the most potent of them [ 116 ]. As with Stp, activation by dTTP requires GTP hydrolysis and EcoprrI DNA substrate. However, unlike Stp, which targets the Hsd complex, dTTP targets PrrC directly. The physiological meaning of this alternative mode of activation is not clear. It has been interpreted to mean that the ACNase may be mobilized under cellular stress conditions not related to T4 infection. An alternative, but not exclusive, model assumes that dTTP is an obligatory co-activator working in concert with Stp. Because dTTP binds PrrC with high affinity, trace amounts of this nucleotide in crude extracts would be sufficient to allow latent ACNase activation upon Stp addition. Excess dTTP would bypass the requirement for Stp [ 22 ]. PrrC structure, domain organization and distribution The N-proximal two-thirds of the PrrC protein (ca.
265 residues out of 396) harbors a nucleotide-binding site and is thought to mediate activation of the latent ACNase (Figure 6 ). It features motifs that resemble those found in typical ABC-transporter ATPases: a somewhat degenerate ABC signature motif, Walker A (phosphate-loop) and Walker B motifs, and an H-motif that contains a highly conserved His (the linchpin His) [ 22 ]. Mutations in the universally conserved residues of the Walker A motif of PrrC abolish ACNase activity [ 116 ]. ATP, GTP and dTTP bind to this region of PrrC, as a mutation lying immediately upstream of the ABC signature severely decreases the ability of the protein to UV-crosslink with all three nucleotides. Because dTTP activates the ACNase by targeting PrrC directly and requires GTP hydrolysis, the binding sites for the two nucleotides are likely different in the PrrC oligomer. GTP and ATP likely share the same site. The interaction of dTTP with PrrC differs from that of the two other nucleotides in several respects. Mutations in the N-proximal Walker A motif, in the ABC signature sequence, or in the linchpin His do not affect the binding of ATP or GTP, while they abolish dTTP binding to PrrC. Also, the affinity of dTTP for PrrC is three orders of magnitude higher than that of the two other nucleotides. Furthermore, mild heat inactivation of the ACNase has little consequence for ATP or GTP binding but abolishes dTTP binding. This suggests that the dTTP binding site is distinct and is sensitive to small changes in PrrC structure [ 22 , 116 ]. The C-terminal third of PrrC is implicated in tRNA recognition and catalysis. Several missense mutations affecting ACNase activity are located in this region (Figure 6 ). These were selected as mutations conferring the ability to survive the lethal overproduction of PrrC [ 118 ].
Because most of these substitutions are clustered in a short sequence highly conserved in a subset of the known PrrC homologues (residues 287 to 303) [ 22 , 118 , 119 ], the behavior of an 11-residue peptide (residues 284 to 294) was examined for RNA substrate interactions. This peptide forms UV-induced crosslinks with tRNA Lys anticodon stem-loop analogs and inhibits the ACNase activity of PrrC. Introducing certain substitutions into the peptide that are known to inactivate full-length PrrC, or shortening it by one amino acid from either end, leads to a strong decrease in its ability to inhibit the ACNase and to UV-crosslink with the anticodon stem-loop substrates [ 119 ]. Thus, this sequence is likely a part of the PrrC protein that interacts with the tRNA. In addition, substitutions of Arg320, Glu324 and His356, residues that are 100% conserved in all known PrrC homologs and suspected to participate in the acid-base catalytic mechanism, completely abolish ACNase activity. Null mutations in Arg320 and Glu324 can be rescued chemically by small molecules, indicating that the ACNase deficiency does not arise from a change in the structure of the protein, but rather from the lack of the correct amino acid side chain. This is compatible with the notion that at least two of the three conserved residues are implicated in catalysis [ 22 ]. Orthologues of prrC were found in 19 distantly related bacteria, all linked to genes for type Ic R-M enzymes. All of the orthologue proteins share in their N-terminal domains the NTP-binding site and a sequence of 15 residues called the "PrrC box". Also, their C-terminal domains contain the catalytic amino acid triad mentioned above. Thus, the PrrC proteins form a family whose members are strongly suspected not only to possess anticodon nuclease activity (as shown to be the case for those encoded by Haemophilus influenzae and Streptococcus mutans [ 120 ]), but also to be regulated like E. coli PrrC.
However, their substrates may vary, since the sequence involved in tRNA recognition varies among the PrrC proteins [ 22 , 119 ]. ACNase activity co-elutes from a gel filtration column with a homo-oligomer of ca. 200 kDa, suggesting that active PrrC could be a tetramer. Glutaraldehyde protein-protein crosslinking experiments confirm this, as mostly dimers and tetramers are produced [ 22 ]. Klaiman et al. [ 119 ] presented evidence suggesting that the C-terminal region of PrrC, involved in tRNA recognition, interacts with the substrate as a parallel dimer. Thus, while the N-terminal domain of PrrC is expected to associate in a head-to-tail dimer, by analogy with known structures of ABC transporter ATPases, the C-terminal region seems to dimerize in the opposite orientation. To account for this situation, Klaiman et al. [ 119 ] proposed a model in which the PrrC subunits are associated in a unique tetramer conformation. Clearly, additional structural studies are necessary to elucidate the oligomeric structure of PrrC. ACNase specificity Kaufmann and colleagues have shown that the tRNA Lys anticodon stem-loop region plays a prominent role in PrrC recognition: (a) PrrC, when overproduced in cells, cleaves other tRNAs in addition to tRNA Lys ; the anticodon sequences of all these secondary tRNA substrates share sequence similarities with that of tRNA Lys [ 118 ]. (b) Expression of PrrC in human HeLa cells elicits cleavage of intracellular tRNA Lys3 , which shares with E. coli tRNA Lys the same anticodon loop sequence [ 117 ]. (c) Most mutations in the tRNA Lys anticodon sequence make the resulting tRNAs very poor substrates for the ACNase. One of them, however (U35 -> C, leading to a UCU anticodon), leads to relaxed site specificity, as new cleavages occur upstream and downstream of the usual cleavage site [ 121 ]. (d) A chimeric, unmodified tRNA Arg1 carrying the UUU lysine anticodon instead of its own anticodon is as efficiently cleaved as unmodified tRNA Lys [ 121 ].
(e) PrrC quite efficiently cleaves a fragment of tRNA Lys encompassing only the anticodon loop and the first 5 base pairs of the associated stem (17 nucleotides altogether) [ 122 ]. (f) Cleavage of tRNA Lys that lacks either of the two modifications of the uridine wobble base (2-thio- and 5-methylaminomethyl) is severely affected. Interestingly, three substitutions of PrrC Asp287 (D287Q, D287H and D287N), known to reduce the efficiency of cleavage of normally modified E. coli tRNA Lys , reverse the negative effect of the hypomodifications of the wobble base. This strongly supports the notion that Asp287 directly contacts the modified wobble base. Experiments carried out with anticodon stem-loops (17-mers) as substrates reinforce this conclusion. Indeed, Jiang et al. [ 122 ] showed that the wobble base modification present in the anticodon stem-loop derived from mammalian tRNA Lys3 (5-methoxycarbonyl-2-thiouridine instead of 5-methylaminomethyl-2-thiouridine) is inhibitory to ACNase activity. However, D287H PrrC, poorly active on the fully modified E. coli anticodon stem-loop counterpart, overcomes this inhibitory effect [ 121 , 122 ]. (g) The influence of stem stability and of the three different modifications in these anticodon stem-loop structures was examined in great detail. The picture that emerges is as follows. A stable stem is inhibitory to ACNase activity; some breathing of the duplex seems necessary, possibly to facilitate conformational changes of the tRNA upon interaction with PrrC. Also, PrrC seems to favor base modifications that help stack the anticodon nucleotides into an A-RNA conformation [ 122 ]. Thus, three elements are recognized by PrrC: the anticodon sequence, the base modifications and the base-pairing of the stem. Although the anticodon stem-loop region of tRNA Lys is the predominant element of PrrC specificity, other sequence and/or structural elements of tRNA Lys seem to be involved.
This is indicated by the fact that chimeric tRNAs other than tRNA Arg1 carrying the lysine anticodon are not substrates for PrrC. Also, any substitution of the discriminator nucleotide (A73) of tRNA Lys , a major identity element of LysRS that lies in the acceptor arm, reduces, though moderately, the ACNase cleavage efficiency. Furthermore, trimming the 3'-terminal ACCA overhang nucleotides has little effect on ACNase activity but relaxes the cleavage site specificity in a manner similar to the U35 -> C mutation [ 121 ]. These data suggest additional interactions between PrrC and the acceptor region of tRNA Lys . Gathering the data into a model Taken together, the above data suggest the following cascade of events. A few minutes after infection of prr + E. coli cells, the T4-encoded Stp polypeptide binds the bacterial EcoprrI component and inhibits its DNA restriction activity (Figure 6 ). This modifies the EcoprrI/PrrC interaction, inducing a change in PrrC conformation that unmasks ACNase activity. This process requires GTP hydrolysis. dTTP, bound to PrrC, is a co-activator with Stp; its role could be to stabilize PrrC, which would otherwise be labile in its activated conformation. The tRNA Lys anticodon is then bound and cleaved by the respective ACNase regions. But the phage provides the healing (Pnk) and sealing (Rnl1) enzymes required to restore the affected tRNA, allowing the phage to escape this cellular defense. The phage exclusion mechanism depicted here, and the way the phage wards off this cellular defense, reveal an intimate physiological link between restriction-modification regulation and translational activity.
The distribution of PrrC homologs in unrelated bacteria, and their systematic link with type Ic R-M systems, suggest that the PrrC proteins have a cellular function not related to phage infection, possibly to disable protein synthesis under conditions of stress that affect the activity of type I DNA restriction endonucleases [ 22 , 111 , 120 ]. Cellular RloC proteins Using a bioinformatic approach, Davidov et al. [ 120 ] recently found a new class of PrrC homologs called RloC (restriction-linked orf). RloC proteins are widespread in bacteria, although they are not present in E. coli ; only one has been found in Archaea and none in Eukarya. Genes for some of these proteins were first characterized as linked to genes for type I or III R-M enzymes in Campylobacter jejuni [ 123 ]; however, only a minority of the rloC genes map to R-M loci. RloC orthologues share with E. coli PrrC the presence of ATPase motifs in their N-termini and the amino acid triad thought to constitute the catalytic site in their C-termini. This structural homology is accompanied by a functional homology: when expressed in E. coli , RloC from the thermophile Geobacillus kaustophilus exhibits "ACNase" activity. Also, alanine substitutions of the three amino acids of the triad abolish RloC ACNase activity. However, RloC differs from PrrC in several respects: (1) the RloC substrate is still uncertain, but it is not tRNA Lys ; (2) the RloC ACNase actually excises the wobble nucleotide rather than simply cleaving upstream of it; and (3) like the other RloC orthologues, the G. kaustophilus protein is larger than PrrC because the N-terminal NTPase domain is interrupted by a large coiled-coil segment similar to sequences found in proteins implicated in DNA repair. This segment contains a typical "zinc hook" motif able to coordinate Zn 2+ ions. Mutations in the zinc-hook motif lead to increased ACNase activity and, conversely, Zn 2+ ions are inhibitory [ 120 ].
The RloC proteins show quite interesting new properties that raise several questions. a) Is the RloC-dependent ACNase normally maintained in a latent, inactive form that is activated upon phage infection? b) Since RloC excises the wobble nucleotide, is there a phage that repairs this lesion? If not, this would be an efficient mechanism of cellular defense against phages. c) Are there stress conditions, unrelated to phage infection, that elicit RloC ACNase activation? d) Are the RloC proteins associated with restriction-modification proteins? If so, do they respond to the presence of DNA? By analogy with the PrrC ACNase, Kaufmann and colleagues [ 120 ] speculate that, in addition to conferring a mechanism of phage exclusion, the RloC proteins couple DNA damage that occurs under stress conditions to translation inactivation via tRNA cleavage. Their model is based on two main observations: a) some proteins containing zinc-hook/coiled-coil domains are implicated in DNA repair; and b) DNA damage leads to alleviation of restriction by type Ia and Ic enzymes, a process aimed at protecting unmodified, newly synthesized DNA during repair and recovery from DNA damage. The model assumes that the RloC protein would sense DNA damage signals via its zinc hook and would convey activation to the ACNase domain, possibly via conformational changes driven by NTP hydrolysis. Such a model requires demonstrating a link between RloC proteins and DNA. T4 exclusion by Gol-activated proteolysis of EF-Tu Translation elongation is targeted in the T4 gol - lit phage exclusion system [ 23 , 110 ]. Inhibition of translation occurs when T4 gene 23 (encoding the major head protein) is translated during infection of E. coli cells that harbor the defective prophage e14. The e14 element carries the lit gene, which encodes a latent protease that, somewhat similar to the allosteric activation of latent PrrC ACNase activity, is active on EF-Tu when the so-called gol region of gene 23 is translated.
Biochemical analyses of Lit/Gol/EF-Tu interactions have revealed the process by which phage exclusion occurs through proteolysis of EF-Tu. A short 29-residue region of the gp23 polypeptide defines Gol function, but a more stable interaction with EF-Tu appears to involve 100 amino acids from the first quarter of gp23. Scanning mutagenesis showed 13 residues in a 20-amino-acid core region of Gol to be most important for its activity [ 124 ]. Binding of Gol to EF-Tu is required to promote Lit reactivity. By binding to domains II and III of EF-Tu, the Gol peptide promotes Lit-mediated hydrolysis of EF-Tu between Gly59 and Ile60. Binding of the Gol peptide is preferential for the open EF-Tu:GDP complex, and binding itself inhibits the EF-Tu GTPase of domain I. When Gol is bound to EF-Tu, it appears that EF-Tu domain I is more accessible to Lit, leading to "substrate-assisted" or "cofactor-induced" activation of cleavage by the protease [ 124 , 125 ]. Lit is a zinc metalloprotease with the active site motif HEXXH of this protease class, but Gol does not contribute directly to active site residues [ 124 - 126 ]. Kleanthous and colleagues [ 124 ] have noted that the gp23 Gol region is the most conserved region of the overall conserved gp23 major head protein among sequenced T4-related phages. They suggest that gp23 of these phages interacts with EF-Tu of all the respective hosts. While Gol interactions may be broadly relevant for translation and folding of this extremely abundant capsid protein, other extant prophage-encoded, Lit-type proteases may also elicit "cellular suicide" via Lit/Gol/EF-Tu proteolytic assemblies. Programmed translational bypassing The topoisomerase of phage T4 is encoded by three genes: 39 , 60 and 52 . Most type II topoisomerases are composed of two distinct subunits (e.g., GyrA and GyrB of DNA gyrase) that are assembled as tetrameric A 2 B 2 enzymes.
The adjacent T4 genes 39 and 60 are separated by 1010 nucleotides that include an apparently defective HNH homing endonuclease gene ( mobA ) and ORF 60.1 [ 95 ] (see [ 127 ] for a recent summary). Following their respective translation, gp39 and gp60 assemble to comprise the "gyrB-like" large, ATP-hydrolyzing subunit of the T4 topoisomerase. In all other T4-related phages sequenced to date, this subunit is encoded by a single open reading frame that is typically annotated as gene 39 (Figure 7 ). An interesting, post-transcriptional feature in this region of the T4 genome that has received considerable attention is the presence of a 50 nucleotide "intervening sequence" in the 5' coding region of gene 60 that is transcribed into mRNA but is not translated into gp60. The ribosome "hops" or "bypasses" the extra 50 bases in the mRNA to produce the gp60 polypeptide in a process termed programmed translational bypassing [ 128 , 129 ]. In all other T4-related phages, not only are genes 39 and 60 joined as a single gene (they lack the mobA - 60.1 insertion), but none appears to have the intervening gap nucleotides that would suggest programmed translational bypassing. The process appears to be unique to T4 gene 60 , but its study has shed new light on the mechanisms of translation. Atkins, Gesteland and colleagues at the University of Utah have studied many of the features that promote programmed translational bypassing by E. coli ribosomes on this unique T4 mRNA; the reader is urged to read further details through their primary research articles and reviews on the topic [ 129 - 131 ]. It is important to emphasize that experiments elucidating processes affecting gene 60 translational bypassing have provided new insights to the general mechanisms of mRNA decoding, including the roles of mRNA sequence and structure, peptidyl-tRNA interactions within the ribosome, occupancy of ribosome decoding sites, and many other features of translating ribosomes. 
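The take-off/landing logic of the bypass described above can be illustrated with a toy simulation. The mRNA string, the minimal codon table and the scanning rule below are assumptions for illustration only, not the real gene 60 sequence or a model of ribosome biophysics:

```python
# Toy model of T4 gene 60-style programmed translational bypassing.
# A GGA "take-off" codon immediately followed by a UGA stop triggers a
# scan for the next GGA "landing site"; translation resumes after it.
CODON = {"AUG": "M", "CCU": "P", "GGA": "G", "UUA": "L", "UGA": "*"}

def translate_with_bypass(mrna):
    peptide, i = [], 0
    while i + 3 <= len(mrna):
        codon = mrna[i:i + 3]
        if codon == "GGA" and mrna[i + 3:i + 6] == "UGA":
            peptide.append("G")          # Gly from the take-off codon
            j = mrna.find("GGA", i + 3)  # ribosome scans the gap
            if j == -1:
                break                    # no landing site: drop off
            i = j + 3                    # land; resume at the next codon
            continue
        aa = CODON.get(codon, "?")
        if aa == "*":
            break
        peptide.append(aa)
        i += 3
    return "".join(peptide)
```

With an invented 30-nucleotide mRNA containing a take-off GGA, a stop codon, a 9-nucleotide gap and a landing GGA, the simulated ribosome fuses the two ORF segments into one peptide, mirroring how gp60 is produced from a discontinuous reading frame.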
Figure 7 summarizes the major components of the T4 programmed translational bypass in gene 60 transcripts, as elaborated by the Utah group. Some of the principal features include: a domain of the nascent gp60 polypeptide preceding the hop, the glycine 46 codon GGA at the end of the initial ORF (the "take-off site"), a UGA stop codon in the gap right after the GGA take-off codon, a stem-loop mRNA structure with a stabilizing tetraloop that is formed by gap RNA nucleotides, the peptidyl-tRNA 2 Gly occupying the P site of the ribosome, rRNA:mRNA interactions during scanning of the gap by the ribosome, the L9 subunit of the ribosome, and the distal GGA codon after the 50 gap nucleotides (the "landing site") [ 127 , 132 - 138 ]. Protein fusions, mass spectrometry, targeted mutations and a number of analyses have combined to address the roles of each component, leading to an approximately 50% efficiency of ribosomes bypassing the intervening 50 nucleotides to land correctly at the downstream GGA codon. Translation then resumes as the next UUA codon and cognate tRNA enter the A site. Again, each of these features sheds new light not only on mechanisms of translational bypassing and aspects of "re-programming" the basic genetic code, but also on the dynamics and numerous interactions occurring in all translating ribosomes. It will be interesting to see whether instances of programmed translational bypassing occur in other genes of the many T4-related bacteriophages. ADP-ribosyltransferases in post-transcriptional control T4 encodes three enzymes that covalently modify proteins via ADP-ribosylation during the infection cycle: Alt, ModA and ModB. Alt is injected with phage DNA to immediately initiate ADP-ribosylation of one of the α-subunits (at arginine 265) of RNA polymerase, and by about 4 minutes post-infection newly synthesized ModA completes modification of both α-subunits at the same arginine.
The biochemistry of T4-directed ADP-ribosylation of RNA polymerase and its impact on phage promoter selection have been reviewed [ 2 , 139 ]. Other E. coli proteins have been recognized as undergoing ADP-ribosylation during T4 infection, which primarily appears to be due to the activity of Alt and ModB. Alt ADP-ribosylates the β, β' and σ subunits of RNA polymerase and also other host proteins. The modifications also include proteins of the translation apparatus, as shown by Rüger and colleagues using the purified proteins [ 140 ]. An in vitro system incubated with total E. coli proteins (cell extract), purified Alt or ModB, and 32 P[NAD + ] showed with mass spectrometry that ADP-ribosylation occurred on as many as 27 proteins by Alt and on approximately 8 proteins by ModB [ 141 ]. For Alt, these included EF-Tu, trigger factor, prolyl-tRNA synthetase and GroEL, which are known to have important roles in translation or protein folding. ModB also ADP-ribosylates EF-Tu and trigger factor, as well as ribosomal protein S1 [ 142 ]. For trigger factor (a chaperone of newly translated proteins), arginine 45 is ADP-ribosylated by ModB, but not by Alt, which must target a different, as yet unidentified amino acid [ 141 ]. Arg45 lies in the Phe44-Arg45-Lys46 domain that interacts with ribosomal protein L23, and thus its modification might affect ribosome conformation and translation, as certainly would modifications to EF-Tu and S1. Early studies noted rapid and immediate shut-off of host mRNA translation during T4 infection [ 143 ], but no clear mechanism has been elucidated. The identification of pivotal proteins in the translation apparatus as targets of T4 ADP-ribosyltransferases, together with the observed delivery of Alt into the cell with the injected DNA and the lethality to the cell of over-expressed ModB [ 142 ], suggest that mechanistic studies into the impact of these T4 enzymes on translation of host and phage mRNAs are warranted.
Competing interests The authors declare that they have no competing interests. Authors' contributions MU and ESM wrote the manuscript and approved the final version.
Acknowledgements We thank Virology Journal for their support of this series on phage T4 and its relatives.
Virol J. 2010 Dec 3; 7:360
PMC3014916
21134292
Background Over the years, influenza has become a serious public health problem. With the potential for sudden outbreaks, rapid spread, and high incidence of complications, the prevalence of influenza infections has caused tremendous loss of human life and material resources [ 1 , 2 ]. Thus, it is important to develop new approaches towards preventing seasonal infections as well as potential pandemics of influenza. Based on their internal protein antigens, influenza viruses can be divided into 3 types: A, B, or C. The surface antigens, hemagglutinin (HA) and neuraminidase (NA), are also used to identify different subtypes. At present, the prevalent human influenza viruses are the type A H3/H1 and type B viruses. However, in recent years, multiple subtypes (H5/H7/H9) of the avian influenza virus (AIV) have been able to cross the species barrier to infect humans [ 3 , 4 ]. Around the world, the highly pathogenic avian influenza virus subtype H5N1 has caused infectious outbreaks in various human populations [ 5 ]. Influenza vaccines based on the conventional subtypes of each species have been unable to effectively prevent this rising trend. Creating vaccines which can provide long-term protection against more than one subtype of influenza has become a hot topic in vaccine development. However, due to the rapidly changing influenza virus and the phenomena of "antigenic shift" and "antigenic drift", developing a vaccine that can protect against all possible circulating viruses is extremely challenging. Immunogenic epitopes in an antigen are determined by major histocompatibility complex (MHC) class I for cytotoxic T lymphocytes (CTL) and MHC class II for T helper (Th) cells. These polymorphic MHC molecules present short peptides that are processed after an exogenous antigen (such as a viral protein) is taken up by antigen presenting cells (APC) such as macrophages and dendritic cells.
These APC then "present" the peptide to the immune cells that recognize the MHC/peptide complex via the T cell receptor (TCR) or B cell receptor (BCR). Theoretically, given any set of MHC II restricted peptides presented to the Th cells, the optimal sequences would be those that could also stimulate B cells to produce antibodies, since activation of antigen-specific Th cells also promotes antibody production. By understanding the specific epitopes from pathogens that can stimulate optimal immune responses, we will better understand how to tailor vaccines to a specific population and/or pathogen. Indeed, many studies have shown the efficacy of peptide-based vaccines in animal models [ 6 ], as well as in clinical studies against infectious diseases, including malaria [ 7 , 8 ], hepatitis B [ 9 ] and HIV-1 [ 10 , 11 ]. Development of an epitope-based vaccine for influenza may also be a useful strategy for overcoming the challenge of inducing a specific immune response against this constantly evolving virus. CTL epitopes mediate cytolytic effects on infected cells and induce inflammatory factors during viral clearance, while B cell epitopes can induce protective antibody-mediated humoral immune responses. Th epitopes can activate CD4+ T cells to carry out important immune regulatory functions, and the identification of specific epitopes derived from influenza virus has significantly advanced the development of peptide-based vaccines [ 12 - 15 ]. Improved understanding of the molecular basis of antigen recognition and human leukocyte antigen (HLA) binding motifs has allowed the development of rationally designed vaccines based on motifs predicted to bind to human class I or class II MHC. Therefore, identification of the corresponding functional influenza epitopes will have important theoretical and practical value in studies on immunity against virus infection and on vaccine development.
Presently, standard inactivated vaccines based on one or a few circulating strains are mainly utilized for prevention of influenza infection, but they cannot effectively deal with the current trend of increasing variations of the circulating viruses. A new influenza vaccine that can afford long-term and cross-species protection against multiple subtypes of influenza is imperative. Developing DNA vaccines that can stimulate both humoral and cellular immunity is a promising area of research. In particular, a multi-epitope DNA vaccine which expresses antigen genes in tandem can efficiently present the defined protective epitopes to stimulate the immune system while eliminating non-essential components or potentially toxic fragments of traditional inactivated vaccines. Additionally, the development of such multivalent vaccines can be combined with other vaccine antigens to enhance immunogenicity. The advantage of combination vaccines is that they can potentially provide broader coverage to protect against rapidly mutating viruses such as influenza. We report here the generation and evaluation of the immunogenicity of a DNA vaccine expressing HA based on human influenza H3/H1 combined with a class II MHC multi-epitope antigen (hereafter referred to as the "multi-epitope" vaccine). The vaccine was evaluated for induction of humoral and cellular immune responses in a mouse model as well as for its protective efficacy against lethal H1N1 subtype virus challenge. We expected the vaccine targeted towards human influenza subtypes H3 and H1 to provide total protection against these strains while at the same time achieving some level of protective efficacy against other influenza subtypes. This approach may be effective against rapidly mutating influenza and provide longer-term protection while laying the foundation for development of a new universal influenza vaccine.
Materials and methods Mice, viruses and cells Female BALB/c mice (6-8 weeks old) were used for immunization and challenge studies. All mice were maintained with free access to sterile food and water. A/New Caledonia/20/99 influenza virus (H1N1) (GenBank CY033622 ) and A/Wisconsin/67/2005 (H3N2) strains were stored in the laboratory. Virus stocks were propagated in the allantoic cavity of 10-day-old embryonated chicken eggs for 48 h at 37°C. The viruses were titrated by the Reed and Muench method to determine the median lethal dose (LD 50 ). Baby hamster kidney (BHK-21) cells were used for transient expression experiments. All experiments with influenza viruses were conducted under BSL-3 containment, including work in animals. Design of epitope box and synthetic peptides The HA gene sequences of the influenza H5, H7, H9 subtypes which have crossed species barriers to infect mammals and become vaccine strains were downloaded from NCBI http://www.ncbi.nlm.nih.gov with the following main reference sequence accession numbers, respectively: ISDN125873 (A/Indonesia/5/05(H5N1)), AAR02636 [A/Netherlands/127/03(H7N7)], and DQ997437 [A/swine/Shandong/nc/2005(H9N2)]. MHC II restricted epitopes were predicted bioinformatically by the network servers SYFPEITHI and Multipre, and B cell epitopes were predicted using the network server BCEPRED http://www.imtech.res.in/raghava/bcepred/ or the Biomolecule simulation software Insight II (Accelrys, 2005). Th cell epitope predictions were based upon their cumulative binding affinity to six of the most common HLA-DRß1 alleles (DRß1*0101, DRß1*0301, DRß1*0401, DRß1*0701, DRß1*1101, and DRß1*1501). The network server BCEPRED was used for linear B cell epitope prediction, which screens sequences based on hydrophilicity, accessibility, flexibility, antigenicity, polarity, and exposed surface residues.
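The Reed and Muench titration mentioned above estimates the 50% endpoint by interpolating cumulative mortality across a dilution series. A minimal sketch follows; the dilution series and mortality counts are invented for illustration, not data from this study:

```python
def reed_muench_ld50(log10_doses, dead, alive):
    """Reed-Muench log10 LD50 for doses sorted in ascending order.
    Assumes animals dying at a dose would die at all higher doses,
    and survivors would survive all lower doses."""
    k = len(log10_doses)
    cum_dead = [sum(dead[:i + 1]) for i in range(k)]    # accumulate upward in dose
    cum_alive = [sum(alive[i:]) for i in range(k)]      # accumulate downward in dose
    pct = [100.0 * d / (d + a) for d, a in zip(cum_dead, cum_alive)]
    for i in range(k - 1):
        if pct[i] < 50.0 <= pct[i + 1]:                 # bracket the 50% endpoint
            pd = (50.0 - pct[i]) / (pct[i + 1] - pct[i])
            return log10_doses[i] + pd * (log10_doses[i + 1] - log10_doses[i])
    raise ValueError("mortality does not cross 50% in this series")
```

For example, with four ten-fold dilutions and six mice per group showing 0, 2, 4 and 6 deaths, the cumulative percentages bracket 50% between the middle dilutions and the estimate falls midway between them.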
The Th epitope prediction was narrowed down to include, as much as possible, epitopes that overlapped with the predicted B cell epitopes in order to obtain epitopes with the dual function of stimulating both T and B cells. The Th/B cell epitope box was designed with "GPGPG" linkers between each epitope in order to reduce interference between epitopes and to ensure the proper processing and function of each epitope independently. The "KK" linker was also added to prevent the epitopes between subtypes from "splitting", that is, to avoid generation of new junctional epitopes [ 16 , 17 ]. Based on the design of the epitope box, the nucleotide sequences were codon-optimized and the peptides synthesized accordingly (Xu Guan Biological Engineering Co., Ltd. Shanghai, PR China). Peptides were dissolved in 20% DMSO and frozen at -80°C until use. Construction of plasmids The HA genes of influenza A/New Caledonia/20/99 [H1N1] and A/Wisconsin/67/2005 [H3N2] were obtained by RT-PCR amplification of the isolated RNA. The H3HA, H1HA1 and the epitope box (termed EHA) sequences were inserted into the pMD18-T vector after addition of restriction sites Nhe I/ Hind III, Cla I, Xho I, Cla I/ Xho I and Hind III/ Xho I, respectively, to yield pMD18-H3HA, pMD18-H1HA1 and pMD18-EHA. A eukaryotic expression vector, pVAX1 (Invitrogen, Carlsbad, CA, USA), was used to construct the following DNA vaccine vectors: pV-H3HA, pV-H1HA1, pV-H3-H1 and pV-H3-EHA-H1. The four DNA constructs were sequenced to confirm cloning accuracy before amplification in Escherichia coli JM109 and purification using endotoxin-free kits (QIAGEN, Valencia, CA). The final DNA preparations were resuspended in sterile saline solution and stored at -20°C until further use. Indirect immunofluorescence assay BHK-21 cells were transfected with purified DNA from pV-H3HA, pV-H1HA1, pV-H3-H1, pV-H3-EHA-H1 and pVAX1 using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol.
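The linker scheme of the epitope box described above (GPGPG spacers between epitopes, KK spacers to separate subtype blocks) can be expressed as a short assembly routine. The placeholder epitope strings and the exact placement of the KK linker are illustrative assumptions; the real sequences are the nine epitopes of Table 1:

```python
def build_epitope_box(subtype_epitopes, intra="GPGPG", inter="KK"):
    """Join the epitopes of each subtype with GPGPG spacers, then join
    the subtype blocks with KK spacers (arrangement assumed)."""
    blocks = [intra.join(eps) for eps in subtype_epitopes]
    return inter.join(blocks)
```

For instance, two hypothetical H5 epitopes followed by one H7 epitope assemble as epitope-GPGPG-epitope-KK-epitope, and the resulting amino acid string would then be back-translated and codon-optimized for the expression host.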
In brief, cell monolayers were grown on glass coverslips in a 6-well plate and were then transfected with the plasmid DNA (10 μg/well). At 48 h after transfection, the cells were fixed with 0.05% glutaraldehyde and permeabilized with 0.5% Triton X-100 in phosphate-buffered saline (PBS), followed by incubation with rabbit anti-HA of A/New Caledonia/20/99 (H1N1), A/Wisconsin/67/2005 (H3N2), A/Indonesia/5/05(H5N1), A/Netherlands/127/03(H7N7), A/swine/Shandong/nc/2005(H9N2) polyclonal antibody [1:200 in PBS with Tween 20 (PBST)] for 1 h at 37°C. Fluorescein isothiocyanate (FITC)-conjugated goat anti-rabbit IgG antibodies [in PBS/bovine serum albumin (BSA)] were added and then incubated for 1 h at room temperature. After mounting the samples, fluorescence images were scanned using an Olympus microscope (BX51; Olympus, Japan). Immunization and virus challenge In challenge experiments, four DNA vaccine groups and an empty vector pVAX1 control group of female BALB/c mice (n = 15) were immunized intramuscularly (IM, each with 100 μg of plasmid DNA in 100 μL of PBS [pH 7.4] in the two hind quadriceps). The immunization schedule consisted of 2 administrations with a 3-week interval, and bleeding was performed at 0, 1, 2, 3, 4 and 5 weeks after immunization for determination of antibody titers. To assess the efficacy of the cross-protective immunity of the 2 vaccine doses against lethal challenge 2 weeks after the second immunization, the immunized mice were anesthetized and intranasally challenged with 10 LD 50 (50% lethal doses) of the A/New Caledonia/20/99 H1N1 virus in a final volume of 100 μL. The challenge experiments were performed in a biosafety level 3 (BSL3) facility (Military Veterinary Institute, Changchun, PR China). Viral lung titer measurements To determine tissue viral titers, the lungs of surviving mice challenged with H1N1 were collected and homogenized by mechanical disruption.
The viral titers were determined by plaque formation assay performed in MDCK cells in the presence of trypsin as previously described [ 18 , 19 ]. Serum cytokine assays A pre-coated enzyme-linked immunosorbent assay (ELISA) kit (Dakewe Biotech, PR China) was used to determine the levels of interferon (IFN)-γ and interleukin (IL)-4 in the immunized mice according to the manufacturer's instructions. Serum samples (100 μl) from different groups of mice were tested in duplicate. After 36 h of incubation with the standards and samples, the plates were washed, followed by addition of 50 μl of the Streptavidin-HRP solution. The plates were incubated at 37°C for 60 min before washing again at least five times, with a 1-2 min interval between each wash. The diluted substrate was added at 50 μl per well and incubated at 37°C for 15 min. Finally, 50 μl of stop solution were added per well to terminate the reaction. Absorbance values were measured at 450 nm. Standard curves were drawn according to the instructions of the kits. The cytokine levels in the samples were calculated accordingly, expressed as ΔX ± SD, and differences between groups were analyzed statistically. IFN- γ ELISpot assays The frequencies of IFN-γ secreting splenocytes were analyzed using a commercially available mouse IFN-γ pre-coated ELISpot assay according to the instructions of the manufacturer (Dakewe Biotech, PR China). Lymphocytes from the spleen were removed aseptically 10 days after a boost immunization, and a single cell suspension (10 6 cells/well) was prepared and stimulated with 20 μg/ml of the inactivated whole virus antigen preparations of A/New Caledonia/20/99 (H1N1) and A/Wisconsin/67/2005 (H3N2) or the following HA antigen peptides (20 μg/ml) of A/Indonesia/5/05(H5N1), A/Netherlands/127/03(H7N7), and A/swine/Shandong/nc/2005(H9N2): H5HA 141-155 , H5HA 206-223 , H5HA 302-316 , H7HA 165-181 , H7HA 255-269 , H7HA 182-196 , H9HA 123-140 , H9HA 73-90 , H9HA 37-54 .
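Reading sample cytokine concentrations off an ELISA standard curve, as described above, amounts to interpolating the sample absorbance against the standards. A minimal piecewise-linear sketch with invented standard values (commercial kits often fit a four-parameter logistic curve instead):

```python
def interp_concentration(od, std_od, std_conc):
    """Linearly interpolate a sample concentration from a standard curve.
    std_od must increase monotonically with std_conc."""
    for (od_lo, od_hi), (c_lo, c_hi) in zip(zip(std_od, std_od[1:]),
                                            zip(std_conc, std_conc[1:])):
        if od_lo <= od <= od_hi:
            frac = (od - od_lo) / (od_hi - od_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("OD outside the standard curve range")
```

A sample absorbance falling halfway between two standards is assigned the concentration halfway between their values; duplicate wells would be interpolated separately and averaged.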
The plates were placed in a CO 2 incubator at 37°C. The following day, the splenocytes were discarded, and the plates were extensively washed with pre-chilled PBS. IFN-γ spots were detected by a biotinylated anti-mouse IFN-γ specific antibody, followed by addition of streptavidin-horseradish peroxidase (HRP) and development with 3-amino-9-ethylcarbazole (AEC) substrate solution. The spots were counted using an automated ELISpot reader. The results were expressed as the number of spot-forming cells (SFC)/10 6 spleen cells. P -values were calculated using a permutation test stratified for the experiment. Antibody detection Virus antigen specific serum antibodies were detected by ELISA. The inactivated H1N1 and H3N2 viruses (50 ng/well) or standard antigens of the H5, H7 and H9 subtypes were coated overnight in 96-well plates (Costar, Cambridge, MA, USA). Following blocking of non-specific binding, the serum samples were diluted 100 times in PBS containing 0.5% (wt/vol) gelatin, 0.15% Tween 20, and 4% calf serum (ELISA diluent) and applied in duplicate wells for a 1 h incubation at 37°C. The plates were washed five times with PBS and then reacted with a 1:2000 dilution of HRP-labeled goat anti-mouse IgG (Zhongshan Goldenbridge Biotech) for 1 h at 37°C. After another five washes with PBS, the substrate was added (10 mg ortho-phenylenediamine [OPD] + 20 mL 0.015% hydrogen peroxide in phosphate/citrate buffer). After incubation for 15 min at 37°C, the reactions were terminated with 2N H 2 SO 4 . Subsequently, the absorbance values were determined at 492 nm using a Sunrise automated plate spectrophotometer and analyzed with Microsoft Excel 2007 for Windows. P -values were calculated to detect significant differences among the groups. Statistical analysis The Lifetest procedure using the Kaplan-Meier method and log-rank test were applied for survival analyses between study groups (H1N1 survival study).
All tests applied were two-tailed, and P -values of 5% or less were considered statistically significant. Data were analyzed using SPSS Version 16.0.
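The Kaplan-Meier estimate used for the survival analyses can be sketched directly; the observation times and event flags below are hypothetical, not the study data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: observation times;
    events: 1 = death observed, 0 = censored (e.g., survived the study).
    Returns the step curve as (time, S(t)) pairs at each death time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= (at_risk - deaths) / at_risk   # product-limit update
            curve.append((t, s))
        at_risk -= deaths + censored
    return curve
```

With a hypothetical group of ten mice in which two die on day 6 and the other eight are censored at the end of a 14-day observation, the estimated survival steps once, to 0.8, at day 6; a log-rank test would then compare such curves between the vaccine groups.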
Results Selection of epitopes SYFPEITHI and Multipre were used in different algorithms to predict Th epitopes. BCEPRED is an improved linear B cell epitope prediction method that utilizes multi-parameter analysis to predict potential B cell epitopes. Comprehensive analyses of both Th and B cell epitopes were performed to obtain a set of epitopes in which the predicted Th epitopes would also contain potential B cell epitopes. The selected epitope regions were re-evaluated for their spatial conformation and specificity to determine the final epitopes (Figure 1A-C ). A final total of 9 Th and B cell epitopes were obtained for the H5, H7 and H9 subtypes of influenza (Table 1 ), and the corresponding peptides were synthesized with BSA conjugated at the C terminus. Construction of the expression plasmid and immunogenicity assay Before testing the immunogenicity of the vaccines, the four DNA vaccine constructs (Figure 2 ) were confirmed by sequencing. Protein expression from these constructs was also verified by transfection of pV-H3HA, pV-H1HA1, pV-H3-H1, pV-H3-EHA-H1 and pVAX1 (empty vector control) in BHK-21 cells, and the HA antigens and epitopes were detected by an immunofluorescence assay with HA anti-serum at 48 h post-transfection (Figure 3 ). The results indicated that the pV-H3HA, pV-H1HA1, pV-H3-H1 and pV-H3-EHA-H1 plasmids could successfully express their corresponding proteins and multi-epitopes, thereby validating the use of the plasmids in subsequent experiments. Analysis of cytokine levels Fourteen days after the boost immunization, the sera were collected and analyzed for IFN-γ and IL-4 levels by ELISA (Figure 4A, B ). The order of the IFN-γ levels detected in the immune sera was as follows: multi-epitope immune group (pV-H3-EHA-H1, 463) > two-subtype co-expression immune group (pV-H3-H1, 435) > H1 group (pV-H1HA1, 410) > H3 group (pV-H3HA, 398) > pVAX1 control group (201).
The serum IFN-γ levels of the immunized groups were significantly higher ( P < 0.01) than that of the control group, indicating that all of the vaccines tested effectively stimulated Th1 type responses. The multi-epitope group displayed the highest level of IFN-γ secretion, although the difference was not significant compared with the other 3 groups ( P > 0.05). As for the detection of IL-4 levels in the immune sera, the order was determined as follows: multi-epitope immune group (pV-H3-EHA-H1, 654) > two-subtype co-expression immune group (pV-H3-H1, 431) > H3 immune group (pV-H3HA, 383) > H1 immune group (pV-H1HA1, 315) > pVAX1 control group (188). The IL-4 levels of the immune groups were also significantly higher ( P < 0.01) than that of the pVAX1 control group. The IL-4 levels in the serum of the multi-epitope group were significantly higher than those of the other immune groups ( P < 0.05), indicating that the immunized groups had significantly enhanced Th2 cell function. Combined with the analysis of IFN-γ levels above, these findings demonstrated that the multi-epitope vaccine induced the greatest levels of vaccine specific immune responses in mice, and the use of these epitopes tended to produce Th2 cytokines and promote humoral immunity. Cell-mediated immune responses induced in DNA plasmid immunized mice To evaluate the cellular responses to vaccination, splenocytes were harvested from five immunized mice from each group at 35 days after vaccination. Representative data from three repeated ELISpot assays detecting IFN-γ secretion from virus or peptide stimulated splenocytes are shown in Figure 5 . Significant IFN-γ responses were observed in the immunized group as compared to cells from non-immunized mice following in vitro incubation with whole inactivated viruses [influenza virus A/Wisconsin/67/2005 (H3N2) and influenza virus A/New Caledonia/20/99 (H1N1)] (Figure 5A ).
The multi-epitope DNA group (pV-H3-EHA-H1) produced the most spots under the stimulations described above, but the differences were not significant ( P > 0.05) compared to other immunized groups. However, the multi-epitope DNA group did have significantly ( P < 0.05) higher levels of IFN-γ secretion than the other groups in response to peptide antigens (Figure 5B ). Antibody responses induced in DNA vaccine immunized mice In evaluating the development of virus-specific IgG against the H3 and H1 subtypes of influenza by ELISA (Figure 6A, B ), the antibodies were detectable from the first week after immunization, rose rapidly after the second week to a peak, and then decreased slightly from the third week. At 35 days post-inoculation (DPI), virus specific antibody levels in all immunization groups were significantly higher than that in the control group ( P < 0.01), but the levels of the different vaccine groups were not significantly different from each other ( P > 0.05). That is, the IgG antibody levels induced were equivalent between the multi-epitope vaccine group (pV-H3-EHA-H1), the single antigen expressing groups (pV-H3HA and pV-H1HA1) and the dual antigen expressing group (pV-H3-H1). From the analysis of H5, H7, H9 subtypes of influenza virus-specific IgG antibodies (Figure 6C ), the various vaccine groups generated significantly higher antibody levels than the control group in the ELISA ( P < 0.01) at 35 DPI. Furthermore, the Th/B multi-epitope group (pV-H3-EHA-H1) had significantly higher H5, H7 and H9 subtype IgG levels than the other immunized groups (pV-H3HA, pV-H1HA1 and pV-H3-H1), suggesting that the selected epitopes could effectively stimulate virus-specific antibodies. The highest antibody levels detected by ELISA were against the H5 epitopes, suggesting that this vaccine would theoretically be more effective against this virus subtype in mice.
Protection against lethal dose challenge with influenza H1N1 To test the efficacy of the vaccines, BALB/c mice (6 weeks) were immunized IM with 200 μg of each vaccine and challenged with 10 LD 50 of the A/New Caledonia/20/99 (H1N1) influenza strain. Their survival rates were monitored for the following 14 days. The mice began to show clinical signs or death from influenza infection on day 5 post-challenge. Figure 7 shows the survival curves following these immunizations, and the final survival rates of the pV-H1HA1, pV-H3-H1, and pV-H3-EHA-H1 immunized mice were 90%, 80%, and 100%, which were significantly higher ( P < 0.01) than those of the pV-H3HA and pVAX1 control groups (20% and 0%, respectively; Figure 7 ). Immunization with the multi-epitope vaccine resulted in complete protection against the lethal dose virus challenge, which was better than the single expression (pV-H1HA1) and co-expression (pV-H3-H1) immunized groups. Lung viral titers The lungs were harvested from mice which survived the viral challenge, and viral titers were determined by plaque formation assays in MDCK cells. Because almost all the mice which received the pVAX1 control and pV-H3HA immunizations did not survive, lungs from dead mice in these groups had to be selected to test for viral titers. As expected, the mice of the pVAX1 control group and pV-H3HA group all had positive viral titers in the lung. By contrast, no measurable virus titers were detected in the lungs in the multi-epitope immunized group, and somewhat lower levels of virus (expressed as plaque forming units, or PFU) were observed in the pV-H3-H1 and pV-H1HA1 immunized groups (Figure 8 ).
Discussion In light of the recent 2009 H1N1 pandemic, there is an urgent need to develop new influenza vaccines. New influenza vaccines should have the following characteristics: low cost, high level immunogenicity, rapid preparation, protection against rapid virus mutation and long-term protection against multiple subtypes of influenza, especially against potential influenza pandemic strains. The purpose of this study was to further develop and evaluate a novel approach to vaccination based on multi-subtype influenza epitopes using mice as a mammalian model. We assessed the immunogenicity of an H3/H1-derived multi-epitope DNA vaccine and its protective efficacy against H1N1 virus challenge. The experiment was set up to systematically compare the specially designed multi-epitope vaccine with the separate antigen components of the immunized groups and a control group. The MHC II molecule pathway of antigen processing is first activated in an APC by phagocytosis, pinocytosis, or receptor-mediated endocytosis of an exogenous antigen. The products of phagolysosomal digestion are processed into linear epitopes that later associate with MHC II molecules to be presented on the surface of the APC. Recognition between Th cells and APC, the ensuing signal transduction, and the resulting Th cell activation play important roles in the initiation of an acquired immune response, maintenance of responses in chronic infections and development of immune memory. Activated Th cells produce cytokines that can effectively regulate cytotoxic T cells, B cells and phagocytic cell functions [ 20 ]. B cell epitopes form the basis of humoral immunity in that they determine the specificity of antibodies. B cells can capture antigens through the BCR and function as APC to activate Th cells. Activated Th cells can also activate B cells in turn to produce antibodies against the corresponding antigen. An ideal immunogenic epitope is one that elicits responses from both Th and B cells [ 21 ].
Therefore, the purpose of this study was to design a vaccine with a minimal set of epitopes that are predicted to cross-stimulate both Th and B cell subsets. IFN-γ is the defining Th1 type cytokine, with important immunoregulatory functions including the ability to activate macrophages, induce monocyte cytokine secretion, affect the body's Th1/Th2 balance, regulate antigen presenting cells, and significantly increase MHC-I and MHC-II molecule expression [ 22 ]. IL-4 is the representative Th2 type cytokine, with the ability to promote B cell proliferation and antibody production [ 23 ]. In this study, we examined these two cytokines to evaluate the immune bias induced by the multi-epitope vaccine. Our analyses revealed that the IFN-γ level of the multi-epitope vaccine immunized group was the highest, indicating effective stimulation of the Th1 response, although it was not significantly different from the other three vaccine groups ( P > 0.05). However, the stimulation of IL-4 levels by the multi-epitope vaccine was significantly higher than in the other vaccine groups ( P < 0.05), indicating that the multi-epitope pV-H3-EHA-H1 vaccine could enhance Th2 cell immune function. Therefore, the multi-epitope vaccine tended to induce a dominant Th2 response which could then promote humoral immune responses. Production of antibodies against viral infection is an important effector function, and the level of humoral immunity reflects the ability and strength of the body to block infection, to rid itself of the virus if infection occurs, or at least to prevent tissue damage [ 24 ]. Overall, the four DNA vaccines were effective in stimulating specific antibodies against the major antigens from the H3 and H1 subtypes of influenza in mice. Specific antibody levels increased with the number of immunizations, rapidly rising after the boost immunization.
At 35 DPI, the multi-epitope immunization group had virus-specific antibody levels equivalent to the H1 and H3 immune groups, suggesting that epitopes from the structural HA antigen are still a major determinant of specific antibody production. Analysis of virus-specific IgG against epitopes associated with the H5, H7 and H9 subtypes of influenza showed that the multi-epitope immunized group had an advantage over the pV-H1HA1, pV-H3HA and pV-H3-H1 groups ( P < 0.05), which confirmed that epitopes in a multi-epitope vaccine could induce functional antibodies, especially those of the H5 subtype. The DNA vaccines also effectively stimulated cellular immune responses in mice as evaluated by the IFN-γ ELISpot assays. For the H3 and H1 IFN-γ responses, the multi-epitope group produced the highest number of spots, but was not significantly different from the single antigen expressing and the dual antigen expressing groups ( P > 0.05). When stimulated with peptides associated with the H5, H7 and H9 antigens, the splenocytes of the multi-epitope vaccine group produced the highest number of IFN-γ spots, which was significantly different from that of the other groups. Together these results indicate that the HA subtype and the specific epitopes may play major roles in inducing Th1 cellular immune responses. Further evidence of the efficacy of the multi-epitope vaccine was provided by the in vivo challenge studies. These protection experiments showed differences in the morbidity of the animals as well as a statistically significant difference between the survival curves of the vaccinated groups and the pVAX1 control groups. The multi-epitope group showed 100% protection against lethal challenge, but was not significantly different from the pV-H1HA1 and pV-H3-H1 groups ( P > 0.05). No measurable virus titers were detected in the lungs of the mice in the multi-epitope immunized group.
Somewhat lower virus titers were observed in the pV-H3-H1 and pV-H1HA1 groups, which indicated that the constituent epitopes contributed to the cross-protective immunity against H1N1 viral challenge.
Conclusion Overall, the multi-epitope DNA vaccine induced significant levels of humoral and cellular responses as well as provided cross-protective immunity. This study demonstrates the proof-of-principle that a universal DNA vaccine with engineered epitopes may protect against multiple subtypes of influenza virus, afford long-term immune protection, and prevent cross-species transmission.
Background Multiple subtypes of avian influenza viruses have crossed the species barrier to infect humans and have the potential to cause a pandemic. Therefore, new influenza vaccines to prevent the co-existence of multiple subtypes within a host and cross-species transmission of influenza are urgently needed. Methods Here we report a multi-epitope DNA vaccine targeted towards multiple subtypes of the influenza virus. The protective hemagglutinin (HA) antigens from the H5/H7/H9 subtypes were screened for MHC class II-restricted epitopes overlapping with predicted B cell epitopes. We then constructed a DNA plasmid vaccine, pV-H3-EHA-H1, based on HA antigens from the human influenza H3/H1 subtypes combined with the H5/H7/H9 subtype Th/B epitope box. Results Epitope-specific IFN-γ ELISpot responses were significantly higher in the multi-epitope DNA group than in other vaccine and control groups ( P < 0.05). The multi-epitope group significantly enhanced Th2 cell responses as determined by cytokine assays. The survival rate of mice given the multi-epitope vaccine was the highest among the vaccine groups, but it was not significantly different from those given the single antigen expressing pV-H1HA1 vaccine and the dual antigen expressing pV-H3-H1 vaccine ( P > 0.05). No measurable virus titers were detected in the lungs of the multi-epitope immunized group. The unique multi-epitope DNA vaccine enhanced virus-specific antibody and cellular immunity as well as conferred complete protection against lethal challenge with the A/New Caledonia/20/99 (H1N1) influenza strain in mice. Conclusions This approach may be a promising strategy for developing a universal influenza vaccine to prevent multiple subtypes of influenza virus and to induce long-term protective immunity against cross-species transmission.
Competing interests The authors declare that they have no competing interests. Authors' contributions LT performed most of the experimental work and drafted the manuscript. DZ, BH and ZYW participated in the analysis of humoral and cellular responses. MYT participated in the immunization of mice. NYJ and HJL revised the manuscript for important intellectual content and gave final approval of the version to be published. All authors read and approved the final manuscript.
Acknowledgements This work was supported by grants from the National High Technology Research and Development Program of China ("863" Program) (No: 2006AA10A205), the National Key Technology R&D Program (No: 2006BAD06A05) and the National Key Program for Infectious Disease of China (2009ZX10004-103).
CC BY
Virol J. 2010 Dec 7; 7:363
PMC3014917
21114871
Background Burkholderia pseudomallei is an environmental Gram-negative bacterium that causes a severe and often fatal disease called melioidosis. This is an important cause of sepsis in south-east Asia and northern Australia, a geographic distribution that mirrors the presence of B. pseudomallei in the environment [ 1 ]. Melioidosis may develop following bacterial inoculation or inhalation and occurs most often in people with regular contact with contaminated soil and water [ 1 ]. Clinical manifestations of melioidosis are highly variable and range from fulminant septicemia to mild localized infection. The overall mortality rate is 40% in northeast Thailand (rising to 90% in patients with severe sepsis) and 20% in northern Australia [ 1 , 2 ]. A major feature of melioidosis is that bacterial eradication is difficult to achieve. Fever clearance time is often prolonged (median 8 days), antimicrobial therapy is required for 12-20 weeks, and relapse occurs in around 10% of patients despite an appropriate course of antimicrobial therapy [ 3 , 4 ]. The basis for persistence in the infected human host is unknown, although several observations made to date may be relevant to the clinical behaviour of this organism [ 2 , 5 ]. B. pseudomallei can resist the action of bactericidal substances including complement and antimicrobial peptides in human serum [ 6 - 8 ]. B. pseudomallei can also survive after uptake by a range of phagocytic and non-phagocytic cells. Macrophages have several strategies to control bacterial infection, including bacterial killing following uptake through the action of reactive oxygen and reactive nitrogen compounds, antimicrobial peptides and lysosomal enzymes. Despite this, B. pseudomallei can invade and replicate in primary human macrophages [ 8 - 10 ]. Bacterial survival under adverse and rapidly changing environmental conditions is likely to be facilitated by phenotypic adaptability and plasticity.
A previous study conducted by us found that 8% of primary cultures of clinical samples taken from patients with melioidosis contained more than one colony morphotype on Ashdown agar. Morphotypes could switch reversibly from one to another under specific conditions, and were associated with variable expression of putative virulence determinants including biofilm and flagella [ 11 ]. Compared with parental type I (the common 'cornflower head' morphology), isogenic type II (a small, rough colony) had increased biofilm and protease production, while isogenic type III (a large, smooth colony) was associated with increased flagella expression [ 11 ]. In vitro models suggested that switching of morphotype impacted on intracellular replication fitness after uptake by human epithelial cell line A549 and mouse macrophage cell line J774A.1. We postulated that colony morphology switching might represent a mechanism by which B. pseudomallei can adapt within the macrophage and persist in vivo . In this study, we investigated whether the variable phenotype associated with different morphotypes resulted in altered fitness during interactions with the human macrophage cell line U937 and after exposure to a range of laboratory conditions that simulate one or more conditions within the macrophage milieu. Isogenic morphotypes II and III generated from each parental type I of 5 B. pseudomallei strains isolated from patients or soil were used in all experiments.
Methods Bacterial isolates and isolation of isogenic morphotypes Five B. pseudomallei isolates were examined in this study. Isolates 153, 164 and the reference isolate K96243 were cultured from cases of human melioidosis in Thailand, and isolates B3 and B4 were cultured from uncultivated land in northeast Thailand [ 19 ]. The colony morphology of all five parental isolates was type I, and isogenic types II and III were generated from type I of each strain using nutritional limitation [ 11 ]. Briefly, a single colony of type I on Ashdown agar was inoculated into 3 ml of TSB and incubated at 37°C in air in static conditions for 21 days. Bacterial culture was diluted and spread plated onto Ashdown agar. Morphotypes were identified using a morphotyping algorithm [ 11 ]. Isogenic types II and III generated from each parental type I were isolated from the plates of each strain. Growth curve analysis Growth curves were performed for the 3 isogenic morphotypes of each of the 5 B. pseudomallei isolates. A colony of B. pseudomallei was suspended in sterile phosphate buffered saline (PBS). The bacterial suspension was adjusted to an optical density (OD) at 600 nm of 0.15 and diluted 100 times. One hundred microlitres of bacterial suspension was added to 10 ml of TSB and incubated at 37°C in air with shaking at 200 rpm for 28 h. At 2 h intervals, 100 μl of bacterial culture was removed, serially diluted 10-fold in PBS, and the bacterial count determined by plating on Ashdown agar in duplicate and performing a colony count following incubation at 37°C in air for 4 days. Doubling time was calculated. Cell line and culture conditions Human monocyte-like cell line U937 (ATCC CRL-1593.2) originating from a histiocytic lymphoma was maintained in RPMI 1640 (Invitrogen) supplemented with 10% heat-inactivated fetal bovine serum (PAA Laboratories), 100 units/ml of penicillin and 100 μg/ml of streptomycin (Invitrogen) and cultured at 37°C in a 5% CO 2 humidified incubator [ 20 ]. 
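The doubling time reported in the growth-curve analysis above can be estimated from two viable counts taken during exponential phase; a minimal sketch (the function name and example numbers are illustrative, not taken from the paper):

```python
import math

def doubling_time_min(cfu_start, cfu_end, interval_min):
    """Doubling time t_d = t * ln(2) / ln(N_end / N_start) over an interval of t minutes."""
    return interval_min * math.log(2) / math.log(cfu_end / cfu_start)

# a culture growing 8-fold over 120 min has doubled three times: t_d = 40 min
print(round(doubling_time_min(1e5, 8e5, 120), 1))  # 40.0
```

In practice the counts would come from the 2 h sampling intervals described above, restricted to the log phase of the curve.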
Before exposure to B. pseudomallei , 1 × 10 5 U937 cells per well were transferred to a 24 well-tissue culture plate (BD Falcon) and activated by the addition of 50 ng/ml of phorbol 12-myristate 13-acetate (PMA) (Sigma) over 2 days [ 20 ]. The medium was then replaced with 1 ml of fresh medium without PMA and incubated for 1 day. The differentiated macrophage was assessed by macrophage-like morphology [ 21 ]. Following washing 3 times with 1 ml of Hank's balance salt solution (HBSS) (Sigma), 1 ml of fresh medium was gently added to the macrophages. Interaction of B. pseudomallei isogenic morphotypes with human macrophages The interaction assay was performed as previously described [ 11 ]. B. pseudomallei from an overnight culture on Ashdown agar was suspended in PBS, the bacterial concentration adjusted using OD at 600 nm and then diluted in PBS and inoculated into wells containing differentiated U937 cells to obtain an MOI of approximately 25 bacteria per cell. The MOI was verified by colony counting on Ashdown agar. Infected U937 cells were incubated at 37°C in 5% CO 2 for 2 h. Non-adherent bacteria were removed by washing gently 3 times with 1 ml of PBS. The U937 cells were lysed with 1 ml of 0.1% Triton X-100 (Sigma), and the cell lysates serially diluted in PBS and spread plated on Ashdown agar to obtain the bacterial count. Colony morphology was observed [ 11 ]. The percentage of bacteria that were cell-associated was calculated by (number of associated bacteria × 100)/number of bacteria in the inoculum. The experiment was performed in duplicate for 2 independent experiments. Intracellular survival and multiplication of B. pseudomallei in human macrophages were determined at a series of time points following the initial co-culture described above of differentiated U937 with B. pseudomallei for 2 h. 
Following removal of extracellular bacteria and washing 3 times with PBS, medium containing 250 μg/ml kanamycin (Invitrogen) was added and incubated for a further 2 h (4 h time point). New medium containing 20 μg/ml kanamycin was then added to inhibit overgrowth by any remaining extracellular bacteria at further time points. Intracellular bacteria were determined at 4, 6 and 8 h after initial inoculation. Infected cells were washed, lysed and plated as above. Intracellular survival and multiplication of B. pseudomallei based on counts from cell lysates were presented. Percent intracellular bacteria was calculated by (number of intracellular bacteria at 4 h) × 100/number of bacteria in the inoculum. Percent intracellular replication was calculated by (number of intracellular bacteria at 6 or 8 h × 100)/number of intracellular bacteria at 4 h. The experiment was performed in duplicate for 2 independent experiments. Growth in acid conditions B. pseudomallei from an overnight culture on Ashdown agar was suspended in PBS and adjusted using OD at 600 nm to a concentration of 1 × 10 6 CFU/ml in PBS. Thirty microlitres of bacterial suspension was inoculated into 3 ml of Luria-Bertani (LB) broth at a pH 4.0, 4.5 or 5.0. The broth was adjusted to acid pH with HCl. Growth in LB broth at pH 7.0 was used as a control. The culture was incubated at 37°C in air with shaking at 200 rpm. At 1, 3, 6, 12 and 24 h time intervals, the culture was aliquoted and viability and growth determined by serial dilution and plating on Ashdown agar. Susceptibility of B. pseudomallei to reactive oxygen intermediates (ROI) The sensitivity of B. pseudomallei to reactive oxygen intermediates was determined by growth on oxidant agar plates and in broth containing H 2 O 2 . Assays on agar plates were performed as described previously [ 22 ], with some modifications. Briefly, an overnight culture of B. 
pseudomallei harvested from Ashdown agar was suspended in PBS and the bacterial concentration adjusted using OD at 600 nm. A serial dilution of the inoculum was spread plated onto Ashdown agar to confirm the bacterial count and colony morphology. Ten microlitres of serial dilutions of bacteria in PBS were spotted onto LB agar containing 0, 170, 310, 625, 1,250 and 2,500 μM H 2 O 2 . Colony counts were performed after incubation at 37°C in air for 24 h. The number of colonies on plates containing H 2 O 2 was compared with that on control plates and presented as bacterial survival (%). The assay was performed for 4 independent experiments. Sensitivity to killing by hydrogen peroxide was further examined in LB broth. An overnight culture of B. pseudomallei on Ashdown agar was suspended in PBS and adjusted to approximately 1 × 10 8 CFU/ml. Ten microlitres of bacterial suspension was added into 1 ml of LB broth containing two-fold decreasing concentrations of H 2 O 2 ranging from 500 to 31.25 μM. The mixtures were statically incubated at 37°C in air for 24 h and then the viable count and colony morphotype were determined by serial dilution and plating on Ashdown agar. The experiment was performed for 2 independent experiments. Susceptibility of B. pseudomallei to reactive nitrogen intermediates (RNI) B. pseudomallei from an overnight culture on Ashdown agar was suspended in PBS and the bacterial concentration adjusted using OD at 600 nm. Thirty microlitres of bacterial suspension was added into 3 ml of two-fold decreasing concentrations of sodium nitrite (ranging from 10 to 0.1 mM) in LB broth at pH 5.0. The mixture was incubated at 37°C in air with shaking at 200 rpm and viable bacteria were determined at 6 h by serial dilution and plating on Ashdown agar. The number of viable bacteria in the presence of NaNO 2 was compared with the number of bacteria in the inoculum and presented as bacterial survival (%). 
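The percentage calculations defined in the assays above are simple ratios; a minimal transcription of the stated formulas (function names are ours, example numbers are illustrative):

```python
def pct_cell_associated(associated_cfu, inoculum_cfu):
    """(number of associated bacteria x 100) / number of bacteria in the inoculum."""
    return associated_cfu * 100.0 / inoculum_cfu

def pct_intracellular(cfu_4h, inoculum_cfu):
    """(number of intracellular bacteria at 4 h x 100) / number of bacteria in the inoculum."""
    return cfu_4h * 100.0 / inoculum_cfu

def pct_replication(cfu_6_or_8h, cfu_4h):
    """(intracellular bacteria at 6 or 8 h x 100) / intracellular bacteria at 4 h."""
    return cfu_6_or_8h * 100.0 / cfu_4h

def pct_survival(treated_cfu, control_cfu):
    """Survival relative to an untreated control, as used for the H2O2 and NaNO2 assays."""
    return treated_cfu * 100.0 / control_cfu

# e.g. an inoculum of 2.5e6 CFU with 7.5e4 cell-associated bacteria is 3.0% associated,
# and intracellular counts rising from 1.5e5 (4 h) to 3.0e5 (8 h) give 200% replication
print(pct_cell_associated(7.5e4, 2.5e6), pct_replication(3.0e5, 1.5e5))  # 3.0 200.0
```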
The experiment was performed in duplicate for 2 independent experiments. Susceptibility of B. pseudomallei to lysozyme and lactoferrin B. pseudomallei cultured overnight on Ashdown agar was harvested and suspended in 10 mM Tris-HCl buffer pH 5.0 [ 23 ]. The bacterial suspension was adjusted to a concentration of 1 × 10 7 CFU/ml. Fifty microlitres of bacterial suspension was added to an equal volume of 400 μg/ml chicken egg white lysozyme (48,000 U/mg protein) (Sigma) to obtain a final concentration of 200 μg/ml. The mixture was incubated at 37°C in air for 24 h, after which 10 μl of 10-fold serial dilutions were dropped on Ashdown agar. Sensitivity to lysozyme was also tested in the presence of 3 mg/ml lactoferrin (Sigma) in a separate experiment [ 23 ]. E. coli strain HB101 was tested in parallel as a control. Susceptibility to human α-defensin and β-defensin B. pseudomallei was tested for resistance to HNP-1 and HBD-2 (Peptide international) as described previously [ 24 ], with the exception that HNP-1 was used at twice the dose. E. coli strain HB101 was tested in parallel as a control. Briefly, B. pseudomallei or E. coli strain HB101 colonies were washed and suspended in 1 mM sodium phosphate buffer pH 7.4 containing 1% TSB [ 24 ]. The bacterial suspension was adjusted to a concentration of 1 × 10 7 CFU/ml. Twenty microlitres of bacterial suspension was mixed with an equal volume of 200 μg/ml HNP-1 or HBD-2 to obtain a final concentration of 100 μg/ml antimicrobial peptide and incubated at 37°C in air for 3 h. The viable bacterial count was determined by dropping a 10-fold serial dilution on Ashdown agar. Susceptibility to antimicrobial activity of human cathelicidin B. pseudomallei susceptibility to cathelicidin LL-37 was tested using a microdilution method [ 25 ]. LL-37 was kindly provided by Dr. Suwimol Taweechaisupapong, Department of Oral Diagnosis, Faculty of Dentistry, Khon Kaen University and Dr. Jan G.M. 
Bolscher, Department of Oral Biochemistry, Van der Boechorststraat, Amsterdam, The Netherlands. A loop of bacteria was washed 3 times in 1 mM potassium phosphate buffer (PPB) pH 7.4 and suspended in the same buffer. The bacterial suspension was adjusted to a concentration of 1 × 10 7 CFU/ml. Fifty microlitres of suspension was added into wells containing 50 μl of a 2-fold serial dilution of human cathelicidin in PPB (to obtain a final concentration of 3.125-100 μM). The mixture was incubated at 37°C in air for 6 h and viability of bacteria was determined by plating a 10-fold serial dilution on Ashdown agar. The assay was performed in duplicate. Growth in low oxygen and anaerobic conditions An overnight culture of B. pseudomallei on Ashdown agar was suspended in PBS and adjusted to a concentration of 1 × 10 8 CFU/ml. The bacterial suspension was 10-fold serially diluted and 100 μl spread plated on Ashdown agar to obtain approximately 100 colonies per plate. Three sets of plates were prepared per isolate and incubated separately at 37°C in 3 conditions: (i) in air for 4 days (control); (ii) in a GasPak EZ Campy Pouch System to produce an atmosphere containing approximately 5-15% oxygen (BD) for 2 weeks; or (iii) in an anaerobic jar (Oxoid) with an O 2 absorber (AnaeroPack; MGC) for 2 weeks and then re-exposed to air at 37°C for 4 days. The mean colony count was determined for each morphotype from 5 B. pseudomallei isolates after incubating bacteria in air for 4 days (control). The % colony count for each isolate incubated in 5-15% oxygen or in an anaerobic jar for 14 days was calculated relative to the colony count of the control incubated in air for 4 days.
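The 2-fold serial dilutions used above (final cathelicidin concentrations of 3.125-100 μM after equal-volume mixing with the bacterial suspension) can be generated programmatically; a small sketch (the function name is ours):

```python
def twofold_series(top, n):
    """n two-fold serial dilutions starting from the top concentration."""
    return [top / 2 ** i for i in range(n)]

# equal-volume mixing halves the concentration, so a 200 uM stock gives a 100 uM
# top final concentration; six steps span 100 ... 3.125 uM
print(twofold_series(100.0, 6))  # [100.0, 50.0, 25.0, 12.5, 6.25, 3.125]
```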
Colony morphology switching Seven conditions were examined for an effect on morphotype switching, as follows: (i) culture in TSB in air with shaking for 28 h, (ii) intracellular growth in the macrophage cell line for 8 h, (iii) exposure to 62.5 μM H 2 O 2 in LB broth for 24 h, (iv) growth in LB broth at pH 4.5 for 24 h, (v) exposure to 2 mM NaNO 2 for 6 h, (vi) 6.25 μM LL-37 for 6 h, and (vii) incubation in anaerobic conditions for 2 weeks and then re-exposure to air for 4 days. All experiments were performed using the experimental details described above. The B. pseudomallei morphotype on Ashdown agar following incubation in air at 37°C for 4 days was determined and compared with the starting morphotype. Morphotype switching was presented as the proportion (%) of alternative types in relation to the total colonies present. Assays of resistance to HNP-1, HBD-2, lysozyme and lactoferrin employed a drop method to assess bacterial survival, so colony morphology could not be accurately determined. Statistical analysis Statistical analysis was performed using the statistical program STATA version 10.1. Log transformation of continuous dependent variables was performed as appropriate. Nested repeated measures ANOVA was used to test continuous dependent variables between 3 isogenic morphotypes. A difference between 3 morphotypes was considered to be statistically significant when the P value was less than or equal to 0.05, after which pairwise comparisons were performed between each morphotype. All P values for pairwise analyses were corrected using the Benjamini-Hochberg method for multiple comparisons [ 26 ].
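The Benjamini-Hochberg correction applied to the pairwise comparisons above is a step-up procedure over the ranked p-values; a self-contained sketch (the example p-values are illustrative, not taken from the paper):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p-values for a list of raw p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity of adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# three pairwise morphotype comparisons (illustrative raw p-values)
print([round(p, 3) for p in benjamini_hochberg([0.01, 0.04, 0.03])])  # [0.03, 0.04, 0.04]
```

Each raw p-value is multiplied by m/rank and then capped by the smallest adjusted value among larger p-values, which is why the middle comparison here inherits 0.04 rather than 0.045.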
Results Growth curve analysis of isogenic morphotypes Different growth rates may affect the number of intracellular bacteria following uptake by host cells. Thus, prior to observation of intracellular replication in macrophages, extracellular growth of B. pseudomallei was compared between 3 isogenic morphotypes cultured in trypticase soy broth (TSB). Using a starting inoculum of 1 × 10 4 CFU/ml, log and stationary phase occurred at 2 h and 12 h, respectively, for all 3 morphotypes. There was no difference in doubling time between 3 isogenic morphotypes ( P = 0.14) with an average doubling time of 40.2, 39.2 and 38.3 minutes for types I, II and III, respectively. Replication of isogenic B. pseudomallei morphotypes in macrophages Evaluation of the initial B. pseudomallei -macrophage cell interaction using a multiplicity of infection (MOI) of 25:1 demonstrated that 3.0% of the bacterial inoculum (range 1.2-8.0% for different isolates) was associated with macrophages at 2 h. There was no significant difference in this value between 3 isogenic morphotypes for all 5 isolates. Following removal of extracellular bacteria and incubation for a further 2 h, 1.5% of the bacterial inoculum (range 0.4-3.4% for different isolates) was recovered. There was no significant difference in this value between 3 isogenic morphotypes for all 5 isolates. The intracellular replication of B. pseudomallei between 4 to 8 h within macrophages is summarized in Figure 1 . The replication rates for the 3 isogenic morphotypes of each strain obtained from two independent experiments were comparable (data not shown). Percent replication at 8 h was defined in relation to the 4 h time point, which was used as the reference count. Analysis of pooled data for 5 isolates demonstrated that type I had a significantly higher rate of intracellular replication than either type II or III. 
The mean intracellular replication of type I at 8 h was 2.0 (95%CI 1.5-2.6, P = 0.004) times higher than that of type II, and 1.9 (95%CI 1.4-2.5, P = 0.004) times higher than that of type III (Figure 1A ). However, this pattern was not uniformly observed for each of the 5 isolates, as shown in Figure 1B-F . The higher replication fitness for type I based on the summary data was largely accounted for by strains 164 and K96243. Other strains demonstrated a different pattern. For example, strain 153 type III had a higher intracellular replication than type I, a finding that replicates those of a previous study [ 11 ]. The mean intracellular bacterial count also varied between individual isolates. These differences were not due to the relative sensitivities of 3 isogenic morphotypes to 250 μg/ml kanamycin, as this experimental condition removed 99.9% of extracellular bacteria independent of type for all isolates (data not shown). Susceptibility of isogenic morphotypes to acid To examine the effect of acid, growth of 3 isogenic morphotypes in LB at pH 4.0, 4.5, 5.0 and 7.0 was compared at each of 5 time points over 24 h of incubation. No growth difference was observed between morphotypes at any time point for pH 4.5, 5.0 or 7.0 ( P > 0.10 for all time points). When cultured in LB broth at pH 4.0, all bacteria died within 12 h of incubation. Susceptibility of isogenic morphotypes to reactive oxygen intermediates (ROI) The susceptibility of 3 morphotypes to ROI was initially examined on LB agar plates containing a range of H 2 O 2 concentrations (0, 170, 310, 625, 1,250 and 2,500 μM) (data not shown). B. pseudomallei failed to grow on plates with H 2 O 2 at a concentration higher than 625 μM, and so the percentage of viable bacteria was enumerated using agar plates with 625 μM H 2 O 2 compared to those on plates without H 2 O 2 . This demonstrated a difference in bacterial survival between the three isogenic morphotypes ( P < 0.001).
Percentage survival of type I was 3.8 (95%CI 2.9-5.0, P < 0.001) times higher than that for type II, and was 5.2 (95%CI 4.0-6.8, P < 0.001) times higher than that for type III (Figure 2A ). Further examination was undertaken of the susceptibility of the 3 morphotypes with a range of concentrations of H 2 O 2 in LB broth. No bacteria survived in 500 μM and 250 μM H 2 O 2 . In 125 μM H 2 O 2 , type I of all 5 isolates multiplied from 1 × 10 6 CFU/ml (the starting inoculum) to between 5 × 10 7 and 2.1 × 10 8 CFU/ml. By contrast, all 5 type III and 4 type II isolates (the exception being type II derived from isolate 164) obtained from the same experiment demonstrated no growth on the plates. This confirmed a higher resistance to H 2 O 2 of parental type I compared to types II and III. A difference was also observed between three isogenic morphotypes in 62.5 μM H 2 O 2 ( P < 0.001). Bacterial growth of type I was 1.5 (95%CI 1.1-2.0, P = 0.02) times higher than that for type II, and was 2.7 (95%CI 2.0-3.7, P < 0.001) times higher than that for type III. Susceptibility of isogenic morphotypes to reactive nitrogen intermediates (RNI) Susceptibility of B. pseudomallei to RNI was observed following 6 h exposure to various concentrations of NaNO 2 ranging between 0.1 to 10 mM in acidified pH 5.0 in LB broth. Using a concentration of 2 mM NaNO 2 , the percent survival of types I, II and III were 43.8%, 43.7% and 40.1%, respectively, with no difference observed between the three morphotypes ( P > 0.10). Susceptibility of isogenic morphotypes to lysozyme and lactoferrin Compared with initial inocula and untreated controls, treatment with 200 μg/ml lysozyme at pH 5.0 did not decrease the bacterial count for the 3 isogenic morphotypes of B. pseudomallei , while this concentration could reduce the number of E. coli from 4.9 × 10 6 CFU/ml (the starting inoculum) to 425 CFU/ml. Susceptibility was examined further in the presence of 3 mg/ml lactoferrin. 
A kinetic study over time demonstrated that lactoferrin alone could kill an entire E. coli inoculum of 1 × 10 6 CFU/ml within 3 h at pH 5.0. The same treatment did not affect the number of viable B. pseudomallei which was comparable to the inoculum and untreated control. Adding 200 μg/ml lysozyme with lactoferrin did not enhance the killing efficacy of E. coli and had no effect on B. pseudomallei . Susceptibility of isogenic morphotypes to antimicrobial peptides Macrophages produce several antimicrobial peptides [ 12 , 13 ]. We examined the susceptibility of isogenic morphotypes to HNP-1, HBD-2 and cathelicidin LL-37, three of the main human antimicrobial peptides. The results demonstrated that 100 μg/ml HNP-1 and 100 μg/ml HBD-2 did not reduce the bacterial count for the 3 isogenic morphotypes of any of the B. pseudomallei isolates when compared with the initial inocula and untreated controls. In a pilot experiment with a range of LL-37 concentrations and exposure times, we found that LL-37 reduced the B. pseudomallei count at a concentration of 6.25 μM at 6 h. This condition killed 100% of a starting inoculum of 4.6 × 10 6 CFU/ml E. coli control and caused a 75.7 to 99.8% reduction of B. pseudomallei for different isolates. A difference in bacterial survival was observed between the three isogenic morphotypes ( P < 0.001). Survival of type I was 1.5 (95%CI 1.1-2.2, P = 0.02) times higher than that for type II, but was 3.7 (95%CI 2.6-5.3, P < 0.001) times lower than that for type III (Figure 2B ). Growth in low oxygen concentrations Low oxygen concentration may limit the intracellular growth of aerobic bacteria within the host [ 14 ]. We examined the survival of 3 isogenic morphotypes and determined whether morphotype switching occurred in response to different oxygen concentrations during incubation on Ashdown agar at 37°C. B. 
pseudomallei survived in 5-15% oxygen concentration for 14 days, with an average colony count of 95% (range 72-109% for different isolates and morphotypes) compared to control plates incubated in air for 4 days (Table 1 ). There was no difference in the survival pattern between 3 isogenic morphotypes ( P > 0.10). B. pseudomallei colonies were not visible on Ashdown agar after incubation in an anaerobic chamber for 2 weeks. The capability to recover from anaerobic conditions was observed as colonies were visible at 48 h after reincubation at 37°C in air, and colony counts were performed after incubation for 4 days. The percentage of bacteria recovered was not different between three morphotypes ( P > 0.10). Effect of laboratory conditions on morphotype switching Types I and II did not demonstrate colony morphology variation over time in any of the conditions tested. Figure 3 shows the effect of various testing conditions of type III for all 5 isolates. Between 1% and 13% of colonies subcultured from 28 h TSB culture onto Ashdown agar switched to alternative types. The switching of type III appeared to be important for replication in macrophages. Following uptake, switching of type III increased over time such that by the 8 h time point, between 48-99% of the agar plate colonies (the range representing differences between isolates) had switched to type I (isolates K96243, 164, B3 and B4) or to type II (isolate 153). Morphotype switching did not increase in acid, acidified sodium nitrite, or LL-37. In contrast, morphotype switching from broth culture containing 62.5 μM H 2 O 2 increased over time of incubation, ranging between 24-49% of the plate colonies for different isolates. Interestingly, between 15-100% of the total type III colony count switched to an alternative morphotype after recovery from anaerobic conditions. 
The pattern of morphotype switching in all conditions tested was isolate-specific, with four isolates switching from type III to type I (K96243, 164, B3 and B4) and one switching to type II (153).
Discussion Our previous paper reported a process of B. pseudomallei colony morphology switching that occurred during human melioidosis, and in an animal model, mouse macrophage cell line J774A.1, human lung epithelial cell line A549, and under starvation conditions in vitro . In this study, we investigated whether the variable phenotype associated with different morphotypes resulted in a fitness advantage or disadvantage during interactions with the human macrophage cell line U937 and after exposure to factors that simulate the macrophage milieu. Although our previous report described 7 different morphotypes from clinical isolates, the five isolates used here from 3 different clinical and 2 environmental samples were only observed to switch under nutritional limitation from parental type I to types II and III, allowing comparison of 3 isogenic morphotypes with known variable phenotype. The initial interaction between the human macrophage cell line U937 and 3 isogenic morphotypes of B. pseudomallei was not different between the three types. Despite a comparable rate of extracellular growth between isogenic morphotypes, heterogeneity in subsequent intracellular survival/growth after this time point was observed. Type III of each isolate was inconsistently capable of multiplication after uptake by human macrophages, and was associated with a change in morphotype. This suggests that type III has a fitness disadvantage under these circumstances. A possible explanation for this is that type III does not appear to produce biofilm [ 11 ]. A biofilm mutant demonstrated a marked reduction in intracellular survival in primary human macrophages compared with the wild type, suggesting that biofilm production is associated with the ability to survive in human macrophages [ 8 ]. Our previous study examined the survival and replication of B. pseudomallei strain 153 in the human respiratory epithelial cell line A549 and the mouse macrophage cell line J774A.1.
Our finding here that type III of strain 153 had increased survival in the human macrophage cell line U937 is consistent with our previous findings for the mouse macrophage cell line J774A.1 infected with the same strain [ 11 ]. However, the use of a wider range of strains in this study demonstrated a lack of reproducibility between strains. We suggest that this likely relates to variability in genomic content between the strains tested. Future testing strategies require the evaluation of a large number of strains that have undergone whole genome sequencing to facilitate statistically robust comparisons between genomic variation and phenotypic behaviour. Several components of the innate immune system are efficient in killing organisms within human macrophages [ 15 ]. The most important of these are the antimicrobial peptides, nitric oxide (NO), the superoxide anion (O2-), and hydrogen peroxide (H2O2), all of which are directly toxic to bacteria. Reactive oxygen species generated by the phagocyte NADPH oxidase have an essential role in the control of B. pseudomallei infection in C57BL/6 bone marrow-derived macrophages [ 16 ]. Type I of all 5 B. pseudomallei isolates tested here had the greatest resistance to H2O2, followed by types II and III, respectively, suggesting that type I has the greatest potential to scavenge or degrade H2O2 molecules. This may explain the finding that type I had the highest replication after uptake by the macrophage cell line. Type III switched to type I or II during culture in medium containing H2O2, indicating that type III had a survival disadvantage under such conditions that required switching to a more H2O2-resistant type. One of the mechanisms by which B. 
pseudomallei escapes macrophage killing is by repressing inducible nitric oxide synthase (iNOS) through activation of two negative regulators, suppressor of cytokine signaling 3 (SOCS3) and cytokine-inducible Src homology 2-containing protein (CIS) [ 17 ]. It is unknown whether there is variation between strains and isogenic morphotypes in the ability to interfere with iNOS induction. However, colony morphology differences did not influence resistance to reactive nitrogen intermediates (RNI). B. pseudomallei is protected from RNI by the production of alkyl hydroperoxide reductase (AhpC) protein, which depends on the OxyR regulator and compensatory KatG expression [ 18 ]. These mechanisms may not be associated with colony morphology variability. B. pseudomallei survives in the phagolysosome [ 10 ], an acidified environment containing lysozyme, proteins and antimicrobial peptides that destroy pathogens. There was no difference in growth for the 3 isogenic morphotypes of B. pseudomallei derived from all five isolates at any pH level tested above 4.0, but a pH of 4.0 was universally bactericidal, suggesting that morphotype switching did not provide a survival advantage under acid conditions. All morphotypes of B. pseudomallei were highly resistant to lysozyme and lactoferrin. Lysozyme kills bacteria by hydrolyzing their cell walls. Lactoferrin sequesters iron, preventing its uptake by bacteria. Common structures conferring resistance to these factors, such as capsule and LPS [ 8 ], were present in all isogenic morphotypes [ 11 ]. An alternative explanation is that B. pseudomallei may produce a morphotype-independent inhibitor that counteracts the action of lysozyme and lactoferrin. Antimicrobial peptides are efficient at killing a broad range of organisms. They are distributed in a variety of tissues, and in neutrophils and macrophages [ 12 , 13 ]. All 3 isogenic B. 
pseudomallei morphotypes were resistant to α-defensin HNP-1 and β-defensin HBD-2, but were susceptible to LL-37. In contrast to its sensitivity to H2O2, type III was more resistant than type I or II to LL-37. This ability may allow type III to survive within host cells for a limited period before successfully switching to alternative phenotypes, and may provide a fitness advantage in macrophages. Another feature of bacterial survival during the establishment of persistent infection in the host is adaptation to hypoxia in the host microenvironment [ 14 ]. This study demonstrated that all 3 isogenic morphotypes were able to tolerate a low oxygen concentration and anaerobic conditions for at least two weeks. Switching of type III to either type I or II was observed during recovery from anaerobic incubation. The fact that types I and II were stable following anaerobic incubation suggests that they are tolerant of fluctuations in oxygen concentration. Given the variation in the genomes of different B. pseudomallei strains, it was not surprising to observe some variation in intracellular replication between isogenic morphotypes of different isolates. Only one strain switched from type III to type II, while the other four isolates switched from type III to type I in all conditions in which a change in morphotype was observed. Analysis of the 5 isolates in this study provides evidence that colony morphology variation represents heterogeneous phenotypes of B. pseudomallei with different fitness advantages for interacting with, surviving in and replicating within human macrophages in the presence of bactericidal substances. A limitation of this study is that the experimental methods were laborious and time consuming, which restricted the number of strains we could examine. It is also unclear whether these in vitro assays using a human macrophage cell line are a good model for human infection. 
Further studies are required to determine the molecular mechanism of morphotype switching, and whether this is associated with persistence of B. pseudomallei in the human host.
Conclusions B. pseudomallei can produce different colony morphologies in vivo and in vitro . This study described the intracellular survival and replication of two isogenic morphotypes, II and III, generated from 5 different parental type I B. pseudomallei isolates in the U937 human macrophage cell line, and examined the survival of these isogenic morphotypes compared with the parental types in the presence of a variety of substances and under conditions potentially encountered within the macrophage milieu. Data for the 5 isolates demonstrated variability in bacterial survival and replication following uptake by human macrophages between parental type I and types II or III, as well as variability between strains. Uptake of type III alone was associated with colony morphology switching. Type I was associated with survival in the presence of H2O2. In contrast, isogenic morphotype III demonstrated higher resistance to the antimicrobial peptide LL-37. Specific morphotypes were not associated with susceptibility to acid or acidified sodium nitrite, or with resistance to lysozyme, lactoferrin, HNP-1 or HBD-2. Incubation under anaerobic conditions was a strong driver for switching of type III to an alternative morphotype in all isolates.
Background Primary diagnostic cultures from patients with melioidosis demonstrate variation in colony morphology of the causative organism, Burkholderia pseudomallei . Variable morphology is associated with changes in the expression of a range of putative virulence factors. This study investigated the effect of B. pseudomallei colony variation on survival in the human macrophage cell line U937 and under laboratory conditions simulating the macrophage milieu. Isogenic colony morphology types II and III were generated from 5 parental type I B. pseudomallei isolates using nutritional limitation. Survival of types II and III was compared with that of type I in all assays. Results Morphotype was associated with survival in the presence of H2O2 and the antimicrobial peptide LL-37, but not with susceptibility to acid or acidified sodium nitrite, or with resistance to lysozyme, lactoferrin, human neutrophil peptide-1 or human beta-defensin-2. Incubation under anaerobic conditions was a strong driver for switching of type III to an alternative morphotype. Differences were noted in the survival and replication of the three types following uptake by human macrophages, but marked strain-to-strain variability was observed. Uptake of type III alone was associated with colony morphology switching. Conclusions Morphotype is associated with phenotypes that alter the ability of B. pseudomallei to survive in adverse environmental conditions.
Authors' contributions ST carried out the experiments and data analysis. AT isolated and maintained isogenic morphotypes. DL participated in statistical analysis. SK and ND provided materials and intellectual comments. SJP participated in the design of the study, and assisted in the writing of the manuscript. NC participated in the design of the study, data analysis and coordination and writing of the manuscript. All authors read and approved the final manuscript.
Acknowledgements We are grateful to Dr. Suwimol Taweechaisupapong and Dr. Jan G.M. Bolscher for providing LL-37, to Dr. Sue Lee for statistical advice and to Mrs. Vanaporn Wuthiekanun for providing B. pseudomallei isolates. We thank staff at the Mahidol-Oxford Tropical Medicine Research Unit for their assistance and support. S.T. was supported by a Siriraj Graduate Thesis Scholarship, Thailand. N.C. was supported by a Wellcome Trust Career Development award in Public Health and Tropical Medicine, UK, and a Thailand Research Fund award, Thailand.
BMC Microbiol. 2010 Nov 30; 10:303