PMC3016363 (PMID: 21138568)

Background
The field of addiction treatment lacks a universally accepted and unambiguously defined clinical definition of recovery. Although a single disciplinary group such as physicians may agree upon a definition, there is no such agreement among the broader field of treatment professionals, addiction researchers, program evaluators, and policymakers. Indeed, there is no comprehensive consensus of what a definition of "recovery" is even among those individuals who are themselves in recovery from substance use disorders [ 1 , 2 ].
There have been many efforts to define recovery from substance use disorders. Perhaps the criteria for remission most often used by treatment professionals are those from the Diagnostic and Statistical Manual of Mental Disorders (DSM) [ 3 ]. An example of such a criterion is the sum of all years in which a diagnosis of alcohol use disorder was not present. However, this criterion indicates remission from a clinical diagnosis rather than a multidimensional perspective on recovery.
Perhaps the most comprehensive efforts to define "recovery" were those offered by leading investigators as part of a special issue on "recovery" published by the Journal of Substance Abuse Treatment (October 2007). Methodological approaches to developing a definition of recovery included a consensus panel, literature reviews, surveys of the general public and addiction treatment professionals, and opinions from persons themselves in "recovery".
The Betty Ford Consensus Panel [ 4 ] defined "recovery" as consisting of three parts: sobriety, personal health, and citizenship. Sobriety refers to abstinence from alcohol and all other non-prescribed drugs; personal health refers to improved quality of health; and citizenship refers to living with regard and respect for others. This conceptualization has come under criticism for several reasons, including the use of the construct of citizenship as a measure of chemical dependency recovery. The objection is that no other chronic illness is measured for recovery status on the construct of citizenship [ 5 ]. The Betty Ford panel proposed measuring recovery using the World Health Organization Quality of Life instrument. However, this approach has been critiqued on the ground that a measure developed for a general population may not be valid for the specific population of people in recovery [ 5 ]. Arndt & Taylor [ 5 ] view the Betty Ford conceptualization as an initial step in defining "recovery," rather than the pinnacle of a definition.
White [ 6 ] defined "recovery" as the experience (a process and sustained status) through which individuals, families and communities impacted by severe alcoholism and other drug problems utilize internal and external resources to voluntarily resolve these problems, heal the wounds inflicted by alcohol and other drug-related problems, and develop a healthy, meaningful and productive life. White [ 7 ] subsequently proposed outcome measures for these areas including measures of substance use, living environment, physical and emotional health, family relationships, citizenship, and quality of life.
Laudet [ 1 ] cites a survey of members of the public regarding their view of recovery (Peter D. Hart Research Associates, 2004). This survey indicated that 62% reported that "in recovery from addiction to alcohol or other drugs, the one addicted is trying to stop using". Only 22% of respondents reported that "the one in recovery is free from the disease of addiction and no longer uses alcohol or illicit drugs". Further, 80% of respondents expressed that total abstinence was their "recovery" goal and over 80% reported that "recovery" is a process and not a finite achievement. Apart from the public's perceptions, Laudet [ 1 ] also conducted a review of articles on recovery and concluded that most researchers operationally define recovery in terms of substance use and more often as abstinence status. Some of the terms used interchangeably were remission, resolution, abstinence, and recovery . In addition, words to represent the act of changing the substance using behaviors were quit, overcome, and recover . In these contexts, "recovery" is defined as "overcoming both physical and psychological dependence to psychoactive drugs while making a commitment to society." This description implies domains of recovery that encompass drug abstinence, personal wellbeing, and re-integration into society.
Galanter [ 8 ] suggested a model of "recovery" from addiction that is attuned to the spiritual framework supported by Alcoholics Anonymous. This aspect of recovery is based on the substance-using individuals' own perspectives. These experiences are not observable; rather, they are self-reported through the persons' interpretations. This is nonetheless an important domain of "recovery", reminiscent of the spiritual orientation of Alcoholics Anonymous.
Finally, McLellan, Chalk, & Bartlett [ 9 ] present "recovery" in terms of outcomes, performance, and quality. Calls for accountability within the addiction treatment field have inspired these authors to build a set of treatment quality, performance, and outcome indicators. They suggest that the outcomes of any treatment are the changes in clients' symptoms, behaviors, and functioning that can be attributed to the treatment. Because clients present with multiple problems, outcome evaluations of chemical dependency treatment have measured more than one outcome variable. Outcome measures are generally grouped together by the domain of functioning that they represent. When clients experience abstinence or a substantial reduction in use of drugs/alcohol as well as improvement in functioning in other domains (e.g., family, social, education, financial), this can be called "recovery." The three variables most frequently presented as "recovery" domains are substance use, employment/self-support, and criminal activity. The Substance Abuse and Mental Health Services Administration (SAMHSA) [ 10 ] measures substance abuse "recovery" by combining physical health, mental health, family and social relations, stability in housing, perception of care, access, and retention domains. According to SAMHSA, improvements in three of the seven functional domains plus abstinence are considered indicative of "recovery".
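As an illustration only (not SAMHSA's official algorithm), the decision rule just described, abstinence plus improvement in at least three of the seven functional domains, can be sketched as a simple function. The function name, domain labels, and input format are assumptions made for this sketch:

```python
# Sketch of the SAMHSA-style recovery indicator described in the text:
# abstinence plus improvement in at least 3 of the 7 functional domains.
# Domain labels and the function itself are illustrative assumptions.

SAMHSA_DOMAINS = frozenset([
    "physical_health", "mental_health", "family_social_relations",
    "housing_stability", "perception_of_care", "access", "retention",
])

def meets_recovery_indicator(abstinent, improved_domains):
    """Return True when abstinent and >= 3 recognized domains improved."""
    improved = SAMHSA_DOMAINS & set(improved_domains)
    return bool(abstinent) and len(improved) >= 3
```

For example, a client who is abstinent and shows improvement in mental health, housing stability, and retention would meet the indicator, while a client with improvement in only two domains, or one who is not abstinent, would not.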
As can be seen from the above, the definition and the very concept of recovery remain unclear, although a convergence of ideas is beginning to emerge. The foregoing efforts, which represent federal agency, consensus panel, and empirical definitions of "recovery", are best treated as a starting point to be reviewed, revised, expanded upon, revisited, and updated. Therefore, the purpose of this article is to move toward an abstinence-based model of recovery by building upon existing models. This article is theoretical in nature and does not present quantitative analyses. Instead, it presents the results of a small qualitative effort intended to create a theoretical foundation for future research.

Methods
This study used a participatory approach to explore "recovery" from the perspectives of addiction treatment professionals. It used inductive content analysis approaches to identify domains of recovery that could be modeled for further testing.
Setting
The study was conducted at the Hanley Center, a private, 82-bed residential substance abuse treatment facility located in South Florida. The Hanley Center serves a primarily white, private-pay clientele. The facility has separate treatment centers for men, women, and older adults. Comprehensive assessment and services are provided based on an eclectic model including medical, 12-step, cognitive/behavioral, and spiritual approaches delivered in individual, family, and small group modalities.
Participants
There were eleven participants. Eight of these were managers of their respective disciplines at the Hanley Center, and three were researchers. The following disciplines or treatment perspectives were represented among the eight managers: medicine (physician and nurse); psychiatry; psychology; clinical therapy; social/emotional wellness; family/community; and spirituality. The multidisciplinary approach was used in order to obtain a comprehensive perspective on recovery. The researchers, who had expertise in addiction, research methodology, and neuroscience, lent their empirically-centered guidance to the process. Some of these participants were in recovery themselves. The participants included both men and women, all had a master's or doctorate degree in their discipline, and most had worked in the field of substance abuse for more than ten years.
Procedures
The participants convened in group discussions at the Hanley Center over the period of one year to review the literature, review the instruments used to collect data from Hanley clients, discuss practice observations and experiences, and derive findings in a prioritization process to define "recovery" and select measures to indicate its attainment. The researchers solicited and encouraged the managers/practitioners to inductively construct domains from the ground up to represent dimensions of "recovery." Their responses were then merged with the existing literature to identify overlapping concepts and reconcile differences in order to formulate a collective operational definition of "recovery" and select instruments to measure it.

Results
The literature review and group discussion process led to the conclusion that a single industry-wide definition of "recovery" did not exist and that the multiple definitions that did exist were usually subjective, open to interpretation, and often more ideological than scientific. As a result of this process and these conclusions, the participants inductively constructed seven domains that represent aspects of "recovery" and are sensitive to recovery status: physical; bio-marker; chemical dependency; psychological; psychiatric; family/social; and spiritual. This conceptual model is summarized in Table 1 . For each conceptual domain, potential operational measures are proposed. Some of these measures were already in use at the Hanley Center at the time of the group discussions; others were not and were selected based on the literature review.
The model is holistic in that all the domains are hypothesized to be correlated with each other and all are considered important to recovery. However, it should be noted that not all domains will be relevant to all clients. For example, some addiction clients have co-occurring physical and psychological impairments wherein improvement may not be expected, and thus these domains would not be relevant elements of recovery for these individuals.
Definitions and Measurement of Domains
Each of the domains is defined below and the instruments that may be used to measure it are identified. All identified instruments have established reliability and validity.
Physical Domain
Recovery is indicated in part by improvement in overall physical health. This may be measured by a history and physical examination, blood panels, brain imaging, and self-report. History, physical exam, and blood panels may be used to measure general signs, symptoms, and chemistries indicative of health and illness. Brain imaging using scans such as SPECT, PET, MRI, and functional MRI may be used to assess the effects of abused substances on the brain. Finally, the self-report medical health subscale of the Addiction Severity Index (ASI) [ 11 ] may be used to measure the physical domain.
Bio-Marker Domain
Kraemer, Schultz, & Arndt [ 12 ] have defined a biomarker as a measurable characteristic of living tissues that indicates whether a specific event or process of medical interest has occurred, is occurring, or is likely to occur. Frank & Hargreaves [ 13 ] define a biomarker as a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention. The common theme captured in the definitions above is that biomarkers can provide readily accessible information regarding an individual's disease state.
We propose that relevant biomarkers for measuring recovery include neuropeptides and hormones. Specifically, we propose that the neuropeptides that may be used as indicators of substance use disorders and recovery are orexin (also known as hypocretin) and oxytocin. Neuropeptide transmitters are made exclusively in hypothalamic neurons and have extensive central nervous system projections. Harris & Aston-Jones [ 14 ] reveal a novel and important role for the orexin/hypocretin neuronal system in reward processing and addiction. The hypothalamus has been considered essential in motivational behaviors. Orexin is expressed in the lateral hypothalamus and has been implicated in the regulation of feeding behavior. Neurons expressing this neuropeptide have extensive projections to regions of the brain vital to behavioral responses to drugs of abuse, raising the possibility that these pathways are central when examining addiction [ 15 , 16 ].
Some neuropeptides affect adaptive central nervous system processes related to opiate, ethanol, and cocaine addiction. Oxytocin, a neuropeptide synthesized in the brain and released at the posterior pituitary, is also released within the central nervous system, where it acts to inhibit the development of tolerance to morphine. Oxytocin likewise inhibited behavioral tolerance to cocaine while increasing behavioral sensitivity to it, and inhibited tolerance to ethanol [ 17 ].
A hormone that may be measured to indicate recovery is cortisol. It has been determined that addiction patients appear to be biologically different from healthy subjects as indicated by greater cortisol blunting and other cortisol-related hormonal and psychological responses [ 18 ]. In fact, during early drug-quitting phases, cortisol levels were found to drop and be related to withdrawal distress and could be found to be associated with the attainment of abstinence [ 19 , 20 ].
Chemical Dependency Domain
This domain represents the extent to which the individual uses addictive chemicals. It may be measured using the alcohol and drug use subscales of the ASI, together with urine and breath screening. A five-panel urine quick screen may be administered to test for benzodiazepines, amphetamines, cocaine, THC, and opiates. If the result is positive, a quantitative test may be conducted for ethanol, amphetamine, barbiturate, benzodiazepines, cannabinoids, cocaine (metabolite), opiates, phencyclidine, methadone, propoxyphene, meperidine, and tramadol. A breathalyzer may be utilized for alcohol testing.
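The two-stage testing sequence described above, a qualitative five-panel screen followed by quantitative confirmation on any positive result, can be sketched as follows. The function name and the representation of results are assumptions for illustration, not the Hanley Center's actual laboratory protocol:

```python
# Illustrative screen-then-confirm logic for the two-stage urine testing
# described in the text. Panel contents follow the text; everything else
# (names, data shapes) is assumed for this sketch.

SCREEN_PANEL = ["benzodiazepines", "amphetamines", "cocaine", "THC", "opiates"]

CONFIRMATION_PANEL = [
    "ethanol", "amphetamine", "barbiturate", "benzodiazepines",
    "cannabinoids", "cocaine (metabolite)", "opiates", "phencyclidine",
    "methadone", "propoxyphene", "meperidine", "tramadol",
]

def confirmation_analytes(screen_results):
    """Given {substance: bool} results from the five-panel quick screen,
    return the analytes for quantitative testing, or [] if fully negative."""
    if any(screen_results.get(s, False) for s in SCREEN_PANEL):
        return list(CONFIRMATION_PANEL)
    return []
```

A single positive on any of the five screened substances triggers the full twelve-analyte quantitative panel; a fully negative screen ends the sequence.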
Psychological Domain
Many persons suffering from addiction enter treatment in a significantly cognitively impaired state. Thus, improvement in gross cognitive functioning is indicative of recovery. Basic cognitive functioning may be measured with one of two instruments: the Folstein Mini-Mental State Examination (MMSE) [ 21 ] or the Test of Nonverbal Intelligence (TONI-3) [ 22 ]. The MMSE assesses orientation, registration, attention and calculation, recall, and language. Alternatively, the TONI-3 is administered to persons who have a cognitive impairment or who speak a language other than English. The test requires no reading, writing, speaking, or listening by the participant. Cognitive impairment in individuals with normal intelligence can be assessed using the Shipley Institute of Living Scale [ 23 ].
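For concreteness, MMSE total scores range from 0 to 30, and a conventional cutoff commonly cited in the literature (though not specified in this article; programs may use others) flags scores below 24 as suggestive of impairment. A minimal sketch under that assumption:

```python
MMSE_MAX = 30           # MMSE total scores range from 0 to 30
IMPAIRMENT_CUTOFF = 24  # commonly cited cutoff; not specified by this article

def mmse_flags_impairment(score):
    """Return True when an MMSE total falls below the conventional cutoff."""
    if not 0 <= score <= MMSE_MAX:
        raise ValueError("MMSE total must be between 0 and 30")
    return score < IMPAIRMENT_CUTOFF
```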
Psychiatric Domain
This domain refers to the presence of a mental disorder. This may be measured by a diagnosis on Axis I of the Diagnostic and Statistical Manual of Mental Disorders [ 3 ]. Recovery would be indicated by a diagnosis of a disorder in remission. This domain may also be measured by the psychiatric health subscale of the ASI, which captures a somewhat different aspect of psychiatric functioning than does the DSM diagnosis.
Family/Social Domain
This domain includes interaction with family and engagement with community. It may be measured using the employment/self-support, family relations, and illegal activity subscales of the ASI, and the Work Interference, Family Problems, and Antisocial Practices scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) [ 24 ].
Spiritual Domain
This domain reflects the wide usage of Alcoholics Anonymous and other 12-step programs in the addiction rehabilitation community. It is defined as how an individual's way of life is reflected in thinking, speaking, and acting and the quality of one's relationships with oneself, others, and with a Higher Power. Spirituality may be measured with the Spirituality Self-Rating Scale [ 25 ] and the Spirituality Scale of the ASI-JCAHO Version [ 26 ].
Testing the Model
The next step in this research process is to test the hypothesized conceptual model. Such a test would involve collecting data using all of the proposed instruments at intake into a treatment program and at predetermined time points thereafter. Following data collection, confirmatory factor analysis would be used to examine (1) the extent to which the proposed measurable indicators are representative of their respective conceptual domains, (2) the extent to which changes in the indicators occur over time; and (3) the extent to which the conceptual domains are correlated with each other initially and over time.
Confirmatory factor analysis would yield both an estimate of how well the model fits the data and tests of specific hypotheses. For purpose (1) above, the null hypothesis is that the indicators within a domain are not correlated with each other. Failure to reject this null hypothesis would indicate the absence of an underlying concept, whereas rejection would support the construct validity of the underlying concept or domain. For purpose (2), the null hypothesis is that the indicators do not change over time as individuals proceed through the treatment and aftercare process. Rejection of this null hypothesis would suggest that the indicators are in fact indicative of recovery. For purpose (3), the null hypothesis is that the conceptual domains are not correlated; rejection would suggest convergent validity of the model. Additionally, the model could be further validated by correlating scores derived from the model with clinicians' independent ratings of the clients' degree of recovery (i.e., known-groups validity).
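Before a full confirmatory factor analysis, purpose (1) is often screened by inspecting the inter-indicator correlation matrix and an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch with synthetic data follows; the data, the three-indicator "domain", and the screening step are illustrative assumptions, not this study's analysis:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_var_sum / total_var)

# Synthetic domain: three indicators driven by one shared latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
indicators = np.column_stack(
    [latent + 0.5 * rng.normal(size=200) for _ in range(3)]
)

alpha = cronbach_alpha(indicators)             # internal consistency
corr = np.corrcoef(indicators, rowvar=False)   # inter-indicator correlations
```

With indicators sharing a strong latent component, alpha is high (above 0.8 for this synthetic example); uncorrelated indicators would drive it toward zero, consistent with the null hypothesis of no underlying concept.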
As a result of this validation process, it is likely that the model would be refined. Some of the indicators may turn out not to adequately represent their respective domains (i.e., indicators not correlated with their domains), and some of the domains may turn out not to adequately represent recovery (i.e., domains not correlated with other domains). Thus, the model would likely be reduced through empirical testing. Individual components of the domains and biomarkers would need to be checked for sensitivity and specificity, which would address effect size issues.

Discussion and Conclusions
This study has several limitations that should be noted. First, the participants represented a narrow selection of experts. Second, the study site, the Hanley Center, serves a fairly demographically homogeneous treatment population, which might bias the experts' views. Third, some of the conceptual domains have more support in the treatment literature than others; for example, the biomarker domain primarily has research but not treatment support; however the aim of the proposed model is to test the treatment support for this domain.
This conceptual model represents an initial effort to build upon and expand upon traditional biopsychosocial models of addiction such as the syndrome model proposed by Schaffer et al. [ 27 ]. The syndrome model proposes a common etiology (e.g., distal antecedents such as neurobiology and proximal antecedents such as biopsychosocial events) that creates multiple expressions of the disease of addiction such as drug and alcohol misuse, gambling, smoking, psychiatric co-morbidities, social misconduct and biological malfunctioning. The syndrome model describes addiction. This paper proposes a potential model of recovery by incorporating elements of the syndrome model and reversing this process to build a model of recovery. The recovery model proposes that the reversal of addiction can be captured biologically through the measurement of neuroadaptations and hormonal shifts, and psychosocially with paper and pencil measures across the chemical dependency, psychiatric, psychological, family/social, and spiritual domains.
Additionally, the proposed model expands upon traditional biopsychosocial models by adding the bio-marker domain and the spiritual domain. In testing the bio-marker domain, neuropeptides and neuroadaptations may be examined to identify certain genes, DNA and core RNA for biological elements that are diagnostic of addiction, representative of recovery, and predictive of relapse. Additionally, cortisol may be examined to see if there are relationships between this hormone, stress, addiction, and recovery. Adding the spiritual domain incorporates the work of Galanter [ 8 ] who looks at addiction and recovery by examining how spiritual interventions can be used to support recovery efforts.
If the hypothesized conceptual model is supported by empirical data, it could provide a starting point for the development of a comprehensive, unifying perspective of recovery and its measurable indicators that could potentially be used to assess recovery in future outcome studies. This model would yield a conception of recovery as a continuum, rather than as a dichotomy (i.e., a person is either in recovery or not). In this model, the greater the improvement over time on a greater number of indicators, the greater the "recovery" would be considered to be. Additionally, a validated unifying model could provide a common metric across treatment programs that could allow for direct comparisons of effectiveness.

Background
There is a lack of consensus in the addiction treatment literature regarding the definition of substance abuse "recovery".
Methods
This study utilized a review of the literature together with a participatory research design to construct a conceptual model of recovery from the perspectives of addiction treatment professionals, those recovering from addictions, and researchers.
Results
A multidimensional, comprehensive hypothetical model consisting of seven conceptual domains (physical, biomarker, psychological, psychiatric, chemical dependency, family/social, and spiritual) is presented. Each domain is operationally defined by identifying reliable and valid instruments that may be used to measure the domain. It is proposed that the conceptual model be tested using confirmatory factor analysis.
Conclusions
If empirically supported, this conceptual model would validate the hypothesized multidimensional nature of recovery and provide a potential means for assessing recovery in future treatment outcome studies.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
KD designed the study, supervised the literature review, led the panel discussions, and drafted the manuscript. BK participated in the panel discussions and contributed the conceptualization of the physical domain. PJK contributed the conceptualization of the bio-marker domain. All authors read and approved the final manuscript.
Authors' information
KD is the Director of Research at the Hanley Center, West Palm Beach, Florida. Before joining the Hanley Center, from 2005 through 2009, KD was the Senior Health Planner for the Palm Beach County Health Department, where she led Mobilizing for Action through Planning and Partnerships (MAPP) for Palm Beach County, Florida. KD provided epidemiological reports on 170 health indices for the county and compared these findings with state and national figures. These reports are considered "Best Practices" by the Florida State Department of Health. Starting in 2004, KD was an Assistant Professor at the University of Miami Miller School of Medicine, Department of Epidemiology and Public Health, and is currently the Director of the Comprehensive Substance Abuse and Mental Health Research Center at Hanley (under the University of Miami's auspices). Since 2006, KD has also been a Clinical Professor in the College of Osteopathic Medicine at NOVA Southeastern University. For over twenty years, Dr. Dodge has built expertise in the chemical dependency industry in substance abuse, mental health, community health, and sexually transmitted infections. Dr. Dodge is a quantitative and qualitative epidemiological consultant, has published in numerous peer-reviewed journals, and has conducted and published pioneering studies in specific substance abuse areas.
BK is Chief Executive Officer and Medical Director of Research of the Hanley Center. She has been practicing medicine for over thirty years. She is American Board of Addiction Medicine certified as well as board certified in Family Practice. Prior to joining Hanley Center in 2001, BK was the Medical Director for the Center for Alcohol and Drug Studies. From 1987 to 1999, she was the Medical Director at Cornerstone, a substance abuse treatment program at Wellington Regional Medical Center, and maintained a private family practice that included pediatrics and geriatrics. BK served as Clinical Assistant Professor of Behavioral Medicine at Nova Southeastern University in North Miami Beach from 1994 to 2000. She also taught addiction medicine to 1st and 2nd year students at the University of Miami Miller School of Medicine on the FAU Campus. In 1999 she developed the Inner City Outreach Program and the Medical Mobile Unit of West Palm Beach, the first such project undertaken in Palm Beach County. BK is a member of the American Medical Association, the American Medical Women's Association, the American College of Family Practitioners, the American Osteopathic Association and the American Society of Addiction Medicine. She has lectured on numerous topics, including the bio-chemistry of the brain as it relates to addiction. In 2008 the Palm Beach County Medical Society honored BK as a Heroes in Medicine award recipient for her leadership, active community philanthropy and commitment to the field of medicine. BK received a Bachelor of Science degree and was class valedictorian at Manhattanville College in Purchase, New York. At the University of Miami (Florida) she graduated Phi Kappa Phi with a Master of Science degree. After studying at Des Moines University, Health Sciences, she served an internship in Osteopathic Medicine in North Miami Beach. She completed her family practice residency at Miami's Jackson Memorial Hospital. 
In 2007 she was certified by the Medical Review Officer Certification Council (MROCC) as a Medical Review Officer. Certified MROs evaluate drug and alcohol test results in the workplace.
PJK is Associate Professor in the Department of Molecular Therapeutics at Scripps Florida. His research focuses on understanding the neurobiological mechanisms underlying psychiatric disorders such as drug addiction, depression and anxiety, with a particular emphasis on the role of nicotinic acetylcholine receptors (nAChRs) in these processes. He employs a multidisciplinary approach that includes complex behavioral paradigms, physiological analyses, and molecular biological techniques. Current projects include the utilization of vector-based delivery of short-interfering RNAs (siRNA) into the brains of rodents to identify novel signaling cascades that may play a role in addiction-like behaviors. PJK received a B.A. in Biochemistry from Trinity College Dublin, Ireland, and a Ph.D. in Neuropsychopharmacology from King's College London, UK. He has received the Post-Doctoral Fellowship Award (2001-2003) from The Peter F. McManus Charitable Trust; the Young Scientist Travel Award (2001) from the International Society for Neurochemistry; the Early Career Investigator Award (2004) from the National Institute on Drug Abuse; the Young Investigator Travel Award (2004) from the American College of Neuropsychopharmacology; and the Young Investigator Award (2004-2006) from the National Alliance for Research on Schizophrenia and Depression (NARSAD).

Acknowledgements
Miriam Potocky participated in the panel discussions and provided technical and writing assistance. John Dyben, Caterina Iapolo, Sandra Betancourt, Juan Harris, Jeannie Provost, and Linda Spencer participated in the panel discussions. This project was funded by the Hanley Center.
Subst Abuse Treat Prev Policy. 2010 Dec 7; 5:31
Background
Diarrhea remains a major health burden in resource-limited nations, contributing substantially to morbidity and mortality. Worldwide, diarrheal disease is estimated to cause 1.87 million deaths each year in children under five, accounting for 19% of all deaths in this age group[ 1 ]. Nearly 80% of these deaths occur in developing nations. Diarrheal illness is common in Nicaragua, particularly in the North Atlantic Autonomous Region (Región Autónoma del Atlántico Norte, RAAN), where household sanitation measures are not in common use. The Nicaraguan Ministry of Health (MINSA) has found that the RAAN has one of the highest incidences of diarrheal disease in Nicaragua. Mortality due to diarrhea accounts for 7.3% of deaths per year in the RAAN, compared to 1-2% of deaths in other regions of Nicaragua[ 2 ].
The RAAN is an extensive remote region in northern Caribbean Nicaragua, with a history of geographic and political isolation. One unpaved road connects the RAAN with the populated Pacific coastal areas of the country. The inland Tasba Pri region, with the central municipality of Sahsa, is located approximately 100 km from the Atlantic coast and 400 km east of the capital Managua. While Nicaragua as a whole is primarily Hispanic Mestizo, the RAAN population is ethnically and culturally diverse, with multiple indigenous groups, including Miskito, Mayagna, and Creole. Many populations relocated to the Sahsa region as refugees from the "crossfire" of the conflicts of the Sandinista Revolution and Contra war of the 1970s and 1980s[ 3 ]. Many communities in the municipality of Sahsa are accessible only by canoe or footpath, and lack communication services. The primarily agricultural region alternates between the harsh realities of dry months with little water available and a rainy season in which an overabundance of water leads to flooding and muddy conditions. Due to the political reality, the geographic isolation of the region, and the distinct ethnic groups, the inhabitants of Sahsa have been marginalized, with significant health inequities.
The RAAN was further affected by Hurricane Felix, a Category 5 hurricane, which struck the Atlantic coast of Nicaragua in September, 2007. Hurricane Felix accentuated the burden of diseases in the region, and in particular diarrheal diseases. In response, investigators from the University of North Carolina at Chapel Hill (UNC) and the Universidad Autónoma de Nicaragua, León (UNAN) have partnered (Collaborative Sahsa Health Initiative, CSHI) to identify the regional health needs, with the future goal of specific interventions. This builds upon the existing UNAN surveillance systems of western Nicaragua[ 4 ].
The aim of this study was to estimate the prevalence of diarrheal disease in the Sahsa region, and identify associations with reported water sanitation and hygiene infrastructure in this region of Nicaragua. Specifically, this study addresses the impact of modifiable exposures (water sanitation and waste management) on the prevalence of diarrheal disease.
Methods
Study design
A two stage cluster sampling method (known as 30/7)[ 5 ] was used for random population-based selection of households from the 54 communities of the Sahsa region. In the first stage, 30 communities (primary sampling units) were selected with probability of selection proportionate to population size with replacement[ 5 , 6 ]. Three of the thirty selected primary sampling units were excluded in the field due to impossible travel conditions (2 units) and political instability (1 unit), resulting in a sampled population of 27 primary sampling units in 16 communities. In the second stage, 7 households were selected from each of the primary sampling units. Because satellite imagery and detailed maps were unavailable in this region, households within each cluster were randomly selected for interview using a "compass" method undertaken by the interview team on site in the community. In this method, starting at the most centralized school or church, a pen was spun to determine the direction. Surveyors would walk in that direction, down roads and across fields, until the first residence was reached. After each survey, again a pen was spun to determine the direction of travel to find the next household.
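The first sampling stage (selection with probability proportionate to population size, with replacement) can be illustrated with a short sketch; the community sizes below are hypothetical, and this is only an illustration of the principle, not the field procedure itself.

```python
import random

def pps_with_replacement(populations, n_clusters, seed=0):
    """First-stage 30/7 selection: draw cluster indices with probability
    proportional to community population size, with replacement.
    All community sizes passed in are hypothetical."""
    rng = random.Random(seed)
    total = sum(populations)
    # cumulative population boundaries, one per community
    cumulative, running = [], 0
    for size in populations:
        running += size
        cumulative.append(running)
    selected = []
    for _ in range(n_clusters):
        draw = rng.uniform(0, total)
        # the first community whose cumulative size reaches the draw
        selected.append(next(i for i, c in enumerate(cumulative) if draw <= c))
    return selected

# five hypothetical communities; select 30 primary sampling units
clusters = pps_with_replacement([120, 450, 80, 300, 50], 30)
```

Because selection is with replacement, a large community can be drawn more than once, in which case it contributes multiple primary sampling units, as happened here (27 units across 16 communities).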
Study instruments
Women were interviewed in July of 2009 about household characteristics, sanitation practices, diarrhea, family planning, and health care access. Interviews were conducted in Spanish or Miskito, depending on the language of the woman. Social promoters supported the interviews in Miskito. Study participants were questioned about the prevalence of diarrhea in the household in the past two weeks using the standard CDC instrument. For this study, diarrhea was coded as a dichotomous variable and was defined as 3 or more loose or watery stools or any stool with blood in a 24-hour period. Two week prevalence of diarrhea was assessed at the household level, as many of the risk factors examined in this analysis occur at the level of the household, as opposed to the level of the individual. The study protocol was approved by the Institutional Review Boards at the University of North Carolina, Chapel Hill and the Universidad Nacional Autónoma de Nicaragua, León. Participants gave verbal consent prior to enrollment in the study and collection of data.
Water and sanitation infrastructure was measured through a series of questions about drinking water source, water treatment, and latrine use. Respondents indicated where they obtained their water (indoor or communal piping, private or communal well, river or creek, natural spring, other) and how they treated it prior to drinking (no treatment, chlorine, filter, other). Latrine use was assessed through two questions that ascertained if the household used a latrine (no, yes) and if so, how many times the latrine overflowed in the past year. Latrine use was recoded as a nominal categorical variable to combine these attributes (no latrine, latrine with no overflow, latrine with at least one reported overflow).
The United Nations standardized Poverty Index, validated in Nicaragua[ 4 ], was used to assess socioeconomic status and living conditions. This included measures of household structure, access to potable water, sanitation, number of individuals in the household, education level, and employment. The presence of tambos , stilts that support a house, was recorded as the proxy for houses built in a known flood zone. The remoteness of each household was estimated by the reported travel time to the nearest health center (continuous variable, recorded in minutes).
Statistical analysis
Prevalence differences (PD) and 95% confidence intervals were estimated to examine associations of diarrhea prevalence with water treatment and latrine use. Covariates were assessed for effect measure modification by comparing stratified PD estimates and testing for heterogeneity using the Wald heterogeneity test p-value with an a priori criterion of 0.20. Potential confounders were identified using directed acyclic graphs (DAGs) informed by recent literature on the subject[ 7 , 8 ]. Potential confounders were assessed for inclusion in the linear regression models through backwards elimination, where variables were removed from the full models one at a time in the order of their change-in-estimate from the singly adjusted prevalence differences. If elimination of the variable resulted in a change in the PD estimate generated by the full model of greater than 0.02, the variable was retained in the model[ 9 ]. Data were analyzed in SAS 9.2 (SAS Institute Inc., Cary, NC).
Results
Surveyors from UNC and UNAN accompanied by a local health promoter conducted 189 interviews in the selected 27 primary sampling units comprising 16 communities in the Sahsa region in July 2009. Of those eligible to participate, 94.5% completed the survey (189/200). Nine women refused the interview, while one interview could not be conducted due to language barriers, and one interview was incomplete.
Study participants were predominantly of Mestizo ethnicity (Table 1 ). The majority (n = 145, 79%) of respondents indicated that the head of household had not received education beyond primary school. Nearly half (n = 77, 41%) of the households were in flood zones, as estimated by the presence of tambos (stilts). Half (n = 85, 47%) of the study participants lived in remote areas greater than one hour away from the nearest health center. Households interviewed during the study obtained water from piping (n = 78, 41%), wells (n = 60, 32%), rivers (n = 30, 16%) and springs (n = 17, 9%) and the majority (n = 103, 57%) of families used no method of water purification. The households that did employ a method of water purification used chlorine or water filters. The majority of households (n = 146, 78%) used a latrine, and of those families, 51 (37%) experienced an overflow of the latrine in the previous year.
In the study sample, 41% (n = 77) of households reported an instance of diarrhea in the past two weeks. The two week household prevalence of diarrhea was lower in households using water purification, such as chlorine and filters (N = 29, 37%), than for households drinking untreated water (N = 40, 41%) (Table 2 ). Families drinking from wells and rivers had a prevalence of diarrhea in the previous two weeks 10% and 16% higher, respectively, than those drinking from piping, though both 95% confidence intervals (CI) encompassed the null. Households reporting that they did not have a latrine were more likely to report diarrhea in the previous two weeks. Forty-nine percent of households without a latrine reported at least one case of diarrhea within the past two weeks, whereas only 38% of households with latrines reported diarrhea (Table 2 ).
Five potential confounders of the relationship between water treatment method and diarrheal disease were assessed: water source, residence in a flood zone, socioeconomic status (head of household education level), number of individuals in the household, and time required to travel to the health center (an indicator of remoteness). In the linear regression model, after backward elimination, only water source remained in the model. The adjusted PD and 95% CI comparing two-week prevalence of diarrhea among households who treated water with filters to households who drank untreated water was -0.12 (-0.33, 0.10) (Table 3 ). The PD for chlorine was -0.05 (-0.21, 0.11).
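A crude prevalence difference with a Wald-type 95% confidence interval, as used throughout this analysis, can be computed directly from household counts. The sketch below uses hypothetical counts chosen only to approximate the reported percentages; it does not reproduce the adjusted regression models.

```python
import math

def prevalence_difference(cases_exp, n_exp, cases_unexp, n_unexp):
    """Crude prevalence difference (exposed minus unexposed) with a
    Wald 95% confidence interval based on the normal approximation."""
    p1 = cases_exp / n_exp
    p0 = cases_unexp / n_unexp
    pd = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n_exp + p0 * (1 - p0) / n_unexp)
    return pd, (pd - 1.96 * se, pd + 1.96 * se)

# hypothetical counts: 29/78 treated-water households reporting diarrhea
# vs 40/97 untreated-water households
pd, (lo, hi) = prevalence_difference(29, 78, 40, 97)
```

A negative PD indicates lower prevalence among the exposed (here, water-treating) households; a confidence interval spanning zero corresponds to the "encompassed the null" language used above.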
For assessment of the relationship between latrine use, latrine overflow, and diarrhea, three potential covariates were considered: living in a flood zone, number of individuals in the household, and socioeconomic status (head of household education level). After backwards elimination, none of these covariates remained in the final model. The analysis shows that latrine overflow was associated with a higher prevalence of diarrhea (PD = 0.19, 95% CI = 0.02, 0.36). There was some evidence of modification of the effect of having a latrine on the prevalence of diarrhea by latrine overflow. The PD comparing owning a latrine that did not overflow to not owning a latrine was -0.17 (-0.35, 0.01), while the PD comparing owning a latrine that did overflow to not owning a latrine was 0.02 (-0.18, 0.22) (Table 4 ).
Discussion
This population-based, cross-sectional study in the isolated Sahsa municipality of Nicaragua found a high prevalence of household diarrhea (41%), which is consistent with data from the Nicaragua Ministry of Health (MINSA)[ 2 ]. Importantly, the analysis suggests that latrine ownership with no reported overflow was associated with reduced household prevalence of diarrhea.
The use of prevalence differences in our analysis allows prediction of the benefit of a change in exposure[ 10 ], expressed as the number of needed interventions, or number needed to treat (NNT), to reduce the household diarrheal burden by one case. Our data estimate that the NNT for water treatment with the use of filters is 17 households, and for chlorine treatment 20 households. Similarly, the NNT is 10 for latrine use, and 6 for use of well-designed latrines which do not overflow. This suggests that an important reduction in diarrheal burden can be obtained with changes in the sanitation infrastructure.
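The NNT figures above follow from the reciprocal of the absolute prevalence difference; for example, the PD of -0.17 for a non-overflowing latrine corresponds to roughly 6 households:

```python
def number_needed_to_treat(pd):
    """Households needing the intervention to avert one household
    report of diarrhea: NNT = 1 / |PD| (PD must be non-zero)."""
    return 1 / abs(pd)

# PD of -0.17 (latrine without overflow vs no latrine) from this study
nnt = number_needed_to_treat(-0.17)  # about 5.9, i.e. roughly 6 households
```

The same calculation with a PD of -0.10 (latrine use) gives an NNT of 10, matching the figure quoted above.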
Previous investigations addressing water source, water treatment, and latrine use have led to inconsistent conclusions on associations with diarrheal disease[ 11 - 17 ]. A recent review found that point-of-use chlorine treatment was associated with reduction in diarrhea in 9 of 10 reviewed studies[ 14 ]. In only 5 of these, however, was the association statistically significant. Sand filtration systems have been shown to be effective at reducing diarrhea [ 16 , 17 ]. Our results did identify an association of water source and treatment with reduced prevalence of diarrhea, but these findings were not statistically significant. For sanitation infrastructure, two studies have shown a reduction in diarrhea associated with latrine access[ 13 , 18 ]. Our results are similar in that we found a reduction in diarrhea for households with access to a latrine. These results, however, were not statistically significant. A statistically significant association was found for those with access to a properly situated latrine that does not overflow.
The discordant results may stem from the specific emphasis of the studies. While water purification and latrine use are factors which protect against diarrheal disease, few studies have addressed these factors in a remote, impoverished, and high prevalence region. This study suggests that proper latrine function helps prevent diarrhea even in areas with many other risk factors, highlighting the importance of proper placement and maintenance of latrines. Ownership of a latrine that overflowed was associated with little to no change in the prevalence of household diarrhea compared to not owning a latrine (PD: 0.02, 95%CI: -0.18, 0.23), whereas ownership of a latrine that did not overflow was associated with an appreciable difference (PD: -0.17 95% CI: -0.36, 0.01). The identification of the potential causal interaction between latrine ownership and overflow may have important implications for directing future interventions. Without the interaction term, the PD model treats all households reporting latrines with equal risk, regardless of the proper functioning of the latrine, which may lead to inaccurate effect estimates.
The main strengths of this study include the population-based sampling of the isolated Sahsa region and the high subject response rate. The incorporation of local health promoters was a key factor in the high response rate. The assessment has several limitations. The study region is isolated and the travel required to conduct interviews in selected communities was challenging. Three initially selected communities could not be reached due to travel conditions and safety concerns. As detailed maps of the region were not available, second stage sampling used the "compass" approach for household selection. By allowing interview teams to select interview locations in this way, selection bias may have been introduced[ 6 ]. A further potential bias is that data are based on self-report. No inspections of the home were made to verify answers on water treatment or latrine set-up.
Conclusions
This study provides a step forward in understanding the diarrheal disease burden in northern Nicaragua, which is potentially generalizable to other remote regions of Central America. It is suggested that low cost interventions to prevent latrine overflow may reduce the incidence of diarrheal disease, even in the presence of other risk factors. These findings provide support to the notion that public health interventions should be appropriate for the regional environment. This information may be helpful in directing international planning for infrastructure improvements aimed at addressing extreme poverty, such as those outlined in the United Nations millennium development goals adopted in 2000[ 19 ]. Simple, low cost interventions that improve water and latrine infrastructure may reduce the prevalence of diarrheal disease in the isolated regions of Nicaragua and Central America.
Background
The isolated northern region of Nicaragua has one of the highest rates of diarrheal disease in Central America. Political and environmental hardships faced by inhabitants of this region are contributing factors to this health inequity. The aim of this study was to assess the relationship between water and latrine infrastructure and the prevalence of diarrhea in this region.
Methods
A population-based, cross-sectional survey of women of reproductive age was conducted in the Sahsa region of northern Nicaragua in July, 2009. Households were selected by two stage cluster sampling methodology. A questionnaire was administered in Spanish and Miskito with assessment of household and socioeconomic conditions, sanitation practices, and health care access. Diarrhea prevalence differences at the household level over a two week reporting period were estimated with a standardized instrument which included assessment of water treatment and latrine use and maintenance.
Results
There were 189 women enrolled in the current study. The use of water purification methods, such as chlorine and filters, and latrine ownership were not associated with reduced prevalence of household diarrhea in the two week reporting period. Latrine overflow, however, was associated with an increased prevalence of diarrhea during the same two week period [adjusted prevalence difference and 95% CI: 0.19 (0.03, 0.36)].
Conclusions
Simple, low cost interventions that improve water and latrine infrastructure may reduce the prevalence of diarrheal disease in the isolated regions of Nicaragua and Central America.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SD and JE were involved in study design, helped with data collection, performed data analysis, and drafted the manuscript. JH participated in study design and helped to draft the manuscript. RP was involved in study conception and coordination. DW was involved in study conception and design and helped with data collection. DM was involved in study conception and coordination and was involved in critical revisions of the manuscript. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-698X/10/30/prepub
Acknowledgements
Special thanks to the 2009 CSHI team: Omar Awan, Abigail Corry, Sophia Delpe, Mary Dickinson, Meredith Gilliam, Karinne Jiron, Maria Jose Lopez Espinoza, Bleysing Loaisiga, Luvianka Lopez Cruz, Nayrobi Luna Amador, Jessica Mercedes Hernandez Quiroz, Bomar René Méndez Rojas, Jacqueline Shafeei, Emily Treleaven, Jazmin Williams.
This research was supported by a UNC School of Medicine STRT grant (T35 DK07386-29), and an Infectious Diseases Society of America Medical Scholarship.
BMC Int Health Hum Rights. 2010 Dec 8; 10:30
Background
The syndrome of 22q11.2 microdeletions comprises a set of developmental abnormalities, primarily congenital heart and craniofacial defects. The syndrome is characterised by various phenotypic changes described earlier in the DiGeorge, Takao and Shprintzen syndromes. It is the most common microdeletion in the population (1/4000 births) [ 1 ]. The 22q11.2DS is inherited in an autosomal dominant manner [ 2 ].
In 1993, nosological entities caused by microdeletions in this region were given the common name CATCH 22, an acronym formed from the first letters of the phenotypic traits accompanying this syndrome: C - cardiac defect , A - abnormal face , T - thymic hypoplasia , C - cleft palate , H - hypocalcaemia [ 3 ]. However, the spectrum of clinical features of this syndrome is considerably wider than that indicated by the name CATCH. In the light of the latest investigations, the 22q11.2 microdeletion also occurs in patients suffering from Cayler syndrome and Opitz G/BBB syndrome. Therefore, the name CATCH 22 was replaced by 22q11.2 microdeletion syndrome, under which the condition is known today [ 4 ]. Dysmorphic traits are characteristic of the 22q11.2 microdeletion syndrome. The most common ones include: elongated face, low-set small dysplastic ears, microstomia, small teeth, congenital tooth or enamel agenesis, almond-shaped eyes, hypertelorism, prominent long bulbous nose, retro- and micrognathia, short neck and characteristic arachnodactyly. The above-listed traits can be clearly pronounced or barely noticeable, making phenotypic diagnostics very difficult.
Approximately 180 signs of the 22q11.2DS have been described to date. These include abnormalities of the urinary-reproductive, skeletal, gastrointestinal, nervous and immunological systems as well as psychiatric disturbances. Moreover, considerable changes in the phenotype expression are observed in persons with a deletion of the same size, belonging to the same family or even in monozygotic twins [ 5 ]. In the light of these facts, it is difficult to determine the correlation between genotype and phenotype in patients with 22q11.2DS [ 6 ]. Causes can be attributed to: heterogeneity, pleiotropy, variations in mutation penetration, varying impact of environmental effects as well as other factors or events affecting the foetus [ 7 ].
In 90% of patients, the identified microdeletion covers a region of 3 Mbp encoding 30 genes, referred to as the typically deleted region (TDR), e.g. DGCR6, PRODH, IDD, TUPLE1, UFDL1, TBX1, ZNF74, CRKL , whereas a microdeletion of 1.5 Mbp including 24 genes has been found in 8% of patients. A minimal DiGeorge critical region (MDGCR) covers about 0.5 Mbp and several genes: CTP, CLTD, TUPLE1, UFD1, TBX1, CDC45 [ 8 ]. The remaining 2% include patients with other chromosomal aberrations (e.g. del10p) [ 9 ]. Interestingly, the intensity of clinical symptoms is not correlated with the size of the microdeletion [ 10 ]. Atypical deletions in the 22q11.2 region are rare. A similar phenotype can also be found in patients with the chromosome 22q11.2 microduplication syndrome.
The 22q11.2 microdeletion accounts for 5% of all heart defects in newborns [ 11 ]. In approximately 10-20% of patients the 22q11.2 microdeletion does not originate de novo but is also found in one of the parents, who was not earlier identified as the mutation carrier [ 12 ]. A phenotype similar to the 22q11.2DS may also result from environmental factors such as maternal diabetes or alcohol abuse affecting the foetus, as well as from increased or decreased levels of retinoic acid (vitamin A) in the mother's organism [ 13 ].
The aim of this study was to determine the frequency of the 22q11.2 microdeletion in (i) 87 children with a congenital heart defect (35 conotruncal and 52 non-conotruncal cases) and at least one additional 22q11.2DS feature, and (ii) the parents of the deleted children. We also attempted to correlate the most frequent heart defects in both groups with the characteristic phenotype of patients with the 22q11.2 microdeletion, in order to establish selection criteria for at-risk patients.
Methods
The FISH (fluorescence in situ hybridization) method was used in a group of 87 children with a congenital heart defect and at least one additional 22q11.2DS feature: facial dysmorphy, thymic or parathyroid gland hypoplasia or aplasia, cleft palate or lip, hypocalcaemia during the neonatal period, or accompanying abnormalities of the urinary-reproductive, skeletal, gastrointestinal or nervous systems. The patients were divided into two groups: with conotruncal (n = 35) and non-conotruncal heart defects (n = 52). The examined group comprised 39 girls and 48 boys. The average age of all examined children was 2 years (0-6 years); thirty-one patients (35.63%) were newborns.
Additionally, seventeen of the deleted children's parents were included in the study. In one case, only one of the parents was examined. Having obtained a written consent of the parents, blood samples for genetic analyses were taken from the patients. The following analyses were performed: case history analysis, cardiologic examination and detailed pedigree and karyotype analysis as well as object examination together with phenotype assessment.
In all cases, the heart anatomy and physiology were diagnosed by echocardiography. In complex cardiac defects cardiac catheterization was performed. The thymus and parathyroid glands were visualized by ultrasound, and CT or MRI. Assessment of parathyroid dysfunction included detailed clinical history regarding the symptoms of hypocalcaemia and the following biological features: concentration of total calcium (normal range 2.2-2.6 mmol/l), phosphate (normal range 0.8-1.45 mmol/l) and ionized calcium (normal range 1.17-1.30 mmol/l). Low calcium levels in the blood indicated hypocalcaemia (total calcium <2.2 mmol/l; ionized calcium <1.17 mmol/l).
Heparinised lymphocytes of venous peripheral blood (0.5-1 ml each) stimulated with phytohemagglutinin were used for the cytogenetic studies. Isolated cells were grown and harvested in accordance with standard procedures. The obtained chromosome preparations were stained with the GTG standard method [ 14 ]. Fluorescence in situ hybridization using DiGeorge (Tuple) Region Probe Dual Colour (Qbiogene) was performed according to the procedure established by the manufacturer. This probe is specific to the HIRA (TUPLE1, DGCR1 ) region at 22q11. The hybridization signals were documented with a Zeiss-Axiophot fluorescent microscope and analysed with CytoVision 3.52 software.
The study was approved by the Ethics Committee of the Poznan University of Medical Sciences.
Results
Microdeletions in the region q11.2 of the 22 chromosome were identified in the group of fourteen patients - thirteen children with a congenital heart defect (conotruncal and non-conotruncal) and at least one additional trait from the 22q11.2DS spectrum (14.94% of children from the analysed group) and one parent of a deleted child. Patients were classified into 2 groups based on the conotruncal and non-conotruncal cardiac defects. Microdeletion of 22q11.2 occurred in 20% and 11.54% of the first (7/35 conotruncal patients) and the second group of children (6/52 non-conotruncal patients) respectively. The congenital heart defects and details of phenotypic abnormalities are described in Table 1 .
The analysed group of patients with the microdeletion comprised nine girls and four boys. The mean age on the day of examination was one year and two months, with seven patients diagnosed in the first month, three - in the first year and the remaining patients in the second, third and sixth year of life. The performed karyotype analysis of all patients did not reveal any abnormalities regarding the number and structure of chromosomes.
Microdeletion of 22q11.2 was detected in 1 of the 17 examined parents (the parents of nine children). The parent with the microdeletion did not have any diagnosed heart defect and exhibited only slight dysmorphism: ear dysplasia and arachnodactyly. These features were more pronounced in his child. Microdeletion was not detected in any other parent, which indicates that in the case of 7 children the 22q11.2 microdeletion originated de novo .
Pedigree analysis of all the patients revealed a family history of congenital heart defect in 14 cases (16.09%). Parents of the deleted children did not have any congenital defects, including heart defects, or other principal traits characteristic of the 22q11.2DS.
Discussion
Microdeletion of 22q11.2 was detected in 14.94% of the examined patients (Table 2 ). Other authors indicate a wide range of prevalence rates depending on different selection criteria [ 15 - 19 ]. The frequency of 20% and 11.54% was observed in the first and second group of patients respectively. The published data reveal a frequency of 22q11.2 microdeletion in conotruncal and dysmorphic patients at the level of 15.8% and 25% respectively [ 19 , 20 ]. Our data show an increased prevalence of 22q11.2 microdeletion (11.54%) in patients with a non-conotruncal heart defect and at least one typical feature of this syndrome compared to other studies, which reported 8.3% and 10% [ 18 , 21 ]. As postulated by Tobias et al., these results probably support expanding the clinical and phenotypic criteria used in screening of patients at risk for the 22q11 microdeletion [ 22 ]. It should be mentioned that data on the occurrence of this deletion in patients with conotruncal or non-conotruncal heart defects without the characteristic phenotype show a very low percentage of the mutation (0%, 5.7%, 6.5%) [ 18 , 23 , 24 ]. However, dysmorphic features may be overlooked and extracardiac defects (i.e. mental retardation) may appear later.
The obtained frequencies of occurrence of the remaining heart defects were similar to the data derived from the European multi-centre research on the spectrum of phenotypic traits in patients with 22q11.2DS as well as from published literature data. As indicated in the literature the most frequent heart defects in children with 22q11.2DS include: tetralogy of Fallot, ventricular septal defect, interrupted aortic arch, pulmonary valve atresia with ventricular septal defect, persistent truncus arteriosus, transposition of the great arteries and atrial septal defect [ 17 , 18 , 25 ].
Facial dysmorphia is another typical feature of 22q11.2DS. Our study found dysmorphic features ranging from subtle to strongly pronounced, consistent with the results of other researchers (Tables 1 , 2 ) [ 17 , 25 , 26 ]. In newborns with microdeletions, problems with milk suckling and regurgitation, associated with palatopharyngeal insufficiency, are often observed. However, the problems with food intake are not necessarily caused by cleft lips and palate. It has been estimated that delayed speech development and rhinolalia may occur in up to 50% of patients in this group [ 27 ].
In many children with 22q11.2DS and disturbed thymus function, decreased immunity, an increased proportion of complications after heart operations and a higher mortality rate are observed [ 28 ]. For this reason it is believed that these children should be vaccinated at a later date than recommended in the vaccination schedule. Only in about 5% of patients with the 22q11.2DS microdeletion might bone marrow transplantation or thymus transplantation be necessary [ 29 ]. It has also been shown that thymus dysfunction predisposes patients with the deletion to different autoimmune diseases, i.e. juvenile arthritis, thrombocytopenia and Graves-Basedow disease [ 30 ]. In our group of patients, thymic hypoplasia coexisted with typical dysmorphic changes only in one child. In two patients with thymic hypoplasia, no signs of facial dysmorphy were observed. In the two remaining cases the changes were considerably milder.
Another sign complicating the course of the neonatal period in the examined patients with 22q11.2DS was hypocalcaemia [ 31 ]. It should be emphasised that in our studies, thymic or parathyroid gland hypoplasia as well as hypocalcaemia occurred exclusively in the group of children with the 22q11.2 microdeletion. A similar relationship was found in experiments conducted by Koch et al., who carried out comparative investigations of calcium and thyroid hormone levels in two groups of adults with and without microdeletion. One group comprised patients with confirmed 22q11.2DS and congenital heart defect secondary to the conotruncal heart defect (tricuspid atresia, tetralogy of Fallot, pulmonary atresia with ventricular septal defect, interrupted aortic arch type B), while the other group was formed by patients with similar heart defects but without confirmed microdeletions [ 31 , 32 ]. Therefore, thymic or parathyroid gland hypoplasia coexisting with a congenital heart defect should indicate the need to carry out genetic examinations for 22q11.2DS.
Currently, more attention is being devoted to problems of a psychiatric, psychological and behavioural nature in patients with the 22q11.2 microdeletion. In the examined group of children with 22q11.2DS, one patient was diagnosed with mild mental retardation and another with delayed psychomotor development. However, it is worth emphasising that the majority of children were examined during the neonatal period, when it is difficult to determine unequivocally whether the psycho-physical development of a child is normal. According to investigations carried out by other researchers, the average intelligence quotient (IQ) in patients with the deletion amounts to 75, and abnormal development is observed in approximately 35-40% of patients with the microdeletion [ 33 ]. Most frequently, problems with learning and abstract thinking are observed [ 34 ]. Therefore, in normally developing newborns and infants with 22q11.2DS, it is impossible to rule out abnormal development in the future, which could be attributed to the 22q11.2 microdeletion. This remark also applies to psychiatric diseases. An increased incidence (up to 20%) of depression, schizophrenia and bipolar disorder has been demonstrated in adult patients with 22q11.2DS. A hemizygous deletion on chromosome 22 has been found in 2-5% of patients suffering from schizophrenia [ 35 ].
Diagnosis based exclusively on the clinical picture may be difficult because the 22q11.2 microdeletion phenotype is not always fully expressed: dysmorphy may vary from very pronounced to quite discreet, and patients may show atypical signs. Our studies indicate that it is necessary to test for the 22q11.2 microdeletion not only in children with conotruncal congenital heart defects but also in patients with other heart defects coexisting with at least one feature from the 22q11.2DS spectrum.
Literature data reveal that about 10-20% of patients inherit the 22q11.2 microdeletion from one of the parents [ 12 ]. Frequently, the phenotype of affected parents exhibits no or only discreet traits of 22q11.2DS. This can probably be attributed to the variable penetrance of this mutation. It is evident from studies of monozygotic twins that a 22q11.2 microdeletion of identical size can cause varying phenotypic consequences [ 36 ]. On the other hand, studies conducted by Adeyinka et al. suggested a familial origin of small-sized deletions. Exceptionally large differences may concern congenital heart defects [ 37 ]. Genetic analyses of the 22q11.2 region carried out in two groups of patients with familial incidence of congenital heart defects revealed that this change is inherited in 44% and 13% of patients respectively [ 27 , 38 ].
The high frequency of 22q11.2 microdeletions originating de novo might suggest considerable genomic instability in this region of chromosome 22. The genetic mechanism behind the microdeletion is based on nonallelic homologous recombination (NAHR). The same aetiology has been observed in the case of other microdeletions - Angelman (15q11-q13 microdeletion), Prader-Willi (15q11-q13 microdeletion) and Williams-Beuren (7q11.2 microdeletion) syndromes [ 7 ].
It is believed that some of the patients with the 22q11.2DS phenotype do not have a chromosomal microdeletion but are likely to be carriers of mutations in one or more genes [ 8 ]. This has been observed in other microdeletion syndromes, for example the Williams-Beuren and Wolf-Hirschhorn (4p16.3 microdeletion) syndromes. In the majority of cases, the 22q11.2 microdeletion locus comprises approximately 30 genes. Comprehensive functional studies on animal models revealed that TBX1 is the only gene whose haploinsufficiency results in the occurrence of a phenotype characteristic of 22q11.2DS [ 39 ]. The TBX1 gene encodes a phylogenetically conserved transcription factor which takes part in the regulation of developmental processes. The extensive mechanisms in which it is involved create many possible routes for the development of abnormalities caused by a lack, excess or incorrect structure of the TBX1 product. The discovery of a mutation in the TBX1 gene in a patient with the microdeletion phenotype and the recognition of its impact on the TBX1 protein accelerated the understanding of the molecular basis of the 22q11.2 microdeletion syndrome [ 11 ].
It is possible to diagnose the 22q11.2 microdeletion using diagnostic methods other than FISH, e.g. high-resolution comparative genomic hybridization (HR-CGH), multiplex ligation-dependent probe amplification (MLPA), short tandem repeat (STR) analysis as well as quantitative polymerase chain reaction (qPCR). These methods enable researchers to estimate the size of microdeletions, determine chromosomal breakpoints and perform segregation analysis [ 40 ].
Regardless of the methods applied to diagnose 22q11.2DS, early diagnosis is crucial. This is why the possibility of prenatal identification of congenital heart defects is beginning to play an important role. The result of echocardiographic examinations confirming a heart defect in the foetus can frequently give the baby a chance to survive, as adequate care can be provided. In addition, the result can qualify newborns for a risk group and indicate the need to perform genetic analysis. This assumption was confirmed by investigations carried out by Moore et al., who diagnosed foetuses prenatally by means of ultrasonography. They demonstrated that 41% of foetuses with a congenital heart defect had an abnormal karyotype and 3% had the 22q11.2 microdeletion [ 41 ].
Long-term studies carried out in several research centres indicate that children with congenital heart defects are most likely to undergo examinations for microdeletions on chromosome 22 while they are patients on paediatric cardiology wards, even if they are also under the care of other specialists [ 28 ]. Our studies confirmed these data, as 70% of the patients whom we diagnosed with the 22q11.2 microdeletion were referred for the examinations by physicians from the Department of Paediatric Cardiology and Nephrology of the Poznan University of Medical Sciences.
There is a wide spectrum of phenotypic characteristics that occur in patients with 22q11.2DS. This syndrome is characterized by highly variable expression of the phenotype, ranging from discreet to quite strong. ToF was the most frequent conotruncal heart defect in the group of children with the 22q11.2 microdeletion, while ASDII+VSD and VSD were the most common in the non-conotruncal group. Our studies indicate that patients with moderate heart defects (atrial or ventricular septal defects), in whom dysmorphism is frequently completely absent, may nevertheless exhibit typical but rare features such as thymic hypoplasia. In one familial case of 22q11.2DS, no cardiovascular defect was identified in the parent who was the microdeletion carrier, despite the fact that tetralogy of Fallot was diagnosed in the child carrying the same microdeletion.
Many patients with 22q11.2DS additionally suffer from accompanying defects, primarily of the genitourinary, skeletal, gastrointestinal and nervous systems and the organ of sight. Therefore, at various stages of life they will require multidisciplinary and highly specialized care from cardiologists, cardiac surgeons, immunologists, orthopaedists, psychologists and psychiatrists. Such long-term and specialized care of patients with 22q11.2DS is provided, for example, in Gothenburg, Tokyo and Philadelphia [ 7 ]. It should also be emphasised that patients with this syndrome as well as their families should be under genetic supervision. Our observations suggest that the criteria for 22q11.2 microdeletion screening should be expanded and applied to patients with conotruncal and non-conotruncal congenital heart defects and at least one typical feature of this syndrome (facial dysmorphy, thymic hypoplasia, cleft palate or hypocalcaemia), as well as to the deleted children's parents.
The 22q11.2 microdeletion syndrome (22q11.2 deletion syndrome; 22q11.2DS) refers to a set of congenital abnormalities comprising primarily heart defects, facial dysmorphy, thymic hypoplasia, cleft palate and hypocalcaemia. A microdeletion within chromosomal region 22q11.2 constitutes the molecular basis of this syndrome. The 22q11.2 microdeletion syndrome occurs in 1/4000 births. The aim of this study was to determine the frequency of the 22q11.2 microdeletion in 87 children suffering from a congenital heart defect (conotruncal or non-conotruncal) coexisting with at least one additional 22q11.2DS feature, and to carry out 22q11.2 microdeletion testing of the deleted children's parents. We also attempted to identify the most frequent heart defects in both groups and the phenotypic traits of patients with the microdeletion in order to determine selection criteria for at-risk patients.
Methods
The analysis of microdeletions was conducted using fluorescence in situ hybridization (FISH) on metaphase chromosomes and interphase nuclei isolated from venous peripheral blood cultures. A molecular probe (Tuple) specific to the HIRA (TUPLE1, DGCR1 ) region at 22q11 was used for the hybridisation.
Results
Microdeletions of the 22q11.2 region were detected in 13 children with a congenital heart defect (14.94% of the examined group). The 22q11.2 microdeletion occurred in 20% and 11.54% of the conotruncal and non-conotruncal groups respectively. Tetralogy of Fallot was the most frequent heart defect in the first group of children with the 22q11.2 microdeletion, while ventricular septal defect and atrial septal defect/ventricular septal defect were most frequent in the second group. The microdeletion was also detected in one of the parents of a deleted child (6.25%), who had no congenital heart defect but showed slight dysmorphism. In the remaining children, the 22q11.2 microdeletion originated de novo .
Conclusions
Patients with 22q11.2DS exhibit a wide spectrum of phenotypic characteristics, ranging from discreet to quite strong. The deletion was inherited by one child. Our study suggests that screening for the 22q11.2 microdeletion should be performed in children with conotruncal and non-conotruncal heart defects and with at least one typical feature of 22q11.2DS, as well as in the deleted children's parents.
The authors declare that they have no competing interests.
Authors' contributions
Being the principal investigator of the project, AW planned and organized the study, conducted FISH analysis and prepared the manuscript. DW-B, MW, RG, AM-K, TM, MB-S and MK enlisted the patients and provided clinical data. JS and JZ performed FISH. AL-B, AS, WB and RS verified and interpreted the clinical and genetic results.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2431/10/88/prepub | Acknowledgements
This research was funded through statutory research projects of the Department of Paediatric Cardiology and Nephrology and the Department of Medical Genetics, Poznan University of Medical Sciences, Poland. We thank the patients and their family members for their cooperation during the study. | CC BY | no | 2022-01-12 15:21:45 | BMC Pediatr. 2010 Dec 6; 10:88 | oa_package/0c/05/PMC3016365.tar.gz |
Infections are the single largest cause of neonatal deaths globally [ 1 ]. Based on the time of onset, neonatal sepsis is classified into two major categories: early-onset sepsis, which usually presents with respiratory distress within 72 hours of age, and late-onset sepsis, which usually presents with septicemia after 72 hours of age. Sepsis in neonates is a significant contributor to morbidity and death, with mortality rates varying from 3% to as high as 50% in some series, especially with gram-negative pathogens [ 2 - 6 ]. The incidence of early-onset sepsis in term neonates in The Netherlands in 2003-2006 was approximately 0.6% [ 7 ].
Early diagnosis and treatment of the newborn infant with suspected sepsis are essential to prevent severe and life threatening complications. In contrast to the clear and valuable therapeutic options, the diagnosis of suspected early-onset neonatal sepsis is challenging. The early signs of sepsis in the newborn are non-specific. Therefore, many newborns with nonspecific symptoms undergo diagnostic studies and the initiation of treatment before the presence of sepsis has been proven. Blood culture is currently the gold standard for the diagnosis of sepsis. However, in addition to the fact that culture reports are available only after 48-72 hours, blood cultures are frequently false negative due to the small amount of blood that can be drawn from neonates [ 8 ]. The reliability of most laboratory markers, including white blood cell count (WBC), C-reactive protein (CRP), Procalcitonin (PCT) and IL-6 for the diagnosis of neonatal infection has been assessed in highly diverse groups of ill neonates with a mixture of diagnoses and conditions and has yielded variable results [ 9 ]. If culture results come back negative after 48-72 hours, the clinician has to decide whether to provide continued treatment.
In the era of multidrug resistance, it is mandatory to avoid the unnecessary use of antibiotics to treat non-infected infants. In addition, the intravenous administration of antibiotics necessitates admission of the neonate to the hospital and thereby separation of mother and child in this delicate period of life. Thus rapid diagnostic tests that differentiate infected from non-infected infants, particularly in the early newborn period, have the potential to make a significant impact on neonatal care. In an effort to reduce the use of antimicrobial agents in neonates, clinical studies have been undertaken using the biomarker CRP to safely influence the length of antimicrobial therapy [ 10 , 11 ]. Thus far, however, no evidence has been presented that using CRP has an impact on the length of antimicrobial therapy.
Another biomarker that has been discovered more recently, PCT, has proven to be a good marker of severe, invasive bacterial infections in children. All studies on severe, invasive bacterial infections in children report higher sensitivities and specificities for PCT than for CRP [ 12 - 16 ]. PCT is a 116-amino-acid peptide and one of the precursors of calcitonin. The physiological function of calcitonin remains unknown. No disorders attributable to either an excess or a deficiency of calcitonin have been identified. Most microbial infections induce a ubiquitous increase in CALC1 gene expression and a subsequent release of calcitonin precursors from all tissues and cell types throughout the body [ 17 ]. In bacterial infections, PCT increases from concentrations in the picogram range (below the detection level of current PCT assays) to plasma concentrations ranging from 1 to 1000 ng/ml. This increase often correlates with the severity of the disease and with mortality [ 18 - 21 ]. Increases in PCT occur more rapidly than increases in CRP. PCT can be detected in the plasma 2 hours after the injection of endotoxins. Within 6-8 hours, PCT concentrations rise, and a plateau is reached after approximately 12 hours [ 22 ]. CRP can be detected in the plasma after 12 h and reaches a plateau after 20-72 hours. PCT and CRP decrease to their normal values after 2-3 days and 3-7 days, respectively [ 23 - 25 ].
The use of PCT as a marker of neonatal bacterial infection is complicated by several factors. First, infants with respiratory distress syndrome, hemodynamic failure, perinatal asphyxia, intracranial hemorrhage, pneumothorax, or after resuscitation have raised serum PCT concentrations that do not differ from those of septic neonates up to 48 h after onset of clinical signs of distress or infection [ 26 - 28 ]. Second, a physiological increase of PCT has been reported up to 48 h post partum [ 29 ]. Third, prepartum and intrapartum administration of antibiotics may affect PCT concentrations in the umbilical cord [ 30 ], and postnatal administration of antibiotics will decrease PCT concentrations more rapidly than CRP concentrations [ 31 , 32 ]. When these pitfalls are taken into account, PCT performs better than CRP in diagnosing neonatal bacterial infection.
Chiesa et al. developed a nomogram for PCT [ 33 ] and Assumma and colleagues performed a longitudinal study of PCT values in healthy neonates [ 34 ]. Chiesa reported two major differences between healthy and septic neonates, which formed the basis of his nomogram. First, the PCT elevation was much higher in septic neonates than in healthy newborns; second, the absence of a decrease in PCT values after the initial post-partum cytokine release is indicative of a bacterial infection. Because all PCT values are increased during the first two days of life, a reference range covering this time period at intervals of several hours is a tool to identify septic neonates. The adult reference ranges apply from three days after birth. Using PCT in this manner has been extensively proven to be very reliable for the diagnosis of neonatal sepsis [ 35 - 39 ].
In recent years a novel indication for the use of PCT has emerged, related to its high negative predictive value. It has been reported in many interventional trials [ 40 - 44 ] that a low PCT indicates the absence of a need for antimicrobial therapy. In several countries the recent adult intensive care guidelines have been altered to the extent that PCT has displaced CRP in the recommendations [ 45 , 46 ]. Applying this principle to neonatology, we performed a single-centre intervention trial in Lucerne, Switzerland, which showed that serial PCT determinations allowed the duration of antibiotic therapy to be shortened in term and near-term infants with suspected early-onset sepsis [ 47 ]. The present study is designed to test the reliability of a PCT-based strategy in a larger cohort of neonates.
The purpose of this trial is to evaluate whether PCT measurements can reduce antibiotic usage in suspected neonatal early-onset sepsis by shortening the duration of antibiotic treatment with unchanged outcome. The study is designed as a randomized open controlled international multicenter intervention trial on the efficacy (superiority aspect) and safety (non-inferiority aspect) of PCT-guided treatment.
Outcome measures
Co-primary outcome measures are the duration of antibiotic therapy and the proportion of infants with a recurrence of infection requiring additional courses of antibiotic therapy (within 72 hours after ending antibiotic therapy) and/or death in the first month of life (safety of the study intervention). A secondary outcome measure is the length of hospital stay.
Inclusion/Exclusion criteria
Inclusion criteria: term and near-term infants with a gestational age ≥ 34 0/7 weeks, age 0-3 days of life, suspected sepsis in the first 3 days of life requiring empiric antibiotic therapy, and parental consent. Exclusion criteria are severe congenital malformations and surgical procedures before or during the study. In case of surgical procedures during the trial, patients will be excluded from the study and its analyses. These children will be treated according to current protocols of care, including protocols for antibiotic therapy. Patients are not allowed to enter the trial a second time.
Patients will be randomized between t = 0 and t = 12 h after the initiation of antibiotic treatment. Some of the neonates will be eligible for inclusion immediately after birth. Since it is not always possible to obtain informed consent from parents immediately after a child is born, this 12-hour timeframe for randomization is necessary. Randomization will be to either standard treatment based on conventional laboratory parameters (standard group) or to PCT-guided treatment (PCT group), blocked by centre. Randomization is done by drawing group assignment cards in opaque sealed envelopes (Switzerland) and by computer-based digital randomization (The Netherlands).
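The centre-blocked allocation described above can be illustrated with a short permuted-block sketch. This is not the trial's actual randomization software; the arm labels, block size and seed are illustrative assumptions only.

```python
import random

def block_randomization(n_patients, block_size=4, arms=("standard", "PCT"), seed=2010):
    """Permuted-block randomization list for a single centre.

    Each block contains every arm equally often, so the group sizes
    stay balanced throughout recruitment within the centre.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_patients]

# One independent list (or envelope stack) per participating centre
centre_list = block_randomization(20)
```

Keeping a separate allocation list per centre is what "blocked by centre" amounts to: balance between the standard and PCT groups is preserved within every centre, not only overall.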
Statistical analyses
This trial is designed to exclude a difference in the rates of re-infection or death greater than 2% (non-inferiority aspect of the trial). Assuming a 2% reinfection/death rate in each group, 770 patients are required per arm for a power of 80% at a one-sided alpha of 0.025. Based on the data of our study in Lucerne [ 47 ], with this number of patients a difference between mean antibiotic therapy durations of 10 hours can be detected at a two-sided alpha of 0.05 with a power of 95% (superiority aspect of the trial). To allow for some unevaluable cases, 800 patients per group will be included.
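The stated 770 patients per arm can be reproduced with the standard normal-approximation sample-size formula for a non-inferiority comparison of two proportions. The sketch below is an illustration, not the protocol's own software:

```python
import math
from statistics import NormalDist

def n_per_arm(p_exp, p_ctl, margin, alpha_one_sided, power):
    """Patients per arm needed to exclude a risk difference larger than
    `margin` (normal approximation for two independent proportions)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha_one_sided)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_exp * (1 - p_exp) + p_ctl * (1 - p_ctl)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / margin ** 2)

# 2% re-infection/death rate in each arm, 2% margin,
# one-sided alpha 0.025, 80% power
n = n_per_arm(0.02, 0.02, 0.02, 0.025, 0.80)  # -> 770
```

Rounding 770 up to 800 per group then provides the stated allowance for unevaluable cases.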
Primary analyses: The two-sided 95% confidence interval for the difference (experimental - control) in re-infection/death rates will be calculated for all randomized patients, and if this interval excludes +2%, non-inferiority is considered to be shown [ 48 ]. Comparison of the durations of antibiotic therapy will be done using the Mann-Whitney test with stratification by centre. Any patient who dies will be considered as having the worst outcome in this evaluation, and his or her duration will be set at the highest duration found. All analyses will be done according to the intention-to-treat principle and with stratification by centre. A per-protocol analysis, excluding patients with major protocol violations, will also be done.
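The non-inferiority decision rule can be illustrated numerically. The protocol does not specify the interval method, so a simple Wald interval and hypothetical event counts are assumed here; non-inferiority is concluded when the upper 95% confidence limit for the difference in re-infection/death rates stays below +2%:

```python
import math

def risk_difference_ci(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Two-sided 95% Wald confidence interval for p_exp - p_ctl."""
    p1, p2 = events_exp / n_exp, events_ctl / n_ctl
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_exp + p2 * (1 - p2) / n_ctl)
    return diff - z * se, diff + z * se

# Hypothetical result: 16/800 (2%) re-infections/deaths in each arm
lower, upper = risk_difference_ci(16, 800, 16, 800)
non_inferior = upper < 0.02  # upper limit ~1.4%, below the +2% margin
```

With the planned 800 patients per arm and the assumed 2% event rate, the interval is narrow enough to exclude the +2% margin when the rates are equal, which is exactly what the sample-size calculation was designed to achieve.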
Secondary analysis: Exploratory analysis of the primary endpoints in relation to the clinical outcomes assessed at T = 24-72 h will be done using logistic regression or ANOVA. In the ANOVA the durations of antibiotic treatment will be log-transformed. All tests are two-sided and p = 0.05 is the limit of significance. These analyses will be informal only, because it cannot be excluded that the ratings of clinical outcomes at T = 24-72 h are influenced by the PCT results in the PCT group. In a secondary analysis, reinfection/death rates will be compared between treatment arms using random-effects logistic regression allowing for centre as a random factor.
Laboratory examinations
Complete blood counts and CRP concentrations are obtained in all patients. Serial PCT measurements are performed in all patients of the PCT group and, if blinding of the standard group's PCT results is feasible, also in the standard group. Laboratory examinations are stopped at the end of antibiotic therapy. Blood sampling will be limited to the normal frequencies already used in daily practice for neonatal care. This means that for this trial blood will be collected at time point 0 (moment of inclusion and start of antibiotic therapy), 24 h (+/- 6 h) after inclusion, between 36 and 72 h after inclusion, and then every 24-48 h until the end of antibiotic therapy. One additional sample will be collected in the PCT group 12 h (+/- 6 h) after inclusion. With the exception of this one additional sample in the PCT group, no additional punctures will be done for research purposes.
Procalcitonin will be measured on the automated Kryptor platform, supplied by BRAHMS AG of Hennigsdorf, Germany, and with the Roche Elecsys BRAHMS procalcitonin assay. The BRAHMS Kryptor sensitive procalcitonin assay will be applied on this platform using Time Resolved Amplified Cryptate Emission (TRACE) technology. This assay is based on a polyclonal antibody against calcitonin and a monoclonal antibody against katacalcin, which binds to the calcitonin and katacalcin sequence of the calcitonin prohormone. The test is considered a homogeneous immunoassay (sandwich principle) and is validated on serum and plasma (EDTA and heparin) matrices. The direct measuring range of the assay is 0.02-50 ng/ml, with automated dilution extending the upper range to 1000 ng/ml. The functional assay sensitivity (FAS) is 0.06 ng/ml. The procedure time of the assay is very short at 19 minutes. The needed sample volume is limited to 50 microlitres. The Roche Elecsys BRAHMS procalcitonin assay likewise uses a sandwich immunoassay based on a polyclonal antibody against calcitonin and a monoclonal antibody against katacalcin, which binds to the calcitonin and katacalcin sequence of the calcitonin prohormone. The direct measuring range of this assay is 0.02-100 ng/ml. The functional assay sensitivity (FAS) is 0.06 ng/ml. The procedure time of this assay is also very short at 18 minutes. The needed sample volume is limited to 30 microlitres. The assay is validated on serum and plasma (EDTA and heparin) matrices. CRP and all other requested laboratory assays will be measured on the routine analyzers of the various sites and will be made available to the physician through the routine laboratory systems in place.
Procalcitonin-guided decision making
The normal age-adapted PCT ranges according to our previous study are shown in Figure 1 . To provide a margin of safety, the maximal normal value of PCT is defined as 10 ng/ml (18-36 hours of life), which, according to the literature, is about 50% of the highest PCT concentrations measured in neonates with respiratory distress not related to infection [ 33 ].
At time-point T = 24-72 h (one time-point between 24 and 72 h after the start of antibiotic therapy: the time-point of a positive culture result, the time-point of an early stop of antibiotic therapy (less than 72 h of therapy), or 72 h after the start of therapy), neonates will be assessed and divided into risk categories (Tables 1 and 2 ): infection proven (category 1), infection probable (category 2), infection possible (category 3), and infection unlikely (category 4). The treating physician will decide to which risk category a neonate belongs. The expected distribution, based on the results of our first study [ 47 ], is: category 1, 50%; category 2, 30-35%; categories 3 and 4, 15-20%.
The duration of antibiotic therapy in the standard group is based on the attending physician's assessment of the risk category during hospitalisation: category 4 patients receive antibiotics for 2-3 days and category 3 patients for 5-7 days. In the PCT group, if infection is considered unlikely or possible, antibiotic therapy is discontinued when two consecutive PCT values are within the normal range (Figure 1 , Table 3 ). Antibiotic therapy can be continued despite fulfilled PCT criteria at the discretion of the attending physician. These deviations from the stopping rules will be reported for further analysis. If infection is proven (category 1) or considered probable (category 2), antibiotics are given for 7-21 days in both groups.
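The stopping logic of the PCT group can be summarized in a short sketch. This is only a schematic restatement of the rule above: the age-adapted upper limits come from the nomogram in Figure 1 and are passed in as a function here, and the flat 10 ng/ml limit used in the example is the protocol's stated safety maximum for 18-36 hours of life, not the full nomogram.

```python
def may_stop_antibiotics(risk_category, pct_values, upper_limit):
    """Return True when the PCT criteria allow stopping antibiotic therapy.

    risk_category: 1 = proven, 2 = probable, 3 = possible, 4 = unlikely
    pct_values:    list of (age_in_hours, pct_ng_per_ml), chronological
    upper_limit:   function mapping age in hours to the age-adapted
                   normal upper limit (the Figure 1 nomogram)
    """
    if risk_category in (1, 2):
        return False  # proven/probable infection: 7-21 days regardless of PCT
    # Categories 3 and 4: stop once two consecutive values are within range
    for (age1, v1), (age2, v2) in zip(pct_values, pct_values[1:]):
        if v1 <= upper_limit(age1) and v2 <= upper_limit(age2):
            return True
    return False

# Illustrative only: a flat 10 ng/ml limit instead of the real nomogram
flat_limit = lambda age_h: 10.0
series = [(12, 25.0), (36, 8.0), (60, 4.0)]   # falling PCT course
may_stop_antibiotics(3, series, flat_limit)   # -> True
may_stop_antibiotics(1, series, flat_limit)   # -> False
```

Note that the rule is advisory in the trial: the attending physician may continue therapy despite fulfilled PCT criteria, and such deviations are recorded for analysis rather than prevented.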
Follow up
Patients with a recurrent infection will not enter the study for a second time. Recurrent infections will be analysed for focus and relapse of previous infection and will be treated at the physician's discretion. Recurrent infections in this trial will be defined as a new infection occurring within 72 hours after stopping antibiotic therapy for the initial infection. Any recurrent infection will be reported immediately to the steering committee.
Follow-up of the patients will be performed as in standard practice. Parents of discharged patients have 24 hours/day, 7 days/week access to the hospital and to contact a paediatrician on call. Phone numbers to contact the department will be provided upon discharge of the child. A follow-up interview after 1 month will be done with questions about undercurrent illness, physician visits, medications and hospitalisations.
Adverse events
All serious adverse events will be reported to the principal investigator within 24 hours after their occurrence. Also the data and safety monitoring board and the ethical committee that approved the protocol will be informed by the principal investigator, according to the requirement of the ethical committee. All adverse events will be followed until they have abated, or until a stable situation has been reached. The members of the data and safety monitoring board will be blinded to the assigned intervention arm of the patient. At any time the members of this board can ask to be unblinded by requesting the treatment code from the independent statistical centre.
Data collection and management
All data are collected in the neonatal intensive care department or pediatric ward. The medical history and used medication can be obtained from the patient's medical record. All collected data will be stored anonymously in the study database. Data management will be performed by the local investigator or the trial nurses of the participating center. The subjects will be identified by a trial identification number. Data base integrity and data safety as well as privacy are warranted by the participating research hospitals. Source data verification will be conducted by an independent Data Monitor.
An independent data- and safety monitoring committee is responsible for the ongoing conduct of the study, based on an unbiased and independent review of efficacy and safety data. Dr. R. Oostenbrink, paediatrician and clinical epidemiologist, Erasmus MC-Sophia, Rotterdam, and Prof. E. Lesaffre, statistician at Erasmus MC, will serve as members from The Netherlands; Prof. G. Schubiger, neonatologist and head of the local ethical committee in Lucerne, and Dr. C. Hagmann, consultant neonatologist at the University Hospital in Zurich, will serve as members from Switzerland. The committee assesses the progress of the trial and the safety data every 3 months and advises whether to continue, modify or stop the trial. The DMC charter describes the purpose, composition, responsibilities and structure of the committee. The board will operate under strict confidentiality.
Ethical approval and trial registration
For this trial, nationwide ethical approval was requested for The Netherlands and local ethical approval was requested for all participating centres from their local ethical committees. Further participating study centers outside Switzerland and The Netherlands have to request approval from their ethical committee and to present the approval to the steering committee of the study. This trial will be conducted in accordance with the ethical guidelines of the World Medical Association's Declaration of Helsinki, Ethical Principles for Medical Research Involving Human Subjects, as adopted by the 18th WMA General Assembly, Helsinki, Finland, June 1964, and amended by the 29th WMA General Assembly, Tokyo, Japan, October 1975; the 35th WMA General Assembly, Venice, Italy, October 1983; the 41st WMA General Assembly, Hong Kong, September 1989; the 48th WMA General Assembly, Somerset West, Republic of South Africa, October 1996; the 52nd WMA General Assembly, Edinburgh, Scotland, October 2000; the 53rd WMA General Assembly, Washington 2002 (Note of Clarification on Paragraph 29 added); the 55th WMA General Assembly, Tokyo 2004 (Note of Clarification on Paragraph 30 added); and the 59th WMA General Assembly, Seoul, October 2008. This trial will also comply with the regulations set forth in the Medical Research Involving Human Subjects Act (WMO) and other applicable guidelines, regulations and Acts.
This trial is registered in the U.S. National Institutes of Health's register, located at http://www.clinicaltrials.gov . under number NCT00854932. | Discussion
It is essential for observational studies to compare new markers with the gold standard. The gold standard for the diagnosis of neonatal early-onset sepsis, i.e. a positive blood culture, is often unreliable because of the frequent use of intrapartum antibiotics or the insufficient amounts of blood available for culture [ 8 ]. Therefore, the problem of all observational studies of neonatal early-onset sepsis is the definition of sepsis. An intervention study offers a valuable alternative because the calculation of the main results is independent of the definition of sepsis. A control group is necessary to demonstrate that the results from the intervention group differ from what is observed in the standard group. We chose the design of a non-inferiority trial to show that a reduced duration of antibiotic therapy (superiority aspect) does not change the outcome (recurrence of infection, mortality).
As this trial will focus on the effectiveness and safety of a Procalcitonin-guided antibiotic treatment, it will not focus on the clinical reasons why a clinician decides to start antibiotics. The start of antibiotic treatment is solely the decision of the attending neonatologist. The purpose of not interfering with this clinical decision is to evaluate current clinical practice as closely as possible without creating a bias towards biomarker-initiated (or withheld) therapy. Therefore, when designing this intervention study, the definition of sepsis is less crucial than in an observational study, because in an intervention study all newborns are suspected to have a bacterial infection and are therefore treated with antibiotics. The key point is the comparison of the outcomes after the intervention. The probability of infection must be assessed by the attending physician at t = 24-72 h during the hospitalisation, because antibiotic treatment is not discontinued in neonates with proven infection or a high risk of infection based on maternal risk factors, clinical signs and symptoms, and conventional laboratory parameters.
A benefit of the study is a possible reduction of unnecessary antibiotic use. On a population level, unnecessarily prolonged use of broad-spectrum antibiotics is a serious concern because it can promote the development of resistant bacteria, which may result in untreatable infections over time. Because the treatment consists of intravenously administered antibiotics, it requires admission of the neonate to hospital, with separation of mother and child during these important first days of life. Shortening this period when possible is clearly desirable. A shorter treatment will also result in fewer painful punctures, because neonates frequently need punctures for new intravenous lines.
The burden of the trial is minimal, because only one extra time point for blood drawing is added. No additional diagnostic procedures are needed for the other time points; the additional burden consists of a few extra drops of blood taken during routine blood sampling.
The estimated risk is low. There is a small risk of discontinuing antibiotic treatment too early, resulting in the development of a neonatal infection with its associated morbidity and mortality. Based on follow-up data from our first study, no mortality was observed in 121 neonates [ 47 ]. In only two children was antibiotic treatment restarted, with good outcomes and without evidence of secondary infections due to early discontinuation of the primary antibiotic therapy.

Background
Early diagnosis and treatment of the newborn infant with suspected sepsis are essential to prevent severe and life-threatening complications. Diagnosis of neonatal sepsis is difficult because of the variable and nonspecific clinical presentation. Therefore, many newborns with nonspecific symptoms are started on antibiotic treatment before the presence of sepsis has been proven. With our recently published single-centre intervention study we were able to show that Procalcitonin determinations allowed the duration of antibiotic therapy in newborns with suspected early-onset sepsis to be shortened.
Methods/Design
The study is designed as a randomized controlled international multicenter intervention trial on the efficacy and safety of Procalcitonin-guided treatment. Term and near-term infants (gestational age ≥ 34 0/7 weeks) with suspected sepsis in the first 3 days of life requiring empiric antibiotic therapy will be included. The duration of antibiotic therapy in the standard group is based on the attending physician's assessment of the likelihood of infection (infection unlikely, possible, probable or proven). In the Procalcitonin group, if infection is considered to be unlikely or possible, antibiotic therapy is discontinued when two consecutive Procalcitonin values are within the normal range. Co-primary outcome measures are the duration of antibiotic therapy (superiority aspect of the trial) and the proportion of infants with a recurrence of infection requiring additional courses of antibiotic therapy and/or death in the first month of life (safety of the study intervention, non-inferiority aspect of the trial). The number of infants to be included is 800 per arm. With these numbers, the power of the study to demonstrate superiority for duration of antibiotic therapy as well as non-inferiority regarding safety, i.e. excluding a difference larger than 2% to the disadvantage of the experimental arm, will both be greater than 80%.
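The stated power claim can be sanity-checked with the usual normal-approximation calculation for a non-inferiority comparison of two proportions. Only the 2% margin and the 800 infants per arm come from the text above; the event rates and alpha level assumed by the protocol are not given in this excerpt, so the inputs below (2% event rate in both arms, one-sided α = 0.05) are purely illustrative assumptions. A Python sketch:

```python
from math import sqrt, erf

def normal_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def noninferiority_power(p_std, p_exp, margin, n_per_arm, z_alpha=1.6449):
    """Approximate power of a one-sided non-inferiority test comparing two
    proportions (normal approximation; z_alpha defaults to one-sided 0.05)."""
    se = sqrt((p_std * (1 - p_std) + p_exp * (1 - p_exp)) / n_per_arm)
    z = (margin - (p_exp - p_std)) / se - z_alpha
    return normal_cdf(z)

# Illustrative assumption: 2% event rate in both arms, 2% margin, 800 per arm.
power = noninferiority_power(p_std=0.02, p_exp=0.02, margin=0.02, n_per_arm=800)
print(round(power, 3))
```

With these illustrative inputs the power comes out near 0.89, consistent with the stated "greater than 80%".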
Discussion
A benefit of the study is a possible reduction of unnecessary antibiotic use. The results of our first study suggest that there is a low risk of discontinuing antibiotic treatment too early, resulting in the development of a neonatal infection with its associated morbidity and mortality.
Trial registration
This trial is registered in the U.S. National Institutes of Health's register, located at http://www.clinicaltrials.gov (NCT00854932).

Competing interests
The authors declare that they have no competing interests. BRAHMS diagnostica, manufacturer of the procalcitonin assay, provided the testing kits for PCT determinations but is not involved in any other aspect of the study or manuscript preparation. No financial compensation is offered to participating patients.
Authors' contributions
MS is the principal investigator of the study and is responsible for the design of the study and for coordination, supervision and data management in Switzerland and Prague. WH participated in the design of the study, performs the statistical analysis and is responsible for future data analysis. AR is co-principal investigator and is responsible for the design of the study and for coordination, supervision and data management in The Netherlands. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2431/10/89/prepub

Acknowledgements
The following centres are currently participating in the study, in alphabetical order (local investigator): Aarau, Kantonsspital, CH (I. Hoigne, MD); Almere, Flevo Ziekenhuis, NL (F. Schuerman, MD); Amsterdam, VU Medisch Centrum, NL (M. von Weissenbruch, MD); Bern, Universitätskinderspital, CH (L. Schlapbach, MD); Chur, Kantonspital, CH (W. Baer, MD); Delft, Reinier de Graaf Gasthuis, NL (L. van der Meer, MD); DenBosch, Jeroen Bosch Ziekenhuis, NL (E. de Vries, MD); Heerlen, Atrium Medisch Centrum, NL (R. Moonen, MD); Luzern, Kinderspital, CH (M. Fontana, MD); Prague, Thomayer University Hospital, CZ (J. Janota, MD); Rotterdam, Sophia Kinderziekenhuis, Erasmus MC, NL (R. Cornelisse, MD); Winterthur, Kantonsspital, CH (U. Zimmermann, MD); Zürich, Triemli Spital, CH (M. Tomaske, MD).

License: CC BY. Citation: BMC Pediatr. 2010 Dec 8; 10:89.
PMC3016367 (PMID: 21182791)

Background
Globally, the prevalence of Type 2 Diabetes (T2D) in the pediatric population is increasing, most notably among 15 - 18 year olds [ 1 - 4 ]. At diagnosis, most boys and girls with T2D are overweight or obese, have a positive family history of T2D, are peri- or post-pubertal, and present with metabolic risk factors (e.g., dyslipidemia) [ 5 ]. With the early onset of this chronic condition and the associated co-morbidities, a life-long reduction in quality of life and premature mortality due to micro- and macro-vascular complications can be expected [ 6 ]. To address this health challenge resulting from pediatric T2D, effective and efficient management strategies are necessary.
Strong evidence supports the role of lifestyle modification to prevent (or at least delay) T2D in adults [ 7 - 11 ]. On this basis, many current clinical practice guidelines for adults with T2D recommend lifestyle modifications that include improving dietary quality as well as increasing the quantity and quality of physical activity to promote weight management and improve glycemic control [ 12 - 14 ]. Current treatment guidelines for children and youth with T2D do not differ from adult recommendations. Among children and youth who are asymptomatic (i.e., free of polyuria, polydipsia, or ketoacidosis) at diagnosis, intensive lifestyle counseling is recommended to achieve good glycemic control (i.e., HbA1c < 7.0% or fasting plasma glucose < 6.6 mmol/L) within 3 to 6 months [ 13 - 17 ]. If this clinical target is not achieved, initiation of metformin, an oral hypoglycemic agent, is recommended; in some cases, insulin therapy may also be necessary [ 16 , 18 ].
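For illustration only, the glycemic target quoted above (good control defined as HbA1c < 7.0% or fasting plasma glucose < 6.6 mmol/L, with metformin considered if the target is not achieved) can be written as a simple check. This toy Python function restates the quoted cut-offs and is in no way clinical software:

```python
def meets_glycemic_target(hba1c_percent=None, fasting_glucose_mmol_l=None):
    """Return True if any supplied measure meets the quoted targets:
    HbA1c < 7.0% or fasting plasma glucose < 6.6 mmol/L."""
    checks = []
    if hba1c_percent is not None:
        checks.append(hba1c_percent < 7.0)
    if fasting_glucose_mmol_l is not None:
        checks.append(fasting_glucose_mmol_l < 6.6)
    return any(checks)

print(meets_glycemic_target(hba1c_percent=6.8))  # → True
print(meets_glycemic_target(hba1c_percent=7.5, fasting_glucose_mmol_l=7.0))  # → False
```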
Although few data are currently available on treatment patterns for children with T2D, it appears most boys and girls with T2D are treated pharmacologically [ 12 ]. It is not known whether this reflects poor adherence to lifestyle modifications among children and youth, or whether clinicians perceive and experience lifestyle recommendations and interventions as less effective for managing T2D in this population. To address this knowledge gap, we conducted a systematic review to evaluate the impact of lifestyle behavior modification on glycemic control among children and youth with clinically defined T2D.

Methods
Literature Search Strategy
A research librarian, with input from the research team, developed and implemented a comprehensive search strategy in selected high-yield electronic databases (EMBASE, CINAHL®, Cochrane Library, Medline®, PASCAL, PsycINFO®, and Sociological Abstracts) from their date of inception until October 2007. An updated search was completed in May 2009 (see Additional file 1 for search terms). Relevant articles were also sought by searching the reference lists of articles retrieved for detailed review as well as related review articles published from January 2002 onward. Personal contact was established with content experts and authors of selected review articles to ensure relevant publications were not missed. No language restrictions were applied in this search strategy.
Study Inclusion and Selection Criteria
We included all studies designed to evaluate the impact of lifestyle modification (diet and/or physical activity) on glycemic control (HbA1c) in children or youth with T2D. Studies of lifestyle modification and glycemic control among children and youth with impaired glucose tolerance or impaired fasting glucose were not included. Similarly, the use of anti-diabetic drug therapies was not formally assessed; however, if a lifestyle modification group was included in any study design, its data were considered for inclusion. Studies were excluded if they did not include a comparison group or were not relevant to children and youth.
Two reviewers (STJ and MC) independently reviewed all abstracts and references. Studies were included if they met the following criteria: original research, participants ≤18 years of age with T2D, evaluated the effect of lifestyle modification (diet and/or physical activity) on glycemic control (HbA1c). Inter-observer agreement for study inclusion was high (κ = 0.92). Once the initial review was complete, a third investigator (GDCB) resolved any discrepancies by consensus.
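Agreement between the two reviewers was quantified with Cohen's kappa (κ = 0.92), which corrects raw percentage agreement for the agreement expected by chance. The sketch below shows the calculation on a 2 × 2 table of include/exclude screening decisions; the counts are invented for illustration (they are not the review's actual data) and happen to reproduce a κ of 0.92:

```python
def cohens_kappa(both_yes, r1_only, r2_only, both_no):
    """Cohen's kappa for two raters making binary include/exclude decisions."""
    n = both_yes + r1_only + r2_only + both_no
    p_observed = (both_yes + both_no) / n
    # Chance agreement from each rater's marginal "yes"/"no" rates.
    r1_yes = (both_yes + r1_only) / n
    r2_yes = (both_yes + r2_only) / n
    p_chance = r1_yes * r2_yes + (1 - r1_yes) * (1 - r2_yes)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical screening decisions for 100 abstracts:
print(round(cohens_kappa(both_yes=55, r1_only=2, r2_only=2, both_no=41), 2))  # → 0.92
```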
Quality Assessment
Assessment of the methodologic quality of included studies was completed using criteria from Downs and Black [ 19 ], which assessed study characteristics including internal and external validity, power, and reporting. A maximal quality index (QI) score was given to selected studies. A QI score >20 rated good, 11 to 20 rated moderate, and <11 rated poor [ 19 ]. Two reviewers (STJ and MC) independently completed quality assessments of included studies. Any discrepancies were resolved through third party discussion (GDCB).

Results
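The rating bands used above can be written as a one-line lookup; the sketch below simply restates the quoted thresholds (QI > 20 good, 11 to 20 moderate, < 11 poor) and adds nothing beyond them:

```python
def quality_rating(qi_score):
    """Map a Downs and Black quality index (QI) score to the rating
    bands used in this review: >20 good, 11 to 20 moderate, <11 poor."""
    if qi_score > 20:
        return "good"
    if qi_score >= 11:
        return "moderate"
    return "poor"

print(quality_rating(10))  # → poor
```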
Figure 1 shows the selection process for this systematic review. From the 4,572 publications identified, 61 manuscripts were selected for complete review after screening titles and abstracts. Of these manuscripts, 19 review articles were identified and removed. Additionally, seven cohort studies [ 20 - 26 ] and one case study [ 27 ] were identified and reviewed because they were closely related to our research question but did not fully match our inclusion criteria (Table 1). The cohort studies mainly described treatment trajectories, behavioral characteristics, disease management strategies, or quality of life among children and youth with T2D; however, they lacked a defined lifestyle intervention or comparator group. Although the identified case study included a specific lifestyle intervention for one child with T2D, it was excluded based on the nature of the study design. One other study [ 28 ] described the evaluation of a community-based program that focused on food preparation skills but did not include behavioral or clinical outcomes (Table 1). No additional studies were identified through our examination of study reference lists. Contact with experts in the field of pediatric T2D yielded two additional manuscripts, but upon review, these studies did not satisfy our inclusion criteria. In the end, one study (a retrospective, case-control design) met our inclusion criteria [ 29 ].
In the study by Willi et al. [ 29 ], the use of a very low carbohydrate diet in the treatment of T2D was found to be an effective short-term therapy. However, this study was of poor methodological quality and its results should be interpreted with caution. The results are at high risk of bias because the study used a convenience sample of hospital-based patients and was not a prospective design with random assignment to treatment and control. This study received a QI score of <11 (rated poor); additional details of this report are described in Table 2.

Discussion
Current treatment guidelines state that children and youth with T2D should receive intensive lifestyle counseling to help them achieve target glycemia within 3 - 6 months following diagnosis [ 12 - 16 ]. Despite this recommendation, our review of the literature revealed only one study that targeted lifestyle modification (diet) as an approach for improving glycemic control in this population. To date, most published studies have either included adults exclusively or have been based on retrospective or cross-sectional cohort studies of boys and girls with insulin resistance, but without T2D. Our review did not reveal any high quality studies that included physical activity interventions to improve short- or long-term glycemic control in children and youth with T2D, nor did it uncover any studies that examined the influence of combining diet and physical activity in the treatment of pediatric T2D. While we identified many review articles concerning the management of T2D in this population, none offered new data regarding the efficacy or effectiveness of lifestyle management for glycemic control.
The lack of published studies of lifestyle management for pediatric T2D may reflect the relatively low prevalence of the disease. Although some have described pediatric diabetes as reaching 'epidemic levels' [ 30 - 32 ], others argue that prevalence estimates are of modest concern, even among those populations believed to be at greater risk [ 33 ]. Nevertheless, current prevalence estimates of overweight and obesity are cause for concern with respect to the potential for developing T2D and suggest a need for evidence-based recommendations for those who have already been diagnosed and those at high risk.
The Diabetes Prevention Program [ 7 ] and other lifestyle interventions for adults at high risk for T2D [ 8 , 9 ] have provided a model for clinicians and researchers upon which to base the design, delivery, and evaluation of clinical trials for T2D management. Indeed, these studies have informed the design of a recently-launched T2D management trial, which includes a lifestyle component [ 34 ]. However, it is important to bear in mind that pediatric and adult populations rarely receive similar therapies in research or clinical settings due to distinct metabolic, physical, developmental, and cognitive differences. Moreover, within the context of pediatric behavior modification, consideration of the complex interrelationships between environmental factors (i.e., family, peers, school, media, built environment) must be taken into account. Therefore, caution must be exercised when generalizing the clinical findings from currently available adult data to the pediatric population.
Of additional importance is the selection of appropriate study outcomes among the pediatric population [ 35 ]. Although good glycemic control and healthy body weights are of clinical importance, the antecedents of these clinical outcomes may be more salient [ 36 ]. For example, parental interactions with their sons and daughters when making family lifestyle changes have a meaningful impact [ 37 ] to the extent that the style with which parents communicate with and set boundaries within their family has considerable influence on children's nutrition and physical activity behaviours [ 38 ]. In this regard, evaluating outcomes such as parenting style, self-efficacy, and motivation to change lifestyle behaviours can help to contextualize nutrition and physical activity behaviours as well as metabolic outcomes that are influenced by lifestyle. Moreover, the current evidence-base for weight management may not be a suitable proxy for programs for pediatric T2D since many of the contemporary studies of pediatric weight management have been carried out on pre-adolescent children from primarily Caucasian, middle socioeconomic families. The current cohort of pediatric T2D patients includes (primarily) less affluent families of minority ethnic/racial backgrounds as well as families living with generations of chronic disease and co-morbidities of diabetes for which the disparities in health outcomes are well known in the adult T2D population.
Children and youth with T2D are usually overweight or obese [ 17 ]. Current pediatric and adult literature provides good evidence for reducing energy intake and increasing energy expenditure to enable weight management and reduce the risk of T2D. In adults, however, weight loss is not always necessary to improve glycemic control [ 39 ]. Until more evidence is available, it remains unknown whether glycemia can be improved in children and youth with T2D independent of weight loss. However, factors that can impact the achievement and sustainability of healthy lifestyle changes are increasingly being characterized. For example, a comprehensive health assessment prior to intervention enrolment would enable the design of interventions that are tailored to the needs of individuals and families, an advancement that can optimize outcomes in sub-groups with similar features. Weight management interventions that customize treatment based on loss of control eating [ 40 ], melanocortin 4 receptor gene mutation [ 41 ], maternal mental health [ 42 ], and/or motivation [ 43 ] could maximize individual responsiveness to weight management therapies. This degree of sophistication represents a substantial improvement beyond traditional variables (i.e., age, gender, obesity status) that, to date, have determined study inclusion and intervention approaches. This would also provide a degree of intervention sophistication that moves beyond a 'one size fits all' for managing T2D.
Presently, the U.S. National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) is supporting a number of large-scale, multi-centre trials designed to prevent or treat T2D in children and youth under the collaborative titled Studies to Treat Or Prevent Pediatric Type 2 Diabetes (STOPP-T2D). This partnership focuses on treating adolescents already diagnosed with T2D [ 34 ] and on the primary prevention of T2D among middle-school aged youth [ 44 ]. Unfortunately, following the completion of the NIDDK sponsored trial (Treatment Options for type 2 Diabetes in Adolescents and Youth: TODAY) [ 34 ], the independent effects of the dietary and physical activity behavioural changes on glycemic control will remain unknown; the trial does not include an independent lifestyle modification arm. Nevertheless, trials such as these are urgently needed to inform clinical practice.
The strengths of this study include its systematic, comprehensive and unbiased approach. The results of our systematic review should, however, be viewed in light of several limitations. An intrinsic limitation of any systematic review is the potential for publication and selection bias. We acknowledge this methodological drawback and undertook manual searches and contacted recognized experts in pediatric endocrinology. This strategy did not yield any additional unpublished articles that satisfied our inclusion criteria, so it is unlikely that we missed any relevant articles.

Conclusion
In summary, our systematic review indicated that no well-designed studies have evaluated the impact of lifestyle modification on glycemic control in children and youth with T2D. Numerous review articles have been published in this area, but contribute little to our evidence base. Randomized clinical trials must be performed to clearly establish the role of nutrition and physical activity interventions in managing pediatric T2D. These studies might also help to determine the optimal lifestyle treatment approaches for good glycemic control independent of pharmacologic therapy for the pediatric T2D population. We believe that research to examine lifestyle-based therapies, which consider both qualitative and quantitative aspects of nutrition and physical activity in boys and girls with T2D, should remain research and public health priorities.

Background
Our purpose was to evaluate the impact of lifestyle behavior modification on glycemic control among children and youth with clinically defined Type 2 Diabetes (T2D).
Methods
We conducted a systematic review of studies (randomized trials, quasi-experimental studies) evaluating lifestyle (diet and/or physical activity) modification and glycemic control (HbA1c). Our data sources included bibliographic databases (EMBASE, CINAHL®, Cochrane Library, Medline®, PASCAL, PsycINFO®, and Sociological Abstracts), manual reference search, and contact with study authors. Two reviewers independently selected studies that included any intervention targeting diet and/or physical activity, alone or in combination, as a means to reduce HbA1c in children and youth under the age of 18 with T2D.
Results
Our search strategy generated 4,572 citations. The majority of citations were not relevant to the study objective. One study met inclusion criteria. In this retrospective study, morbidly obese youth with T2D were treated with a very low carbohydrate diet. This single study received a quality index score of < 11, indicating poor study quality and thus limiting confidence in the study's conclusions.
Conclusions
There is no high quality evidence to suggest lifestyle modification improves either short- or long-term glycemic control in children and youth with T2D. Additional research is clearly warranted to define optimal lifestyle behaviour strategies for young people with T2D.

Abbreviations
BMI: body mass index; HbA1c: glycated hemoglobin; kg: kilogram; L: liter; m²: metres squared; mmol: millimole; NIDDK: National Institute of Diabetes and Digestive and Kidney Diseases; STOPP-T2D: Studies to Treat Or Prevent Pediatric Type 2 Diabetes; TODAY: Treatment Options for type 2 Diabetes in Adolescents and Youth; T2D: type 2 diabetes; VLCD: very-low-calorie diet; QI: quality index; QOL: quality of life
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
STJ contributed to study design, data collection, abstraction and interpretation, drafted the first manuscript and made subsequent revisions. ASN conceived the study, made substantial contributions to the study design and made critical revisions of early manuscript versions. MC participated in data collection and data abstraction. JB developed the search strategy and conducted the literature search. TTKH, PWF and MMJ provided critical revisions to the manuscript and provided important intellectual contributions. GDCB conceived the study, helped to solidify the study design and interpretation of data, drafted critical revisions, and, as did all authors, approved the final version of the manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2431/10/97/prepub
Supplementary Material

Acknowledgements
STJ, MC, JB, and TT-KH received no external support. PWF is supported by Västerbotten's Health Authority (ALF strategic appointment 2006-2009), the Swedish Heart-Lung Foundation (20070633), and the Swedish Diabetes Association (DIA2006-013). ASN is supported by a Career Development Award from the Canadian Child Health Clinician Scientist Program (funded by the Canadian Institutes of Health Research [CIHR]). GDCB is supported by a Population Health Investigator Award from Alberta Innovates - Health Solutions and a New Investigator Award from CIHR. These study sponsors did not play any role in this research or in the decision to submit the paper for publication. Contents of the publication do not necessarily represent the views or policies of the National Institutes of Health.

License: CC BY. Citation: BMC Pediatr. 2010 Dec 23; 10:97.
PMC3016368 (PMID: 21118531)

Background
This review is the first of two that aim to cover the basic physical principles underlying the most commonly used cardiovascular magnetic resonance (CMR) techniques. There are numerous texts and journal articles that provide excellent, in-depth explanations of MR physics, and in particular CMR physics [ 1 - 5 ]. This review does not intend in any way to supplant these but rather to provide an overview of the key physical principles that underlie the most commonly used CMR techniques. It begins with the basic principles of MR signal generation and image formation, and outlines the principles of cardiac synchronisation and fast, breath-hold imaging. Finally, the principles behind the two most common CMR techniques, anatomical imaging using a double inversion, black-blood spin echo pulse sequence and bright blood functional cine imaging using two gradient echo-based pulse sequences, are described in some detail.
This review has outlined the key physical principles that underlie the most commonly used cardiac MR imaging techniques. The basic principles of MR signal generation and image production have been explained and key imaging parameters have been defined, explaining their influence on image contrast, resolution and acquisition time. It has been shown how fast spin echo and gradient echo imaging techniques can be combined with cardiac synchronisation methods to provide high quality anatomical and functional cine imaging of the heart within a single breath-hold period.

There are many excellent specialised texts and articles that describe the physical principles of cardiovascular magnetic resonance (CMR) techniques. There are also many texts written with the clinician in mind that provide an understandable, more general introduction to the basic physical principles of magnetic resonance (MR) techniques and applications. There are, however, very few texts or articles that attempt to provide a basic MR physics introduction that is tailored for clinicians using CMR in their daily practice. This is the first of two reviews that are intended to cover the essential aspects of CMR physics in a way that is understandable and relevant to this group. It begins by explaining the basic physical principles of MR, including a description of the main components of an MR imaging system and the three types of magnetic field that they generate. The origin and method of production of the MR signal in biological systems are explained, focusing in particular on the two tissue magnetisation relaxation properties (T1 and T2) that give rise to signal differences from tissues, showing how they can be exploited to generate image contrast for tissue characterisation.
The method most commonly used to localise and encode MR signal echoes to form a cross-sectional image is described, introducing the concept of k-space and showing how the MR signal data stored within it relate to properties within the reconstructed image. Before describing the CMR acquisition methods in detail, the basic spin echo and gradient echo pulse sequences are introduced, identifying the key parameters that influence image contrast, including appearances in the presence of flowing blood, resolution and image acquisition time. The main derivatives of these two pulse sequences used for cardiac imaging are then described in more detail. Two of the key requirements for CMR are the need for data acquisition first to be synchronised with the subject's ECG and second to be fast enough for the subject to be able to hold their breath. Methods of ECG synchronisation using both triggering and retrospective gating approaches, and accelerated data acquisition using turbo or fast spin echo and gradient echo pulse sequences, are therefore outlined in some detail. It is shown how double inversion black blood preparation combined with turbo or fast spin echo acquisition is used to achieve high quality anatomical imaging. For functional cardiac imaging using cine gradient echo pulse sequences, two derivatives of the gradient echo pulse sequence, spoiled gradient echo and balanced steady state free precession (bSSFP), are compared. In each case key relevant imaging parameters and vendor-specific terms are defined and explained.

MR system components
A magnetic resonance imaging (MRI) system comprises three main electromagnetic components: a set of main magnet coils, three gradient coils and an integral radiofrequency transmitter coil (Figure 1a). These components each generate a different type of magnetic field which, when applied to a patient in combination, produce spatially encoded magnetic resonance signals that are used to form MR images. The three different types of magnetic field are defined as follows: A strong, constant magnetic field is generated by the main magnet coils. The patient is positioned for imaging within the central bore of the magnet (Figure 1b), where the strength of this field, denoted by the symbol B₀, defines the nominal operating field strength of the particular MRI system. B₀ is measured in units of Tesla (T), with 1 Tesla equal to approximately 20,000 times the earth's magnetic field. Nominal field strengths range from 0.2 T to 3.0 T for commercially produced clinical MR systems, with the most common field strength for cardiac imaging being 1.5 T. A reference coordinate system of three orthogonal axes, x, y and z, is used to define the magnetic field direction, with the z axis chosen to be parallel to the direction of B₀.
A gradient magnetic field that can be rapidly switched on and off is generated by each of the three gradient coils mounted inside the main magnet (Figure 1a). Each of these gradient coils generates a magnetic field in the same direction as B₀ but with a strength that changes with position along the x, y or z direction, according to which gradient coil is used. This gradient field is superimposed onto the B₀ magnetic field so that its strength increases (or decreases) along the direction of the applied gradient field. The strength of the gradient magnetic field reflects the 'steepness' of its slope and is measured in units of millitesla per metre (mT/m).
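Because the gradient adds a linearly varying offset to B₀, the local field along, say, the z axis is B(z) = B₀ + Gz·z. The sketch below illustrates this with hypothetical values (a 1.5 T magnet and a 40 mT/m gradient, chosen for illustration rather than taken from any particular system):

```python
def local_field(b0_tesla, gradient_mt_per_m, position_m):
    """Field at a given position along the gradient axis:
    B(z) = B0 + Gz * z, with the gradient converted from mT/m to T/m."""
    return b0_tesla + (gradient_mt_per_m * 1e-3) * position_m

# Hypothetical: 1.5 T magnet, 40 mT/m gradient, 10 cm from isocentre.
print(round(local_field(1.5, 40.0, 0.10), 4))  # → 1.504 (Tesla)
```

A position on the opposite side of isocentre (negative z) would see a correspondingly lower field, which is what later makes spatial encoding possible.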
A radiofrequency (rf) magnetic field is generated by the rf transmitter coil mounted inside the gradient coil, closest to the patient (Figure 1a). It has a much smaller amplitude than the other magnetic fields, but oscillates at a characteristic frequency in the megahertz range (hence, radiofrequency), the value of which is determined by the nominal field strength of the main magnet. The rf field is often referred to as the B₁ field. The static magnetic field and radiofrequency field combine to generate magnetic resonance signals that are spatially localised and encoded by the gradient magnetic fields to create an MR image. For cardiac imaging, a separate rf receiver coil that is tailored to maximise signal from the heart is normally used to detect the emitted MR signals (Figure 1a).
Generating MR signals
Origin of the MR signal
The primary origin of the MR signal used to generate images is either from water or fat within the patient's tissue; specifically it is from the hydrogen nuclei (consisting of a single proton) contained within free water or lipid molecules. Hydrogen is one of a number of elements, including 31 P, 23 Na and 13 C, whose nuclei exhibit magnetic resonance properties, but its high intrinsic sensitivity and natural abundance in the form of water and lipid molecules make it particularly favourable for imaging. Hydrogen nuclei (single protons) possess an intrinsic property known as nuclear spin that gives rise to a small magnetic field for each proton, known as a magnetic moment. Normally the magnetic moments (spins) are randomly oriented but in the presence of the externally applied B o field, they tend to align either toward or against the externally applied magnetic field. An equilibrium state is quickly attained where there is a small excess of spins aligned with the field (typically just a few per million) as this is the more energetically favourable direction of alignment. The excess of proton magnetic moments combines to form a net magnetic field or net magnetisation. This is often given the symbol M and at equilibrium it is aligned along the positive z axis (along B o ) with the value, M o . It is often shown as an arrow or vector (Figure 2a ).
The size of this net magnetisation is one of the key determinants of the maximum signal intensity that can be generated and used to form images. The greater the applied magnetic field strength, B o , the greater the excess of protons aligned with the magnetic field and the greater the size of the net magnetisation.
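The "few per million" spin excess quoted above can be estimated from the high-temperature Boltzmann approximation. The sketch below is illustrative only and is not part of the original text; it assumes standard physical constants and body temperature (310 K).

```python
# Sketch: estimate the fractional excess of proton spins aligned with B0,
# using the high-temperature Boltzmann approximation (illustrative only).
import math

HBAR = 1.0545718e-34           # reduced Planck constant (J s)
K_B = 1.380649e-23             # Boltzmann constant (J/K)
GAMMA = 2 * math.pi * 42.58e6  # proton gyromagnetic ratio (rad/s/T)

def spin_excess_fraction(b0_tesla, temperature_k=310.0):
    """Fractional population excess aligned with B0 at body temperature."""
    return HBAR * GAMMA * b0_tesla / (2 * K_B * temperature_k)

print(f"{spin_excess_fraction(1.5) * 1e6:.1f} per million at 1.5 T")
```

Note that the excess, and hence the available net magnetisation, scales linearly with B o : doubling the field strength from 1.5 T to 3.0 T doubles the spin excess.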
In order to generate an MR signal from the net magnetisation, the radiofrequency (rf) magnetic field described earlier is generated by the integral rf transmitter coil and used to deliver energy to the population of protons. This field is applied at a particular frequency, known as the Larmor frequency, ω o , that is determined by the equation:

ω o = γ B o
This equation is known as the Larmor equation. The constant γ is called the gyromagnetic ratio and has a value of 42.6 MHz/Tesla for the proton. The Larmor frequency is therefore proportional to the strength of the magnetic field and for 1.5 Tesla, the Larmor frequency is approximately 64 MHz. This is also known as the resonant frequency, as the protons only absorb energy (or resonate) at this characteristic frequency. The rf field is normally applied as a short pulse, known as an rf pulse.
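The proportionality between field strength and resonant frequency can be checked numerically. This small sketch simply evaluates the Larmor equation with the 42.6 MHz/Tesla value quoted above:

```python
# Sketch: the Larmor equation, omega_0 = gamma * B0, using the
# gyromagnetic ratio for the proton quoted in the text (42.6 MHz/T).
GAMMA_MHZ_PER_TESLA = 42.6

def larmor_frequency_mhz(b0_tesla):
    """Resonant (Larmor) frequency in MHz for a given field strength."""
    return GAMMA_MHZ_PER_TESLA * b0_tesla

for b0 in (0.2, 1.5, 3.0):
    print(f"{b0:>4} T -> {larmor_frequency_mhz(b0):6.1f} MHz")
```

At 1.5 T this gives approximately 64 MHz, in agreement with the value stated in the text.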
Radiofrequency pulses and flip angle
Before the rf pulse is switched on the net magnetisation, M o , is at equilibrium, aligned along the z-axis in the same direction as B o (Figure 2a ). When the rf pulse is switched on, the net magnetisation begins to move away from its alignment with the B o field and rotate around it. The speed of this rotational motion, known as precession, is also at the Larmor frequency. The Larmor frequency is therefore also sometimes referred to as the frequency of precession. The movement of the net magnetisation away from alignment with B o is caused by a much slower rotation about the much smaller applied rf field, B 1 . This oscillating field, B 1 is applied as a rotating field at right angles to B o in the plane of the x and y axes. As it rotates at the same frequency as the Larmor frequency, it appears as an additional static field to the rotating net magnetisation vector. The net magnetisation therefore rotates about both the B o and the B 1 fields. As a result of these two rotations, the net magnetisation follows a spiral path from its alignment with the B o field (z-axis) towards a rotational motion in the plane of the x and y axes.
Remember that the net magnetisation is the result of the sum of many individual magnetic moments. So long as they rotate together (a condition known as coherence) they will produce a net magnetisation that is rotating. The greater the amount of energy applied by the rf pulse, the greater the angle that the net magnetisation makes with the B o field (the z axis). This depends upon both the amplitude and duration of the pulse. The rf pulse is switched off once the angle of precession has reached a prescribed value. This is known as the flip angle of the rf pulse (Figure 2b ).
Once the rf pulse has caused the net magnetisation to make an angle with the z-axis, it can be split into two components (Figure 2b ). One component is parallel to the z-axis. This is known as the z-component of the magnetisation, M z , also known as the longitudinal component. The other component lies at right angles to the z axis within the plane of the x and y axes and is known as the x-y component of the net magnetisation, M xy , or the transverse component. The transverse component rotates at the Larmor frequency within the xy plane and as it rotates, it generates its own small, oscillating magnetic field which is detected as an MR signal by the rf receiver coil. Radiofrequency pulses are commonly classified by both their flip angle and by their effect.
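The split into longitudinal and transverse components follows simple trigonometry: M z = M o cos α and M xy = M o sin α for a flip angle α. A minimal sketch (not from the original text):

```python
# Sketch: longitudinal and transverse components of the net magnetisation
# after an rf pulse of a given flip angle, starting from equilibrium.
import math

def magnetisation_components(m0, flip_angle_deg):
    """Return (Mz, Mxy) immediately after a pulse of the given flip angle."""
    a = math.radians(flip_angle_deg)
    return m0 * math.cos(a), m0 * math.sin(a)

for angle in (30, 90, 180):
    mz, mxy = magnetisation_components(1.0, angle)
    print(f"{angle:>3} deg: Mz = {mz:+.3f}, Mxy = {mxy:.3f}")
```

This makes the behaviour of the pulses described below easy to verify: a 90° pulse leaves no longitudinal component and the maximum transverse component, while a 30° pulse leaves most of the magnetisation along z.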
Radiofrequency pulses that generate an MR signal by delivering energy to the hydrogen spin population, causing the magnetisation to move away from its equilibrium position are known as excitation pulses. The 90° rf excitation pulse delivers just enough energy to rotate the net magnetisation through 90° (Figure 2c ). This transfers all of the net magnetisation from the z-axis into the xy (transverse) plane, leaving no component of magnetisation along the z-axis immediately after the pulse. The system of protons is then said to be 'saturated' and the 90° rf pulse is therefore sometimes referred to as a saturation pulse. When applied once, a 90° rf pulse produces the largest possible transverse magnetisation and MR signal. This pulse is used to initially generate the signal for spin echo-based pulse sequences.
Low flip angle rf excitation pulses rotate the net magnetisation through a pre-defined angle of less than 90° (Figure 2b ). A low flip is represented by the symbol α or can be assigned a specific value, e.g. 30°. Only a proportion of the net magnetisation is transferred from the z axis into the xy plane, with some remaining along the z axis. While a low flip angle rf pulse produces an intrinsically lower signal than the 90° excitation pulse described above, it can be repeated more rapidly as some of the magnetisation remains along the z-axis immediately after the pulse. This excitation pulse is used to generate the signal in gradient echo pulse sequences to control the amount of magnetisation that is transferred between the z-axis and the xy plane for fast imaging applications.
The 180° refocusing pulse is used in spin echo pulse sequences after the 90° excitation pulse, where the net magnetisation has already been transferred into the x-y plane. It flips the direction of the magnetisation in the x-y plane through 180° as it rotates at the Larmor frequency (Figure 2d ). This pulse is used in spin echo-based techniques to reverse the loss of coherence caused by magnetic field inhomogeneities (described in the next section).
The 180° pulses are also used to prepare the net magnetisation before the application of an excitation pulse. These are known as inversion pulses and are used in inversion recovery or dark-blood pulse sequences. They are applied when the net magnetisation is at or close to equilibrium and invert the excess population of proton magnetic moments from being aligned to anti-aligned with the B o field (Figure 2e ). Because the resultant magnetisation lies only along the z axis this pulse does not result in a detectable signal. It is used to prepare the z-magnetisation in inversion recovery pulse sequences and in black blood preparation schemes. This type of pulse is therefore also often referred to as a magnetisation preparation pulse.
MR signal characteristics - T1, T2 and T2* relaxation
Immediately after the rf pulse the spin system starts to return back to its original state, at equilibrium. This process is known as relaxation. There are two distinct relaxation processes that relate to the two components of the net magnetisation, the longitudinal (z) and transverse (xy) components. The first relaxation process, longitudinal relaxation, commonly referred to as T1 relaxation, is responsible for the recovery of the z component along the longitudinal (z) axis to its original value at equilibrium. The second relaxation process, transverse relaxation, is responsible for the decay of the xy component as it rotates about the z axis, causing a corresponding decay of the observed MR signal. Longitudinal and transverse relaxation both occur at the same time; however, transverse relaxation is typically a much faster process for human tissue, and the signal decays away long before the spin system returns to its equilibrium state.
T1 relaxation is an exponential process with a time constant T1. For example, if a 90° pulse (a saturation pulse) is applied at equilibrium, the z-magnetisation is saturated (reduced to zero) immediately after the pulse, but then recovers along the z-axis towards its equilibrium value, initially rapidly, slowing down as it approaches its equilibrium value (Figure 3 ). The shorter the T1 time constant is, the faster the relaxation process and the return to equilibrium. Recovery of the z-magnetisation after a 90° rf pulse is sometimes referred to as saturation recovery.
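The saturation recovery curve described above follows M z (t) = M o (1 − e^(−t/T1)). A brief sketch (illustrative only, with arbitrary example times):

```python
# Sketch: exponential recovery of the z-magnetisation after a 90-degree
# (saturation) pulse, with time constant T1.
import math

def mz_saturation_recovery(t_ms, t1_ms, m0=1.0):
    """z-magnetisation at time t after complete saturation (Mz = 0 at t = 0)."""
    return m0 * (1.0 - math.exp(-t_ms / t1_ms))

t1 = 1000.0   # example T1 of 1000 ms (arbitrary illustrative value)
for t in (0, 500, 1000, 3000, 5000):
    print(f"t = {t:>4} ms: Mz = {mz_saturation_recovery(t, t1):.3f}")
```

After one T1 period about 63% of the equilibrium value has recovered, and after five T1 periods the recovery is essentially complete.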
Transverse relaxation can be understood by remembering that the net magnetisation is the result of the sum of the magnetic moments (spins) of a whole population of protons. Immediately after the rf pulse they rotate together in a coherent fashion, so that as they rotate they continuously point in the same direction as each other within the xy plane. The angle of the direction they point at any instant is known as the phase angle, and the spins, having similar phase angles, are said at this initial stage to be 'in phase' (Figure 4 ). Over time, for reasons explained below, the phase angles gradually spread out: there is a loss of coherence, the magnetic moments no longer rotate together, and they are said to move 'out of phase'. The net sum of the magnetic moments is thus reduced, resulting in a reduction in the measured net (transverse) magnetisation. The signal that the receiver coil detects (if no further rf pulses or magnetic field gradients are applied) is therefore seen as an oscillating magnetic field that gradually decays (known as a Free Induction Decay or FID). There are two causes of this loss of coherence. Firstly, the presence of interactions between neighbouring protons causes a loss of phase coherence known as T2 relaxation.
This arises from the fact that the rate of precession for an individual proton depends on the magnetic field it experiences at a particular instant. While the applied magnetic field B o is constant, it is however possible for the magnetic moment of one proton to slightly modify the magnetic field experienced by a neighbouring proton. As the protons are constituents of atoms within molecules, they are moving rapidly and randomly and so such effects are transient and random. The net effect is for the Larmor frequency of the individual protons to fluctuate in a random fashion, leading to a loss of coherence across the population of protons, i.e. the spins gradually acquire different phase angles, pointing in different directions to one another, and are said to move out of phase with one another (this is often referred to as de-phasing). The resultant decay of the transverse component of the magnetisation (M xy ) has an exponential form with a time constant, T2, hence this contribution to transverse relaxation is known as T2 relaxation (Figure 4 ). As it is caused by interactions between neighbouring proton spins it is also sometimes known as spin-spin relaxation. Due to the random nature of the spin-spin interactions, the signal decay caused by T2 relaxation is irreversible.
The second cause for the loss of coherence (de-phasing) relates to local static variations (inhomogeneities) in the applied magnetic field, B o which are constant in time. If this field varies between different locations, then so does the Larmor frequency. Protons at different spatial locations will therefore rotate at different rates, causing further de-phasing so that the signal decays more rapidly. In this case, as the cause of the variation in Larmor frequency is fixed, the resultant de-phasing is potentially reversible. The combined effect of T2 relaxation and the effect of magnetic field non-uniformities is referred to as T2* relaxation and this determines the actual rate of decay observed when measuring an FID signal (Figure 4 ). T2* relaxation is also an exponential process with a time constant T2*.
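If the extra decay caused by the static field inhomogeneity is itself characterised by a time constant (often written T2'), the relaxation rates add: 1/T2* = 1/T2 + 1/T2'. This common textbook form is sketched below; the numerical values are arbitrary assumptions for illustration.

```python
# Sketch: combining irreversible T2 decay with the reversible decay caused
# by static field inhomogeneity (time constant T2'), using the rate sum
# 1/T2* = 1/T2 + 1/T2'. Values are illustrative, not tissue-specific.
def t2_star(t2_ms, t2_prime_ms):
    """Observed FID decay constant T2* from T2 and the inhomogeneity term T2'."""
    return 1.0 / (1.0 / t2_ms + 1.0 / t2_prime_ms)

print(f"T2 = 100 ms, T2' = 50 ms  ->  T2* = {t2_star(100, 50):.1f} ms")
```

As expected, T2* is always shorter than either contributing time constant, which is why the measured FID decays faster than T2 relaxation alone would predict.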
Significance of the T1 value
T1 relaxation involves the release of energy from the proton spin population as it returns to its equilibrium state. The rate of relaxation is related to the rate at which energy is released to the surrounding molecular structure. This in turn is related to the size of the molecule that contains the hydrogen nuclei and in particular the rate of molecular motion, known as the tumbling rate of the particular molecule. As molecules tumble or rotate they give rise to a fluctuating magnetic field which is experienced by protons in adjacent molecules. When this fluctuating magnetic field is close to the Larmor frequency, energy exchange is more favourable. For example, lipid molecules are of a size that gives rise to a tumbling rate which is close to the Larmor frequency and therefore extremely favourable for energy exchange. Fat therefore has one of the fastest relaxation rates of all body tissues and therefore the shortest T1 relaxation time. Larger molecules have much slower tumbling rates that are unfavourable for energy exchange, giving rise to long relaxation times. Free water consists of smaller molecules with a much faster tumbling rate, which is also unfavourable for energy exchange, and it therefore has a long T1 relaxation time. The tumbling rates of water molecules that are adjacent to large macromolecules can however be slowed down towards the Larmor frequency, shortening the T1 value. Water-based tissues with a high macromolecular content (e.g. muscle) therefore tend to have shorter T1 values. Conversely, when the water content is increased, for example by an inflammatory process, the T1 value also increases.
Significance of the T2 value
T2 relaxation is related to the amount of spin-spin interaction that takes place. Free water contains small molecules that are relatively far apart and moving rapidly and therefore spin-spin interactions are less frequent and T2 relaxation is slow (leading to long T2 relaxation times). Water molecules bound to large molecules are slowed down and more likely to interact, leading to faster T2 relaxation and shorter T2 relaxation times. Water-based tissues with a high macromolecular content (e.g. muscle) tend to have shorter T2 values. Conversely, when the water content is increased, for example by an inflammatory process, the T2 value also increases. Lipid molecules are of an intermediate size and there are interactions between the hydrogen nuclei on the long carbon chains (an effect known as J -coupling) that cause a reduction of the T2 relaxation time constant to an intermediate value. Rapidly repeated rf pulses, such as those used in turbo or fast spin echo techniques, can have the effect of reducing J -coupling, resulting in an increased T2 relaxation time and higher signal intensity from fat [ 6 ].
MR echoes
Whilst the FID can be detected as a MR signal, for MR imaging it is more common to generate and measure the MR signal in the form of an echo. This is because the magnetic field gradients that are used to localise and encode the MR signals in space cause additional de-phasing which disrupts the FID. The two most common types of echo used for MR imaging are gradient echoes and spin echoes. The following sections describe how these echoes are generated.
Gradient echoes
Gradient echoes are generated by the controlled application of magnetic field gradients. Magnetic field gradients are used to produce a change in field strength and hence a corresponding change in Larmor frequency along a particular direction. When a magnetic field gradient is switched on it causes proton spins to lose coherence or de-phase rapidly along the direction of the gradient as they precess at different frequencies. This de-phasing causes the amplitude of the FID signal to rapidly drop to zero (Figure 5 ). The amount of de-phasing caused by one magnetic field gradient can however be reversed by applying a second magnetic field gradient along the same direction with a slope of equal amplitude but in the opposite direction. If the second gradient is applied for the same amount of time as the first gradient, the de-phasing caused by the first gradient is cancelled and the FID re-appears. It reaches a maximum amplitude at the point at which the spins de-phased by the first gradient have moved back into phase, or 're-phased'. If the second gradient then continues to be applied, the FID signal de-phases and disappears once more. The signal that is re-phased through the switching of the gradient direction is known as a gradient echo. The time from the point at which the transverse magnetisation (the FID) is generated by the rf pulse, to the point at which the gradient echo reaches its maximum amplitude is known as the echo time (abbreviated TE). This can be controlled by varying the timing of the applied magnetic field gradients. If the echo time is chosen to be longer, more natural T2* de-phasing occurs and the maximum echo amplitude becomes smaller. In practice, the TE is set by the MR system operator (in milliseconds) as it determines, amongst other things, the influence of T2* on the image contrast.
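The de-phase/re-phase behaviour described above can be demonstrated with a small numerical sketch. The spin positions, gradient amplitude and lobe duration below are arbitrary assumed values, and T2* decay is deliberately ignored so that only the gradient effect is visible:

```python
# Sketch: gradient echo formation. A negative gradient lobe de-phases spins
# spread along x; an equal positive lobe re-phases them, with the echo peak
# at twice the lobe duration. All parameter values are illustrative.
import numpy as np

GAMMA = 2 * np.pi * 42.58e6                 # proton gyromagnetic ratio (rad/s/T)
positions = np.linspace(-0.1, 0.1, 2001)    # spin positions along gradient (m)
g = 10e-3                                   # gradient amplitude, 10 mT/m
lobe = 2e-3                                 # duration of the first lobe (s)

def signal(t):
    """|net transverse magnetisation| at time t (T2* decay ignored)."""
    if t <= lobe:
        phase = -GAMMA * g * positions * t             # de-phasing lobe
    else:
        phase = GAMMA * g * positions * (t - 2 * lobe)  # re-phasing lobe
    return abs(np.mean(np.exp(1j * phase)))

for t in (lobe, 2 * lobe, 3 * lobe):
    print(f"t = {t * 1e3:.0f} ms: signal = {signal(t):.3f}")
```

The signal collapses during the first lobe, recovers to its full amplitude at t = 2 × lobe (the echo time), and de-phases again if the second gradient continues, exactly as in Figure 5.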
Spin echoes
Spin echoes are generated by the application of a 180° refocusing rf pulse after the 90° excitation pulse (Figure 6 ). While the de-phasing caused by T2 relaxation is a random, irreversible process, the additional de-phasing caused by the presence of static magnetic field inhomogeneities is potentially reversible. At a certain time after the initial generation of the FID signal, a proportion of the relative phase change for each proton spin is related to the local value of the applied magnetic field. The application of a 180° refocusing pulse rotates the spins through 180°, effectively changing the sign of the relative phase change within the xy plane. Where the previous relative phase change was positive due to a locally increased field, the 180° pulse causes it to become negative and vice versa. As the local field variations remain fixed, the spins still continue to have the same Larmor frequency, so a spin in an increased field continues to gain in phase, while a spin in a decreased field continues to lose phase. Because the sign of their phase shifts has been swapped halfway through by the 180° refocusing pulse, the spins all come back into phase causing the FID to increase in amplitude, reaching a maximum at the echo time, TE. For the spin de-phasing caused by the field non-uniformities to be completely reversed at time TE, the 180° pulse must be applied at time TE/2. The signal that re-appears (re-phases) through the application of the 180° rf refocusing pulse is known as a spin echo. After reaching a maximum amplitude at time TE, the signal again de-phases due to the T2* relaxation process. For the purposes of imaging, magnetic field gradients are also applied during the de-phasing period and during the measurement of the spin echo.
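The refocusing mechanism can be checked numerically: give each spin a fixed, random off-resonance frequency (representing static field inhomogeneity), negate the accumulated phase at TE/2, and the population re-phases exactly at TE. The numbers below are arbitrary illustrative values, and T2 relaxation is ignored for clarity:

```python
# Sketch: spin echo refocusing. Each spin precesses at a fixed random
# off-resonance frequency; the 180-degree pulse at TE/2 negates the
# accumulated phase, so all spins re-phase at TE. Values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
offsets = rng.normal(0.0, 200.0, size=5000)   # static off-resonance (rad/s)
te = 20e-3                                    # echo time (s)

phase_before_180 = offsets * (te / 2)         # de-phasing up to TE/2
phase_after_180 = -phase_before_180           # 180-degree pulse negates phase
phase_at_te = phase_after_180 + offsets * (te / 2)  # continued precession

echo = abs(np.mean(np.exp(1j * phase_at_te)))
fid = abs(np.mean(np.exp(1j * offsets * te)))       # no refocusing pulse
print(f"echo amplitude {echo:.3f} vs unfocused signal {fid:.3f}")
```

Without the refocusing pulse the signal at TE is almost completely destroyed by the inhomogeneity; with it, the de-phasing is fully reversed, which is precisely why the spin echo is insensitive to static field non-uniformities.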
Spin echo versus gradient echo
In general, because the 180° refocusing pulse removes the de-phasing caused by magnetic field inhomogeneities, the amplitude of the spin echo signal is greater than the gradient echo signal. Imaging based on spin echo is also less affected by the presence of field inhomogeneities caused by metallic artefacts (e.g. sternal wires or metallic heart valves). Gradient echo imaging is however more affected by the presence of magnetic field inhomogeneities caused by iron and so is useful, for example, in the assessment of patients with increased iron deposition within the heart and liver.
Localising and encoding MR signals to make an image
The MR echo signals produced above can be localised and encoded by applying magnetic field gradients as they are generated to produce an image. This is because the application of a magnetic field gradient causes the strength of the magnetic field and hence, the Larmor frequency to depend on position along that direction. The sections that follow describe the most commonly used method to build up a cross-sectional 2-dimensional image (or image slice) using a combination of rf pulses and gradient magnetic fields.
Step 1 - Selection of an image slice
First, the resonance of protons is confined to a slice of tissue. This is done by applying a gradient magnetic field at the same time as the rf excitation pulse is transmitted (Figure 7 ). The frequency of the rf pulse corresponds to the Larmor frequency at a chosen point along the direction of the applied gradient. The result is for resonance only to occur for protons in a plane that cuts through that point at right angles to the gradient direction, effectively defining a slice of tissue. This process is known as slice selection and the gradient is known as the slice selection gradient, G S . The orientation of the slice is determined by the direction of the applied gradient known as the slice selection direction (in the example of Figure 7 this is the z-direction). Rather than just a single frequency, the transmitted rf pulse is comprised of a small range of frequencies, known as the transmit bandwidth of the rf pulse. This gives the slice a thickness. The thickness of the slice is determined by the combination of the rf pulse bandwidth and the steepness (or strength) of the gradient.
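The relationship between rf bandwidth, gradient strength and slice thickness can be written as thickness = bandwidth / (γ × G). A minimal sketch (the bandwidth and gradient values are assumed for illustration):

```python
# Sketch: slice thickness from the rf transmit bandwidth and the slice
# selection gradient strength: thickness = bandwidth / (gamma * G).
# The example values are illustrative, not from the original text.
GAMMA_HZ_PER_TESLA = 42.58e6   # proton gyromagnetic ratio (Hz/T)

def slice_thickness_mm(rf_bandwidth_hz, gradient_mt_per_m):
    """Selected slice thickness (mm) for a given bandwidth and gradient."""
    g_t_per_m = gradient_mt_per_m * 1e-3          # mT/m -> T/m
    return rf_bandwidth_hz / (GAMMA_HZ_PER_TESLA * g_t_per_m) * 1e3

print(f"{slice_thickness_mm(1000, 10):.2f} mm for 1 kHz bandwidth at 10 mT/m")
```

This makes the trade-off explicit: for a fixed rf bandwidth, a steeper (stronger) gradient selects a thinner slice, and doubling the gradient halves the thickness.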
Step 2 - Phase encoding
Following slice selection, a phase encoding gradient, G P , is applied for a specified period (Figure 8 ). This causes the protons to rotate at different frequencies according to their relative position along the gradient. Where the gradient increases the magnetic field, the protons acquire a higher frequency of precession, while where the gradient decreases the magnetic field, the protons acquire a lower frequency of precession. The protons therefore also constantly change their relative phase according to their position along the gradient. When the gradient is switched off, the protons will have changed their relative phase by an amount depending on their position along the gradient. This process is known as phase encoding and the direction of the applied gradient is known as the phase encoding direction.
Step 3 - Frequency encoding
Following the phase encoding gradient, the frequency encoding gradient, G F , is applied in a direction at right angles to it and in a similar way causes the protons to rotate at different frequencies according to their relative position along that gradient direction (Figure 8 ). This gradient is applied for longer, and at the same time the signal is measured or digitally sampled. The signal is comprised of a range of frequencies (or bandwidth), corresponding to the Larmor frequencies of the proton magnetic moments at their different locations along the gradient. This process is known as frequency encoding; the direction of the frequency encoding gradient defines the frequency encoding direction. The phase encoding and frequency encoding processes in steps 2 and 3 are further illustrated in an animation provided in Additional File 1 .
In summary, to localise the MR signal in three dimensions, three separate magnetic field gradients are applied in a three step process. For the examples in Figure 7 and 8 these gradients are applied in sequence with the slice-selection gradient, G S applied along the z-axis, the phase-encoding gradient, G P applied along the y-axis and the frequency-encoding gradient, G F applied along the x- axis (Figure 9 ). This defines a slice perpendicular to the z axis i.e. a slice oriented in the transaxial plane. Other slice orientations are obtained by re-assigning each of the gradients to a different axis. An angled slice is obtained by combining gradients along two or more axes to perform each of the localisation tasks. The ability to define an arbitrary slice orientation is a key strength of magnetic resonance imaging, especially for cardiac applications, which necessitate double oblique angled slices to achieve standard views of the cardiac chambers and valve planes.
Note that in Figure 9 additional gradient pulses are shown both after the slice selection gradient and before the frequency encoding gradient. These extra gradient pulses are required to counteract de-phasing that is caused by these two imaging gradients, to ensure the maximum possible signal at the centre of the MR signal echo. The additional gradient pulses are applied along the same direction as the imaging gradients, but with opposite slope, so that the transverse magnetisation is brought back into phase. For the slice selection gradient, de-phasing only occurs during the second half of the slice selection gradient since the transverse magnetisation is only generated halfway through the applied rf pulse. It is therefore followed by a re-phasing gradient that is only half the length of the slice selection gradient. This ensures that de-phasing that occurs along the slice selection gradient is reversed. The frequency encoding gradient is normally preceded by a de-phasing gradient so that when the frequency encoding gradient is applied, the de-phasing is reversed by the first half of the frequency encoding gradient and the signal echo reaches its maximum amplitude at the centre of the sampling period.
Image reconstruction
The frequency encoded signal is analysed using a Fourier transform. This is a mathematical tool that transforms the time-dependent MR signal into its different frequency components (Figure 10 ). The amplitude of each frequency component can be mapped onto a location along the frequency encoding gradient to determine the relative amount of signal at each location. The field of view in the frequency encoding direction is defined by the operator in mm or cm. The range of frequencies across this field of view is known as the receiver bandwidth and is determined by the amplitude (or slope) required for the frequency encoding gradient.
While analysis of the encoded MR signal by the Fourier Transform provides the frequency content of the signal, the phase changes imparted by the phase encoding gradient cannot be decoded by a similar process. The Fourier Transform can only analyse a signal that changes over time. To enable this, a number of signal echoes are generated by repeating the above three-step process (slice selection, phase encoding and frequency encoding), each time applying the same slice selection and frequency encoding gradient, but a different amount of phase encoding (Figure 11 ). This is done by increasing the strength (or slope) of the phase encoding gradient for each repetition by equal increments or steps. For each phase encoding step the signal echo is measured, digitised and stored in a raw data matrix. The time interval between each repetition is known as the repetition time, TR. Once all the signals for a prescribed number of phase encoding steps have been acquired and stored, they are analysed together by a two-dimensional (2D) Fourier transform to decode both the frequency and the phase information (Figure 12 ).
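The raw data matrix and the reconstructed image are linked by a 2D Fourier transform pair, which can be demonstrated directly with numpy. The toy image below stands in for acquired data; the forward transform plays the role of acquisition and the inverse transform recovers the image:

```python
# Sketch: 2D Fourier reconstruction. The forward FFT stands in for signal
# acquisition (filling the raw data matrix); the inverse 2D FFT recovers
# the image. The toy 64x64 image is purely illustrative.
import numpy as np

image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0          # a bright square on a dark background

# "Acquisition": the raw data matrix is the 2D Fourier transform of the image.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Reconstruction: the inverse 2D Fourier transform recovers the image.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print("max reconstruction error:", np.max(np.abs(recon - image)))
```

With fully sampled data the reconstruction is exact (up to floating-point precision), which is why acquiring the complete raw data matrix yields a faithful image.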
Repetition time and image acquisition time
The repetition time, TR, is another important parameter that can be set by the operator in milliseconds. As we will see later it not only determines how fast MR images can be acquired but also affects the image contrast. The number of pixels in the phase encoding direction of the reconstructed image is determined by the number of phase encoding steps used, N P . The spatial resolution of the image therefore also depends on the number of phase encoding steps, and as a consequence is often limited by the image acquisition time, as

image acquisition time = N P × TR
If a greater spatial resolution is required in the phase encoding direction (for a fixed field of view), the number of pixels in that direction (sometimes referred to as the acquired image matrix size) must be increased. This requires a greater number of repetitions, and therefore a longer image acquisition time.
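The acquisition-time relation is simple enough to evaluate directly. The example below assumes a cardiac-gated acquisition where TR equals one RR interval at 60 beats per minute (1000 ms); the numbers are illustrative:

```python
# Sketch: image acquisition time = number of phase encoding steps x TR.
# Example assumes cardiac gating with TR = one RR interval at 60 bpm.
def acquisition_time_s(n_phase_steps, tr_ms):
    """Total acquisition time in seconds for a basic Cartesian scan."""
    return n_phase_steps * tr_ms / 1000.0

print(f"256 steps at TR = 1000 ms: {acquisition_time_s(256, 1000):.0f} s")
print(f"128 steps at TR = 1000 ms: {acquisition_time_s(128, 1000):.0f} s")
```

Halving the number of phase encoding steps halves the scan time but also halves the pixel count (and hence the resolution) in the phase encoding direction for a fixed field of view, which is the trade-off described above.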
k-space
The way that the MR signals are generated and encoded by the use of magnetic field gradients gives rise to a particular relationship between the data points in the signal and those in the image. A single data point in an MR signal contributes a particular attribute to the whole image. Conversely, a single pixel in the image may have contributions from all of the MR signals collected. Just as each pixel occupies a unique location in image space, each point of an MR signal echo belongs to a particular location in a related space known as k-space [ 7 ]. There is an inverse relationship between the image space and k-space (Figure 12 ). Whereas the coordinates of the image represent spatial position (x and y), the coordinates of k-space represent 1/x and 1/y, sometimes referred to as spatial frequencies, k x and k y . The value of each point in k-space represents how much of a particular spatial frequency is contained within the corresponding image.
A spatial frequency is difficult to picture. An image consisting of a single spatial frequency looks like a wave propagating across the image with bright and dark peaks and troughs (Figure 13 ). A low spatial frequency (arising from a point near the centre of k-space) has peaks and troughs far apart and thus contributes mostly to the signal content and contrast of the image. A high spatial frequency (arising from a point near the edge of k-space) has peaks and troughs close together and thus contributes fine detail or edges, effectively defining the spatial resolution of the image. To make an image that is a totally faithful representation of the imaged subject, it is important that the whole range of spatial frequencies is acquired (up to a maximum that defines the spatial resolution of the image), i.e. that the whole of k-space is covered. For standard imaging this is done by filling k-space with equally spaced parallel lines of signal data, line by line, along the k x direction. This is known as a Cartesian acquisition (Figure 12 ). The phase encoding gradient determines the position of the line being filled in the k y direction. Usually the amplitude of the phase encoding gradient is incremented in steps such that the next adjacent line in k-space is filled with each successive repetition, starting at one edge of k-space and finishing at the opposite edge. This is known as a linear phase encoding order (Figure 14 ). Choosing a different phase encoding step order is particularly important in some dynamic applications such as contrast enhanced angiography, where it is important to acquire the contrast information immediately at the start of the image data acquisition once the contrast agent reaches a particular vessel segment. In this case, the phase encoding gradient is incremented from zero, but with an alternating sign, starting at the centre of k-space and working outwards to the edges of k-space, known as centric or low-high k-space order (Figure 14 ).
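The claim that the centre of k-space carries most of the signal content can be checked numerically. For a smooth test image, a small central block of k-space holds nearly all of the signal energy; the toy image below is illustrative only:

```python
# Sketch: energy distribution in k-space. For a smooth image, the low
# spatial frequencies at the centre of k-space carry almost all of the
# signal energy, while the edges contribute only fine detail.
import numpy as np

img = np.outer(np.hanning(64), np.hanning(64))   # smooth toy "image"
k = np.fft.fftshift(np.fft.fft2(img))
energy = np.abs(k) ** 2

total = energy.sum()
centre = energy[24:40, 24:40].sum()              # central 16x16 block only
print(f"central ~6% of k-space holds {centre / total:.1%} of the energy")
```

This is exactly why centric phase encoding orders acquire the k-space centre first in contrast enhanced angiography: the contrast information of the image is dominated by those central lines.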
Pulse sequences and image contrast
Image contrast and weighting
One of the most important advantages of MR imaging over other imaging modalities is the ability to generate contrast between different soft tissue types. This is because different types of soft tissue have different characteristic T1 and T2 relaxation times. The dependence of the MR signal for a particular tissue on its relaxation properties is controlled by the choice of the pulse sequence parameters. For spin echo pulse sequences the excitation flip angle is fixed at 90° and the choice of TR and TE alone controls the influence of a tissue's T1 and T2 relaxation times on the signal. For gradient echo pulse sequences, the TR, TE and flip angle control the influence of a tissue's T1 and T2* relaxation times on the signal.
Spin echo contrast and weighting
For spin echo pulse sequences the addition of a 180° refocusing pulse removes the effect of T2* relaxation and determines that the amplitude of the spin echo is influenced by T2 relaxation only. The TR and TE are chosen to weight the image contrast so that it is either primarily dependent upon the differences in T1 relaxation times (T1-weighted), or primarily dependent on the differences in T2 relaxation times (T2-weighted). If the parameters are chosen so that the image contrast is influenced by neither the T1 nor the T2 differences, the tissue signal is said to be primarily 'proton density' weighted. The TR controls the T1 weighting, while the TE controls the T2 weighting.
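This weighting behaviour can be made concrete with the standard spin echo signal model, S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The relaxation times below are illustrative textbook-style values for 1.5 T, not figures from this article:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin echo signal model: saturation recovery times T2 decay."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative relaxation times (ms): fat has a short T1, fluid a long T1 and T2
fat   = dict(pd=1.0, t1=260,  t2=80)
fluid = dict(pd=1.0, t1=2500, t2=1400)

# T1-weighted: short TR and short TE -> fat bright, fluid dark
t1w_fat   = spin_echo_signal(**fat,   tr=500, te=15)
t1w_fluid = spin_echo_signal(**fluid, tr=500, te=15)

# T2-weighted: long TR and long TE -> fluid bright, fat suppressed by T2 decay
t2w_fat   = spin_echo_signal(**fat,   tr=2500, te=100)
t2w_fluid = spin_echo_signal(**fluid, tr=2500, te=100)

print(t1w_fat > t1w_fluid, t2w_fluid > t2w_fat)  # True True
```

The same function with a long TR and short TE yields similar, high signals for both tissues, i.e. proton density weighting.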
T1-weighted spin echo
The parameter choice for T1-weighted spin echo is a short TR and short TE (Figure 15 ). The choice of a short TR determines that tissues with a long T1 (e.g. fluid) will recover less than those with a short T1 (e.g. fat). This determines the initial value of the transverse magnetisation, M xy , when the next rf pulse is applied. Tissues that recover more slowly will have a smaller longitudinal magnetisation before the next rf pulse, resulting in a smaller transverse magnetisation after the rf pulse. The short TE limits the influence of the different T2 decay rates. The resultant contrast is therefore said to be T1-weighted. T1-weighted spin echo images are typically characterised by bright fat signal and a low signal from fluid and are useful for anatomical imaging where high contrast is required between fat, muscle and fluid. For cardiac imaging the pulse sequence is synchronised with the cardiac cycle and so the TR is determined by the patient's heart rate. For T1-weighted imaging the TR is set to one RR interval and the TE is set to a short value to minimise T2 weighting.
T2-weighted spin echo
The parameter choice for T2-weighted spin echo is a long TR and long TE (Figure 16 ). The choice of a long TR allows the z-magnetisation to recover close to the equilibrium values for most of the tissues, therefore reducing the influence of differences in T1 relaxation time. The longer echo time, however, allows more decay of the xy component of the magnetisation. The differential rate of decay between a tissue with a short T2 (e.g. muscle) and a tissue with a long T2 (e.g. fluid) leads to a difference in signal that is said to be T2-weighted. The short T2 leads to a reduced signal intensity, while the long T2 leads to an increased signal intensity. These images are characterised by bright fluid and are useful for the depiction of fluid collections and the characterisation of cardiac masses and oedema. For T2-weighted imaging with cardiac synchronisation, the TR is set to two or three RR intervals, depending on the heart rate, to provide a long TR and minimise T1-weighting.
Proton density-weighted spin echo
The parameter choice for proton density-weighted spin echo is a long TR and short TE (Figure 17 ). The choice of long TR allows recovery of the z-magnetisation for most tissues, therefore reducing the influence of differences in T1 relaxation time, and the 90° excitation pulse therefore transfers a similar amount of signal into the xy plane for all tissues. The choice of a short TE limits the amount of T2 decay for any tissue at the time of measurement. This results in a high signal from all tissues, with little difference between them. So the signal amplitude is not particularly affected by the T1 relaxation properties, or by the T2 relaxation properties. The primary determinant of the signal amplitude is therefore the equilibrium magnetisation of the tissue and the image contrast is said to be 'proton density'-weighted. This type of weighting is useful where the depiction of anatomical structure is required, without the need to introduce soft tissue contrast.
Black blood contrast of spin echo pulse sequences
The spin echo pulse sequence generates images that have intrinsic black blood contrast when imaging fast moving blood flowing through the image slice [ 8 ]. This is because it uses two pulses, the 90° and 180° pulses, to produce the spin echo signal. Both of these pulses are slice-selective but are separated by a time equal to half the echo time (Figure 18 ). The transverse magnetisation of blood flowing through the slice that moves out of the slice between the two pulses is not refocused by the 180° pulse and does not contribute to the generation of a spin echo. If the flow is sufficiently rapid for all the blood receiving the 90° pulse to move out of the slice, this results in a signal void. This effect is also known as the spin washout effect, which describes the 'washout' of proton spins from the image slice that would otherwise be refocused and contribute to the spin echo signal. When there is significant blood flow through the slice, this 'black blood' appearance provides high intrinsic contrast between the blood pool and the heart and blood vessel walls, which, in the early days of cardiac MR, made spin echo the pulse sequence of choice for anatomical imaging. The spin washout effect is reduced, however, where blood moves either slowly through or within the plane of the image slice. This results in a loss of dark blood contrast and a high residual blood signal that can lead to significant ghosting artefacts. The spin echo-based approach has therefore been modified for widespread clinical use by the addition of a black blood magnetisation preparation scheme [ 9 ] which provides more reliable black blood contrast (see later).
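The washout condition can be put into rough numbers: all the excited blood escapes the slice if it traverses the slice thickness within the TE/2 interval between the two pulses. The slice thickness and TE below are illustrative, not taken from this article:

```python
def fully_washed_out(velocity_mm_per_s, slice_thickness_mm, te_ms):
    """All excited blood leaves the slice if it covers the slice thickness
    within TE/2, the interval between the 90-degree and 180-degree pulses."""
    half_te_s = (te_ms / 2) / 1000.0
    return velocity_mm_per_s * half_te_s >= slice_thickness_mm

# For an 8 mm slice and TE = 20 ms, blood must cover 8 mm in 10 ms,
# i.e. flow faster than about 0.8 m/s, for a complete signal void
print(fully_washed_out(900, 8, 20))   # True: fast through-plane flow -> signal void
print(fully_washed_out(100, 8, 20))   # False: slow flow -> residual blood signal
```

Slow or in-plane flow fails this condition, which is why the double inversion preparation described later is needed in practice.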
Gradient echo contrast and weighting
There are a number of types of gradient echo pulse sequence, each having quite different contrast behaviour [ 10 ]. The two main types of gradient echo pulse sequence used for cardiac cine imaging have the generic names, spoiled gradient echo and balanced steady state free precession (bSSFP). MR manufacturers also have their own names for these pulse sequences and these are also given in the following sections:
Spoiled gradient echo
Siemens: FLASH (Fast Low Angle Shot)
Philips: T1-FFE (T1-weighted Fast Field Echo)
GE: SPGR (Spoiled GRASS: Gradient Recalled Acquisition in the Steady State)
Gradient echo pulse sequences in cardiac imaging typically use very short TR values (<10 milliseconds) which gives rise to a more complex contrast behaviour. The TR values used are much shorter than the T2 relaxation times of blood and myocardium. This means that unless the transverse magnetisation generated by each rf pulse is destroyed after it has been sampled, it would still exist when the next rf pulse is applied. This can potentially contribute to, or interfere with, the signal during the following TR. In spoiled gradient echo, this signal is de-phased (or spoiled) either using a spoiler gradient at the end of each TR period, or by using a technique known as rf spoiling [ 11 ] so that its contribution to subsequent TR periods is suppressed.
Spoiled gradient echo pulse sequences follow a similar contrast behaviour to that described for spin echo, however there are some key differences. For gradient echo pulse sequences the absence of a 180° refocusing pulse determines that the amplitude of the gradient echo at the TE is influenced by T2* relaxation. Furthermore, a variable flip angle for the excitation pulse, as well as the TR and TE, is used to control image contrast. These three parameters can be chosen to weight the image contrast so that it is either primarily dependent upon the differences in T1 relaxation times (T1-weighted), or primarily dependent on the differences in T2* relaxation times (T2*-weighted). The use of a low flip angle is important for spoiled gradient echo techniques [ 12 , 13 ] as it allows the TR to be reduced to much lower values than are possible for spin echo techniques (Figure 19 ). The low flip angle, α, is normally chosen to be less than 90° (typically 30° or less). While this initially results in a smaller transverse magnetisation (and therefore signal), as only a proportion of the z-magnetisation is transferred into the xy plane, the magnetisation that remains along the z axis returns back to its equilibrium value sooner, allowing the repetition time to be reduced. In this case, a much larger transverse magnetisation is achieved following the subsequent low flip angle pulses, compared to that generated by a train of 90° pulses in combination with the same very short TR. This is known as low flip angle imaging and it forms the main basis by which spoiled gradient echo pulse sequences are used for fast imaging.
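The advantage of a low flip angle at short TR can be checked numerically with the standard steady-state spoiled gradient echo signal equation, S = sin α · (1 − E1) / (1 − E1 cos α) with E1 = e^(−TR/T1). The T1 and TR values below are illustrative:

```python
import math

def spoiled_gre_signal(flip_deg, tr_ms, t1_ms):
    """Steady-state spoiled gradient echo signal (the TE/T2* factor is omitted)."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    return math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

# Myocardium-like T1 (illustrative ~850 ms) with a very short TR of 5 ms
s90 = spoiled_gre_signal(90, 5, 850)   # a train of 90-degree pulses saturates the tissue
s30 = spoiled_gre_signal(30, 5, 850)   # a low flip angle leaves z-magnetisation to recover

# The signal-maximising (Ernst) angle for this TR/T1 combination
ernst_deg = math.degrees(math.acos(math.exp(-5 / 850)))

print(s30 > s90)  # True: low flip angle imaging gives more signal at very short TR
```

For this TR/T1 combination the Ernst angle works out to only a few degrees, which is why flip angles of 30° or less are typical for fast spoiled gradient echo imaging.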
T1-weighted spoiled gradient echo
For spoiled gradient echo, T1-contrast is controlled by both the TR and the flip angle.
Cardiac cine imaging requires very short repetition times to be used and so the resultant spoiled gradient echo sequence, with both a short TR (<10 ms) and TE (<5 ms), combined with a flip angle of around 30°, essentially behaves as a T1-weighted pulse sequence. As a very short TR is used, myocardial tissue or blood that remains in the slice becomes saturated. This sequence thus relies on the flow of blood to generate contrast.
T2*-weighted spoiled gradient echo
T2* weighting with spoiled gradient echo pulse sequences is achieved by increasing the TR and TE to relatively long values. As the T2* values for tissues are shorter than the T2 values, the echo times chosen to achieve T2* weighting with gradient echo are also much shorter than the echo times required to achieve T2 weighting with spin echo sequences. For T2*-weighted gradient echo, the image contrast is strongly influenced by the presence of magnetic susceptibility effects and can be used to detect the presence of iron, for example where there is haemorrhage or iron loading of tissue [ 14 ].
Balanced steady state free precession (bSSFP)
GE: FIESTA (Fast Imaging Employing STeady state Acquisition)
Philips: bFFE (Balanced Fast Field Echo)
Siemens: TrueFISP (True Fast Imaging with Steady Precession)
Balanced SSFP gradient echo sequences are designed to ensure that the transverse magnetisation is not spoiled but brought back into phase at the end of each TR period when the next rf pulse is applied. This then carries over into the next repetition and is superimposed onto the transverse magnetisation generated by that rf pulse. After a number of repetitions this gives rise to a steady state condition where the transverse magnetisation from two or three successive repetition periods combine to give a much greater signal [ 15 , 16 ].
The contrast behaviour of bSSFP sequences is very different to that of the spoiled gradient echo sequences. SSFP contrast is related to the tissue's T2/T1 ratio, with fluid and fat in particular appearing brighter than other tissues. Because the transverse magnetisation originating from several TRs is combined, the MR signal amplitude for bSSFP is much greater compared to spoiled gradient echo. The increased signal allows higher receiver bandwidths to be used, resulting in a shorter TE and TR compared to spoiled gradient echo pulse sequences and therefore improved imaging efficiency. However, if the magnetic field is not uniform, the transverse magnetisation from different TRs can destructively cancel rather than add together in areas of magnetic field inhomogeneity, making the SSFP technique prone to dark banding artefacts across the image [ 16 ]. It is therefore very important to ensure that the magnetic field is as uniform as possible over the region of interest to achieve images that are free of banding artefacts. This is achieved by a patient-specific process called dynamic shimming which uses the magnetic field gradients to correct for patient induced field inhomogeneities. Keeping the TR as short as possible also helps to minimise the banding artefacts observed in bSSFP imaging.
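The T2/T1 dependence can be illustrated with the commonly quoted short-TR approximation for the on-resonance bSSFP steady state, S ≈ sin α / (1 + cos α + (1 − cos α) · T1/T2). The relaxation times below are illustrative values, not figures from this article:

```python
import math

def bssfp_signal(flip_deg, t1_ms, t2_ms):
    """Approximate on-resonance bSSFP steady-state signal for TR << T1, T2."""
    a = math.radians(flip_deg)
    return math.sin(a) / (1 + math.cos(a) + (1 - math.cos(a)) * (t1_ms / t2_ms))

# Illustrative values: blood has a much higher T2/T1 ratio than myocardium
blood      = bssfp_signal(60, t1_ms=1200, t2_ms=200)
myocardium = bssfp_signal(60, t1_ms=870,  t2_ms=50)

print(blood > myocardium)  # True: blood appears bright against the myocardium
```

This intrinsic blood-myocardium contrast, independent of inflow, is what makes bSSFP the workhorse for cine imaging at 1.5 T.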
Bright blood contrast of gradient echo pulse sequences
In contrast to the spin echo sequence, the gradient echo sequence only uses one rf pulse to generate the signal and so the spin washout effect does not apply and the signal from flowing blood is usually visible. Indeed, rather than suffering from a reduction in signal, flowing blood often appears with an apparently increased signal, compared to the surrounding tissues [ 17 ]. The gradient echo pulse sequence is therefore commonly referred to as a bright blood imaging technique.
The very short TR (TR < 10 ms) employed for the purpose of fast imaging means that the magnetisation of tissue that remains in the image slice becomes partially saturated as rf pulses are rapidly applied to the same tissue, as there is little time for recovery of the z-magnetisation between pulses (Figure 20 ). This has the effect of reducing the signal from stationary tissue or blood that remains within the slice. Flowing blood that moves into the slice, however, has not received any previous pulses and the spin population is therefore fully magnetised. The moving blood is therefore able to generate a much higher signal than the surrounding tissue, thus the blood signal appears enhanced or bright. This effect is known as inflow enhancement and is particularly important for spoiled gradient echo pulse sequences for which saturation of the blood signal plays a greater part. When there is significant blood flow through the slice, the bright blood signal provides good intrinsic contrast between the blood pool and the heart and blood vessel walls. The flow enhancement effect in spoiled gradient echo techniques is also used as the basis for time-of-flight MR angiography (TOF MRA). Where the blood is flowing slowly through the slice or in a direction within the plane of the image slice, inflow enhancement is reduced and the bright blood contrast is reduced. Inflow enhancement plays less of a role in bSSFP pulse sequences as the bright blood signal mainly arises because of the intrinsic contrast based on the higher T2/T1 ratio for blood compared to that of the myocardium and vessel walls.
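The partial saturation of stationary tissue can be sketched by iterating the longitudinal magnetisation between rapid rf pulses: each pulse tips Mz by the flip angle and only partial T1 recovery occurs during the short TR, whereas in-flowing blood arrives fully magnetised. The flip angle, TR and T1 below are illustrative:

```python
import math

def steady_state_mz(flip_deg, tr_ms, t1_ms, n_pulses=200):
    """Longitudinal magnetisation just before a pulse after many rapid rf pulses
    (M0 = 1). Each iteration: tip by the flip angle, then recover over one TR."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    mz = 1.0
    for _ in range(n_pulses):
        mz = 1 - (1 - mz * math.cos(a)) * e1
    return mz

a = math.radians(30)
mz_ss = steady_state_mz(30, tr_ms=5, t1_ms=850)  # tissue that stays in the slice
stationary_signal = mz_ss * math.sin(a)          # partially saturated -> weak signal
inflow_signal = 1.0 * math.sin(a)                # fresh in-flowing blood: full M0

print(inflow_signal > stationary_signal)  # True: flowing blood appears bright
```

The large ratio between the two signals is the inflow enhancement exploited by spoiled gradient echo imaging and time-of-flight MR angiography.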
Spin echo vs. gradient echo
The 90° excitation pulse used by the spin echo pulse sequence transfers all of the available z magnetisation into the transverse plane. This, combined with the 180° refocusing pulse, gives the largest possible signal provided the magnetisation is allowed to recover sufficiently between repetitions. These two attributes make the spin echo technique ideally suited when the primary goal is to achieve images with a high signal-to-noise ratio and a reduced sensitivity to artefacts caused by magnetic field inhomogeneities. Fast gradient echo pulse sequences are used where imaging speed is more important than image quality. The absence of the 180° refocusing pulse in the gradient echo sequence leads to signal loss in the presence of magnetic susceptibility effects and at the boundaries between water and fat-based tissues. Flowing blood also appears differently between the two sequences, with spin echo giving an intrinsic black blood appearance and gradient echo pulse sequences giving an intrinsic bright blood appearance. The key differences between gradient echo and spin echo-based techniques are summarised in Table 1 .
Cardiovascular MR
Synchronising with the cardiac cycle
Capturing an image of the heart that is unaffected by motion requires the data to be acquired in just a few tens of milliseconds. This means both limiting the number of phase encoding steps (and thus the spatial resolution) and making the TR as short as possible. Whilst this can be done, it is at the cost of accepting a significant reduction in image quality. On the other hand, to achieve acceptable image quality, the image acquisition time becomes too long to 'freeze' heart motion. For routine CMR therefore, the MR signals are acquired over multiple heart beats, synchronising the pulse sequence and therefore the signal acquisition to a particular time point in the cardiac cycle. Cardiac synchronisation is achieved by using the patient's ECG signal, obtained by applying ECG pads and leads onto the patient's chest (Figure 21 ). Software is used to detect the 'R' wave of the ECG and to generate a synchronisation pulse which is then used to synchronise the MR data acquisition [ 18 ]. This enables images of the beating heart to be obtained either at a single time point (still imaging) or at multiple time points through the cardiac cycle (cine imaging).
Dealing with respiratory motion
For conventional spin echo and gradient echo imaging techniques, the phase encoding gradient is incremented with each successive heart beat, acquiring a single line of k-space each heart beat and resulting in imaging times of several minutes. This means that images using these techniques are degraded by respiratory motion. Image degradation caused by respiratory motion can be reduced by using one of three possible approaches, namely respiratory compensation methods (respiratory gating), cardiac synchronised fast imaging techniques combined with patient breath-holding or ultra-fast (single-shot) imaging techniques (the so called real-time imaging techniques described later). In practice, most cardiac imaging is performed with patient breath-holding combined with fast imaging techniques and these are described in the following section.
Fast imaging techniques
Conventional imaging techniques acquire only one phase encoding step (one line of k-space) per heart beat. Thus the TR for those pulse sequences is defined by the patient's heart rate and is equal to the R-R interval. It therefore invariably takes several minutes to acquire an anatomical image dataset with conventional spin echo (SE) or a cine image dataset with conventional gradient echo sequences (spoiled gradient echo or bSSFP pulse sequences). In order to overcome this limitation to achieve shorter image acquisition times, fast imaging techniques acquire more than one line of k-space in each heart beat [ 19 ]. This fills up k-space more rapidly, leading to shorter image acquisition times. Spin echo and gradient echo pulse sequences that use this principle are known as turbo or fast pulse sequences.
Still Imaging (Black blood anatomical imaging)
The ECG synchronisation technique used for still imaging is known as triggering. The synchronisation pulse is used as a trigger to initiate the pulse sequence at a particular time point after the R-wave in each cardiac cycle. This time point is known as the trigger delay and is selectable by the system operator to determine the point in the cardiac cycle at which the heart is to be imaged. This still imaging approach can be used for myocardial viability assessment or coronary angiography anatomical imaging, but the most routine application is to use it in combination with a fast or turbo spin echo pulse sequence to acquire black blood images for anatomical imaging.
Turbo (or fast) spin echo
Philips, Siemens: TSE (Turbo Spin Echo)
GE: FSE (Fast Spin Echo)
The conventional spin echo (SE) pulse sequence generates a single spin echo signal by the use of an excitation pulse followed by a 180° refocusing pulse. The turbo or fast spin echo pulse sequence [ 20 , 21 ] generates multiple echoes by applying multiple 180° pulses after the initial 90° pulse (Figure 22 ). Each time a spin echo becomes de-phased due to the presence of magnetic field inhomogeneities, the de-phasing is reversed by the application of a further 180° pulse, generating a further corresponding spin echo. Each echo is used to fill a new line of k-space by applying a different amount of phase encoding to each echo, prior to data sampling. The number of echoes acquired for each excitation pulse is known as the echo train length (ETL) or 'turbofactor'. This effectively defines the factor by which the pulse sequence is accelerated. For example, if an ETL of 4 is chosen, four lines of k-space are filled within each TR rather than one. Typically echo train lengths of 15 or 16 are used in order to reduce the imaging time to within a breath-hold period. Note that each successive echo in the echo train has a different echo time, with the amplitude of each echo diminishing as its echo time increases according to T2 decay. The effective echo time is defined as that of the echo which is acquired closest to the centre of k-space (with the smallest phase encoding gradient) as this is the echo that has the greatest influence on the image contrast. In comparison to conventional spin echo imaging, images acquired using turbo or fast spin echo pulse sequences are characterised by high signal intensity from fat, despite its intermediate T2 value [ 6 ]. This increase in signal is attributed to the application of the rapidly repeated 180° rf pulses, which breaks down an interaction known as J-coupling that is present between the hydrogen nuclei in molecules with long carbon chains.
This interaction has the effect of reducing the T2-relaxation time and therefore the signal intensity in conventional spin echo images. For cardiac imaging, turbo or fast spin echo pulse sequences are commonly used in combination with a double inversion 'black-blood' magnetisation preparation scheme to acquire anatomical images of the heart and major vessels.
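The acceleration and the effective TE can both be illustrated with simple arithmetic: scan time scales as the number of k-space lines divided by the ETL, and the echo filling the centre of k-space sets the contrast. The matrix size, RR interval and echo spacing below are illustrative values:

```python
# Illustrative acquisition: 192 phase encoding lines, RR interval of 1000 ms,
# one excitation (one shot) per heart beat
lines, rr_ms = 192, 1000

conventional_s = lines * rr_ms / 1000        # one line per heart beat: 192 s
turbo_s = (lines / 16) * rr_ms / 1000        # ETL (turbofactor) of 16: 12 s

print(conventional_s, turbo_s)  # the turbo version fits in a breath-hold

# Each echo in the train has its own TE. With an echo spacing of 8 ms and the
# 4th echo acquired closest to the centre of k-space, the effective TE is:
effective_te = 4 * 8
print(effective_te)  # 32 (ms)
```

Reordering which echo fills the centre of k-space is exactly how the effective TE, and hence the T2 weighting, is adjusted in the black blood sequences described below.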
Black Blood Double Inversion preparation pulses
Reliance on the spin washout effect alone to produce dark blood contrast often leads to inconsistent results due to insufficient blood flow. In order to improve the effectiveness of black blood imaging it is common to use a black blood magnetisation preparation scheme in combination with the spin echo pulse sequence [ 9 , 22 ]. The preparation scheme consists of the addition of two 180° rf inversion pulses followed by a delay, prior to the spin echo pulse sequence (Figure 23 ). The first 180° pulse inverts the magnetisation of all blood and tissues within range of the rf body transmitter coil. The second 180° pulse re-inverts the magnetisation only within the slice of tissue to be imaged. The net effect of these two pulses is to invert the magnetisation of blood and tissue outside the slice, while the magnetisation within the slice remains close to its equilibrium value. There is then a time delay before the excitation pulse (Time from Inversion, TI). During this time, the inverted blood magnetisation recovers due to T1 relaxation from its initial negative value, towards its positive equilibrium value. The TI is calculated to be equal to the time it takes the inverted magnetisation of blood to pass through zero. At that time, the 90° excitation pulse of the turbo or fast spin echo pulse sequence is applied. During the same TI period, blood flow causes the blood with inverted magnetisation to move into the image slice, replacing the blood that has remained at equilibrium. As the spin echo pulse sequence is applied at the same time as the inverted blood magnetisation reaches zero, no signal is produced from the blood. The double inversion pulse black blood preparation scheme provides much better signal suppression as the time delay used here (TI) is much greater than the time period (TE/2) that gives rise to the intrinsic black blood contrast of the conventional spin echo pulse sequence.
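In the idealised long-TR case, the inverted blood magnetisation recovers as Mz(t) = M0(1 − 2e^(−t/T1)) and crosses zero at TI = T1 · ln 2. In practice the scanner software shortens this to account for the finite TR, so the calculation below is only the simple limiting case, with an illustrative T1 for blood:

```python
import math

t1_blood_ms = 1200  # illustrative T1 of blood at 1.5 T

def mz_after_inversion(t_ms, t1_ms):
    """Longitudinal magnetisation recovering from full inversion (long-TR case)."""
    return 1 - 2 * math.exp(-t_ms / t1_ms)

# The inverted blood gives no signal when Mz crosses zero, at TI = T1 * ln(2)
ti_null_ms = t1_blood_ms * math.log(2)

print(round(ti_null_ms))  # ~832 ms for this T1
```

Applying the 90° excitation at this moment is what suppresses the in-flowing blood far more reliably than washout alone.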
Anatomical imaging with black blood FSE/TSE pulse sequences
The most commonly used pulse sequence for anatomical imaging combines the black-blood preparation scheme with the turbo or fast spin echo pulse sequence (Figure 24 ). The black-blood preparation scheme provides consistently high contrast between the heart and vessel walls and the blood pool. The use of the turbo or fast spin echo pulse sequence with an echo train length (turbofactor) of between 15 and 20 shortens the image acquisition time so that it falls within a typical breath-hold period. One or two slices are typically acquired within each breath-hold period. Adjustment of the k-space order within the echo train controls the effective echo time and therefore the T2-weighting of the contrast. For T1 weighting, a short effective echo time is chosen and the pulse sequence is triggered every heart beat to keep the repetition time short. For T2 weighting, a long effective echo time is used and the pulse sequence is triggered only every two or three heart beats to achieve a long repetition time. Frequency selective fat suppression may also be applied to suppress the signal from fat if required. The time delay (TI) after the black blood preparation scheme is automatically calculated by the MR system software to provide the best suppression of signal from blood. This depends on the TR of the pulse sequence which is determined by the patient's heart rate and the number of heart beats between each trigger pulse.
A common problem with this pulse sequence is loss of signal from the myocardium due to motion of the re-inverted myocardial tissue out of the image slice between the time of the black-blood preparation scheme and the time of the turbo or fast spin echo data acquisition. This effect can be reduced by increasing the thickness of the slice of tissue that is re-inverted by the second 180° pulse of the black-blood preparation scheme. While the image slice thickness may be typically 6-8 mm, a more appropriate value for the black blood inversion preparation pulse is 20 mm. The exact choice depends on how much displacement of the myocardium there is through the slice and it requires some adjustment depending on the trigger delay, slice orientation and location within the heart.
Cine Imaging
Cine imaging requires very short repetition times to be used and therefore can only be achieved using gradient echo-based pulse sequences. It involves the acquisition of data at multiple time points, known as cardiac phases, throughout the cardiac cycle (Figure 25 ). The trigger delay for the first time point is set to the shortest possible time after the R wave to enable images to be acquired from the beginning of the cardiac cycle. Data acquired within each cardiac phase fills a separate k-space, resulting in the reconstruction of a separate image corresponding to each cardiac phase. The images for all the cardiac phases are then viewed as a movie sequence or cine, allowing functional assessment of the heart, its wall motion and a visual, qualitative assessment of blood flow.
Triggering versus retrospective gating for cine imaging
For cine imaging, cardiac synchronisation can be performed in either of two ways: ECG triggering or ECG gating (Figure 25 ). With ECG triggering the shortest possible trigger delay is used to commence data acquisition immediately after the QRS complex. Data is then acquired for multiple consecutive cardiac phases until nearly the end of the cardiac cycle. Data acquisition is then stopped until the synchronisation pulse from the next 'R'-wave is received. This method requires the system to estimate an average R-R interval for the patient being imaged (this is either entered by the operator or captured from the ECG trace by the MR system). This is then used to determine the average length of the cardiac cycle over which data can be acquired and therefore how many cardiac phases can be acquired.
A consequence of this approach is that there is a 'blind spot' where no data is acquired at the end of the cardiac cycle while the system waits for the next trigger pulse. This is a disadvantage if imaging of diastolic function or mitral and tricuspid valve function is important. An alternative is to use retrospective ECG gating [ 23 ]. Here the pulse sequence runs continuously with a short TR. The synchronisation pulse is used to record when a repetition of the pulse sequence is coincident with the 'R'-wave. The MR signal data from this and subsequent repetitions are then allocated to the corresponding time points in the cardiac cycle at the end of the entire k-space acquisition. With some refinement, retrospective gating can be used successfully when imaging patients with small beat-to-beat variations in RR interval. The method acquires data from the whole of each heart beat, so that heart beats of different lengths will have different numbers of data points recorded. At the end of the data acquisition, an average heart beat interval is calculated from the whole acquisition. The time intervals between data points acquired from shorter heart beats are stretched and data from longer heart beats are compressed to fit the average heart beat interval, ensuring that all points in the cardiac cycle are imaged. The use of retrospective gating is essential for applications such as imaging mitral or tricuspid valve function or atrial contraction.
While retrospective gating works well for small beat-to-beat variations in the R-R interval, imaging of patients with large beat to beat variations is problematic. For occasional arrhythmias, there is usually an option for data points acquired from excessively long or short heart beats to be rejected and reacquired. This is known as arrhythmia rejection. In cases where there are many arrhythmias however, rejection of data is not practical and the only options are to revert to a triggered data acquisition if only systolic information is required, or to use a 'real-time' image data acquisition for which ECG synchronisation is not required [ 24 - 27 ]. The latter approach can only be taken at the expense of temporal and spatial resolution.
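The stretch-to-average step of retrospective gating can be sketched as a rescaling of each acquisition's time stamp by the ratio of the average RR interval to its own beat's RR interval. The beat lengths and time stamps below are invented for illustration:

```python
# Time stamps (ms after each R wave) recorded during three beats of different length
beats = [
    (900,  [0, 300, 600]),   # (RR interval, acquisition times within that beat)
    (1000, [0, 333, 667]),
    (1100, [0, 367, 733]),
]

avg_rr = sum(rr for rr, _ in beats) / len(beats)  # average heart beat interval

# Map every time stamp onto the average cardiac cycle: data from short beats is
# stretched and data from long beats compressed, so all beats contribute to the
# same set of cardiac phases
remapped = [[t * avg_rr / rr for t in times] for rr, times in beats]

print(avg_rr)       # 1000.0
print(remapped[0])  # the short beat's time stamps, stretched to fit
```

Arrhythmia rejection then amounts to discarding any beat whose RR interval falls too far from this average before the remapping is done.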
Turbo (or Fast) Gradient echo pulse sequences
Acceleration of cine gradient echo imaging is achieved simply by rapidly repeating the gradient echo pulse sequence a number of times to acquire a number of lines of k-space within each cardiac phase (Figure 26 ). Each group of k-space lines acquired is known as a shot. This is repeated for each cardiac phase and then for each heart beat, each time acquiring a different group of lines in each successive heart beat until the whole of k-space is filled. This is known as segmented k-space gradient echo imaging, as k-space is segmented into a series of groups of lines [ 28 , 29 ], and is sometimes referred to as multiple shot imaging.
The parameter that defines the number of lines of k-space acquired in each shot is dependent upon the manufacturer as follows:
Philips: turbofactor
Siemens: no of segments*
GE: views per segment
(*Note that on the Siemens interface, a single line of k-space is called a segment). This determines the acceleration factor for a particular pulse sequence. For functional imaging it also determines the length of the acquisition window corresponding to each phase of the cardiac cycle. Increasing the 'turbofactor' decreases the scan time (shortens the length of breath-hold) but increases the acquisition window for each cardiac phase, thus limiting the number of cardiac phases that can be imaged, resulting in a lower cine frame rate or poorer temporal resolution. In order to maximise the number of cardiac phases and minimise the breath-hold period, the ability of an MR system to achieve very short TR values is therefore an advantage.
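This trade-off can be quantified: the breath-hold lasts (lines / turbofactor) heart beats, while the acquisition window per cardiac phase is turbofactor × TR. The matrix size, TR and RR interval below are illustrative values:

```python
def cine_tradeoff(lines, turbofactor, tr_ms, rr_ms):
    """Breath-hold duration and temporal resolution of a segmented cine scan."""
    breath_hold_s = (lines / turbofactor) * rr_ms / 1000  # one segment per phase per beat
    window_ms = turbofactor * tr_ms                       # acquisition window per cardiac phase
    n_phases = rr_ms // window_ms                         # cine frames per cardiac cycle
    return breath_hold_s, window_ms, n_phases

# Doubling the turbofactor halves the breath-hold but doubles the acquisition
# window, halving the number of cardiac phases (coarser temporal resolution)
print(cine_tradeoff(160, 8,  3, 1000))
print(cine_tradeoff(160, 16, 3, 1000))
```

A shorter achievable TR improves both sides of the trade-off at once, which is why it is singled out above as an advantage.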
For breath hold cine gradient echo imaging, this method of accelerated image acquisition can be applied to both spoiled gradient echo and bSSFP pulse sequences commonly used for cardiac imaging. The vendor-specific names for the 'turbo' or 'fast' versions of these sequences are given below:
Fast spoiled gradient echo
Siemens: TFL (TurboFLASH)
Philips: T1-TFE (T1-weighted Turbo Field Echo)
GE: FSPGR (Fast SPoiled GRASS)
Balanced steady state free precession (bSSFP)
Siemens: Segmented TrueFISP
Philips: BTFE (Balanced Turbo Field Echo)
GE: FIESTA (Fast Imaging Employing STeady state Acquisition)
Functional imaging with turbo or fast cine gradient echo pulse sequences
Imaging of cardiac function, including the assessment of wall motion and volumetric assessment, is performed using a 'bright-blood' cine gradient echo technique that acquires images at multiple heart phases which are displayed as a movie. The most common approach to cine imaging is to combine retrospective gating with a turbo or fast gradient echo method (Figure 27 ). This allows imaging of the entire cardiac cycle within a single breath-hold period. The choice of gradient echo pulse sequence depends on the field strength and the specific application. At 1.5 Tesla the balanced SSFP gradient echo sequence is used for most functional imaging applications and volumetric assessments due to the high intrinsic contrast it achieves between blood and myocardium throughout the cardiac cycle [ 30 - 33 ]. The presence of artefacts initially limited the use of the bSSFP pulse sequence at 3.0T [ 34 ], although it has been used successfully in clinical practice and recent technical advances have indicated that improvements in quality and reliability of this method at 3.0T are possible [ 35 - 37 ]. The spoiled gradient echo pulse sequence is often used for the assessment of valvular disease and flow jets, due to its greater flow sensitivity. It is also often used in preference to the bSSFP technique at 3.0T as it is less prone to artefacts [ 34 , 36 ]. A comparison of clinical applications for spoiled gradient echo and bSSFP techniques is given in Table 2 .
A key parameter for cine pulse sequences is the number of lines of k-space acquired within each heart phase (turbofactor, number of segments, or number of views per segment). Increasing the value of this parameter shortens the acquisition time, but also increases the time between cardiac phases. This reduces the number of cardiac phases within the cardiac cycle and therefore the cine frame rate or 'temporal resolution' of the image acquisition (the ability to resolve faster motion). 'Real-time' imaging is achieved by selecting a very high turbofactor [ 24 - 27 ], such that the whole image acquisition is completed in a single heart beat (a single-shot acquisition). Since all the phase encoding steps are acquired in a single heart beat, cardiac synchronisation is not required for real-time imaging. The drawback of the high turbofactor is a poor cine frame rate. An acceptable frame rate can only be achieved by reducing the total number of k-space lines acquired. Real-time imaging therefore suffers from poor temporal and spatial resolution, although the use of parallel imaging [ 38 - 40 ] can help to preserve spatio-temporal resolution.
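The trade-offs described above reduce to simple arithmetic: the temporal resolution of each cine frame is the number of views per segment multiplied by the repetition time (TR), and the breath-hold duration is set by how many heartbeats are needed to fill all phase-encode lines. A minimal sketch (the TR, matrix size and heart-rate values below are illustrative assumptions, not vendor defaults):

```python
# Sketch of the segmented cine timing arithmetic described above.

def cine_timing(tr_ms, pe_lines, views_per_segment, rr_ms):
    """Return (temporal resolution per cine frame in ms,
    cardiac phases per cycle, heartbeats needed for the breath-hold)."""
    temporal_res = views_per_segment * tr_ms          # ms per cine frame
    n_phases = int(rr_ms // temporal_res)             # frames per cardiac cycle
    heartbeats = -(-pe_lines // views_per_segment)    # ceil: one segment acquired per beat
    return temporal_res, n_phases, heartbeats

# Segmented breath-hold cine: TR = 3 ms, 192 phase-encode lines,
# 12 views per segment, heart rate 60 bpm (RR interval = 1000 ms).
print(cine_timing(3.0, 192, 12, 1000.0))   # (36.0, 27, 16)

# Raising the turbofactor to 192 makes it a single-shot ('real-time')
# acquisition: no cardiac gating needed, but the frame rate collapses.
print(cine_timing(3.0, 192, 192, 1000.0))  # (576.0, 1, 1)
```

Raising the turbofactor from 12 to 192 shortens the scan from sixteen heartbeats to one, but collapses the cine to a single frame per cycle; this is why real-time imaging must instead cut the total number of k-space lines (for example via parallel imaging) to recover an acceptable frame rate.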
Competing interests
The author declares that they have no competing interests.
Authors' contributions
JPR drafted, read, revised and approved the final manuscript.
Supplementary Material

Acknowledgements
The author wishes to thank John Biglands for providing the additional animation file.

Citation: J Cardiovasc Magn Reson. 2010 Nov 30; 12(1):71. License: CC BY.
PMC3016369 (PMID 21138589)

Background
Injury to the anterior cruciate ligament (ACL) of the knee is common[ 1 ]. A recent population-based study reported that 80% of knee ligament surgery involved the ACL[ 2 ]. Following complete rupture of the ACL, insufficient re-vascularization of the ligament prevents it from healing completely, creating a need for reconstruction[ 3 ]. Anterior cruciate ligament reconstruction aims to reinstate the functional stability of the knee; in turn, preventing further damage to the menisci and reducing the risk of degenerative osteoarthritis[ 1 , 4 ]. The early success of reconstructive surgery has led to the progression from open extra-articular stabilisation to arthroscopic anatomic grafting[ 5 - 7 ].
A variety of grafts are available for use in ACL reconstruction surgery. Broadly, all grafts can be placed into one of three categories: autologous grafts, allografts, and synthetic ligaments. Currently, autologous grafts such as hamstring and bone-patellar tendon grafts are widely used[ 8 ]. Autologous grafts provide a strong scaffold for in-growth of collagen fibers, without the risk of graft rejection[ 9 ]. However, autologous grafts carry a risk of harvest site morbidity and require prolonged avoidance of activities during revascularization (while the graft itself has a reduced tensile strength) for a period of up to 12 months[ 10 - 12 ]. Allografts are less common; although they eliminate harvest site morbidity, they carry risks of graft rejection, viral infection, slower healing, and higher failure rates[ 9 , 11 - 13 ].
Synthetic materials were first used in ACL reconstruction in the 1980s to improve the strength and stability of the graft immediately post operatively, reduce donor site morbidity and eliminate the potential for disease transmission[ 11 , 14 , 15 ]. The first synthetic ligaments were associated with high rates of failure and reactive synovitis[ 14 , 16 ]. Over the last two decades, with advancing technology, new types of synthetic ligaments have been developed. One of these synthetic ligaments, the Ligament Advanced Reinforcement System (LARS), has recently gained popularity with some orthopaedic surgeons and in the media[ 17 , 18 ].
The LARS is a non-absorbable synthetic ligament device made of polyethylene terephthalate polyester fibres[ 16 , 19 ]. The ligament is highly cleaned to remove potential machining residues and oils to further encourage soft tissue in-growth and reduce the risk of reactive synovitis[ 16 ]. The intra-articular portion, or scaffold, of the ligament consists of multiple parallel fibres twisted at 90 degree angles[ 16 , 20 ]. This design aims to prevent the fibre breakdown that was previously seen in grafts made from woven materials. Additionally, this design is thought to facilitate even tensioning of the graft fibres during knee movement[ 16 ]. The scaffold provides a meshwork for the injured ligament to heal and repair[ 16 , 21 ]. One in-vitro laboratory study has demonstrated cellular growth after six months, subsequent to seeding of human fibroblast and osteoblast-like cells onto the LARS[ 16 ].
Traditional ACL reconstruction techniques require debriding of the torn ACL fibres and the synovial lining that normally envelops the ligament, in order to visualise the position for the graft[ 17 , 22 ]. The LARS surgical technique uses an intraoperative image intensifier X-ray to position the tunnels for the LARS through the ACL stump and is therefore able to leave the synovial lining and the torn ACL fibres in situ. The proposed advantage of this technique is reduced trauma to the soft tissues of the knee and reduced surgical time[ 17 ]. The ACL stump is anchored to the meshwork of the LARS to support it in an optimum position while healing. Overall, the LARS surgical technique aims to maximise in-growth of the original ACL tissue, thus preserving some vascular and proprioceptive nerve supply.
Aim
The potential advantages of LARS are immediate graft stability, reduced rehabilitation time and quicker return to pre-injury function. Despite the current popularity of LARS and some promising clinical results, no systematic review has yet evaluated its effectiveness in terms of these advantages. Therefore the aim of this systematic review was to assess the current best available evidence for the effectiveness of the LARS as a surgical option for symptomatic anterior cruciate ligament rupture in terms of graft stability, rehabilitation time and return to pre-injury function.
This systematic review included studies using subjects with symptomatic ACL ruptures undergoing LARS reconstruction. Studies using other types of synthetic ligaments and studies of posterior cruciate ligament reconstruction were excluded. Studies published in a language other than English or in non-peer reviewed journals were also excluded. No gender or age limits were placed on the search. All types of comparison groups were included (either control, conservative or alternative surgical intervention). Outcomes of interest included knee stability measures, surgical complication rates, quality of life (QOL), function, and return to sport.
Search Strategy
Searches were performed in May 2010 on the following databases: MEDLINE, CINAHL, PubMed, SPORTSDiscus, Embase, Ovid, PEDro, Cochrane Controlled Trials Register (DARE and CCRCT) and Google Scholar. No date limits were set. The following keywords were used: 'Ligament augmentation and reconstruction system', OR 'Ligament advancement reinforcement system', OR 'LARS', AND knee, OR 'cruciate ligament', 'anterior cruciate'. Truncation symbols were utilized as appropriate across the different databases. References of retrieved studies were reviewed for further potentially relevant studies. Duplicates were removed to create a master list.
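Read literally, the keyword list above combines two concept groups. One plausible rendering of the combined boolean query is sketched below; the grouping is an interpretation, and exact quoting and truncation syntax vary by database, so this is an illustrative sketch rather than the authors' verbatim search string:

```python
# Hypothetical assembly of the two keyword groups into one boolean query;
# the grouping of terms is an interpretation of the strategy in the text.
lars_terms = [
    '"Ligament augmentation and reconstruction system"',
    '"Ligament advancement reinforcement system"',
    '"LARS"',
]
knee_terms = ['knee', '"cruciate ligament"', '"anterior cruciate"']

query = "({}) AND ({})".format(" OR ".join(lars_terms), " OR ".join(knee_terms))
print(query)
```

Each database (MEDLINE, CINAHL, PubMed, and so on) would receive its own adaptation of this string, with truncation symbols applied as appropriate.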
Hierarchy of evidence
As this review addressed an effectiveness question, only research studies from a quantitative research paradigm were included. The National Health and Medical Research Council (NHMRC) hierarchy of evidence was used to determine the level of evidence of studies (Levels I to IV)[ 23 ]. An initial search of the evidence revealed a limited number of published, peer-reviewed quantitative studies. Consequently, this review considered all experimental, quantitative research designs within the NHMRC hierarchy of evidence. This approach allowed an exploration of the best available evidence on LARS.
Quality Evaluation
The modified Law critical appraisal tool was used to appraise the methodological quality of studies[ 24 ]. The Law critical appraisal tool was chosen because of its generic nature (applicable to all quantitative research designs) and the authors' prior experience in using the tool. The Law critical appraisal tool contains twelve criteria, each requiring a yes or no answer and each representing a key element of the methodological quality of a study. Each criterion was given a score of one for 'yes' and zero for 'no' answers. Additionally, for case series designs, criterion 11 (drop-outs) was not applicable and therefore was not given a score. Each study was independently critically appraised by three authors [ZM, IS, SK]. Disagreements were resolved through discussion until consensus was achieved. A copy of this tool is provided in Additional Files.
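The scoring rule described above can be sketched as follows (the dictionary layout and the truncation of the percentage are assumptions for illustration; the figures match the whole-number percentages reported in the Results, such as 83% for 10/12 and 81% for 9/11):

```python
# Minimal sketch of the appraisal scoring: each Law criterion scores 1 for
# 'yes' and 0 for 'no'; criteria marked not applicable (criterion 11 for
# case series) are dropped from the denominator. Percentage is truncated.

def appraisal_score(answers):
    """answers: dict of criterion number -> 'yes' | 'no' | 'na'."""
    applicable = {k: v for k, v in answers.items() if v != "na"}
    score = sum(1 for v in applicable.values() if v == "yes")
    return score, len(applicable), int(100 * score / len(applicable))

# A study meeting 10 of the 12 criteria scores 83%:
ten_of_twelve = {**{i: "yes" for i in range(1, 11)}, 11: "no", 12: "no"}
print(appraisal_score(ten_of_twelve))   # (10, 12, 83)

# A case series where criterion 11 (drop-outs) is not applicable:
# 9 'yes' answers out of 11 applicable criteria give 81%.
nine_of_eleven = {**{i: ("yes" if i <= 9 else "no") for i in range(1, 13)}, 11: "na"}
print(appraisal_score(nine_of_eleven))  # (9, 11, 81)
```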
Data Extraction
Data were extracted from the individual studies by two reviewers [ZM, IS]. Data relating to study characteristics such as study population, comparison groups, and follow up periods were collected to gain an overview of the included studies. To gain an understanding of the benefits associated with LARS, details of surgical times, complication rates and outcomes related to knee stability were collected. Data regarding QOL, function, and return to sport were collected in order to gain an understanding of patient-relevant outcomes post surgery. Finally, details of rehabilitation programs and timeframes were collected in order to potentially provide recommendations for clinical practice.
Body of evidence Framework
To better interpret and understand the findings of this review, the National Health and Medical Research Council (NHMRC) body of evidence framework was used[ 25 ]. The authors have experience in successfully using this framework and operationalizing it across varying bodies of evidence[ 26 ]. This framework considers multiple dimensions of evidence for all included studies, and based on this framework, evidence-based recommendations can be drawn. The components of the NHMRC framework are evidence base, consistency, clinical impact, generalisability, and applicability of the research. The applicability component was not used in this review, as it focuses on the applicability of research findings to specific local settings (for example Australia). As this review was targeted at a larger, international audience, applicability to one local health care setting was not considered relevant.
Search Results
Search results and reasons for study exclusion are outlined in Figure 1 . Sixteen studies were excluded from twenty potentially relevant studies. All of the excluded English-language studies were of case-series design (Level IV). Overall, four studies met the inclusion criteria for this review[ 8 , 27 - 29 ]. All were published between 2000 and 2010.
Methodological quality of included studies
Two studies scored ten out of twelve on the Law critical appraisal tool (83%)[ 27 , 29 ]. Two studies (case series) scored nine out of eleven (81%)[ 8 , 28 ]. All four studies provided a description of their sample population, but no study justified its sample size. An adequate description of the process used to calculate sample size is important; inadequate description could potentially indicate sampling bias[ 30 ]. Three studies did not adequately control for co-intervention and contamination, making interpretation of their results more difficult[ 8 , 27 , 28 ]. Two studies included co-morbidities such as meniscal pathology[ 8 , 27 ]. Two studies used magnetic resonance imaging (MRI) to exclude subjects with visible degenerative changes or combined ligament injury[ 8 , 28 ]. Meniscal lesions, if found, were treated concurrently with ACL reconstruction. Individual methodological quality scores are displayed in Table 1 .
Characteristics of studies
One randomised controlled trial, [ 29 ] one cohort study, [ 27 ] and two retrospective case series were identified[ 8 , 28 ]. One retrospective case series used a retrospective comparison group[ 28 ]. All four studies included populations with chronic ACL ruptures. For the purpose of this review, chronic was defined as being greater than three months from injury to surgery. Follow up periods ranged from 2 months to 5 years. Table 2 provides an overview of the characteristics and post operative rehabilitation protocols.
Two studies used a mixed population of acute and chronic ACL ruptures[ 8 , 27 ]. Lavoie and colleagues included patients with acute or subacute injured knees, in addition to chronic ACL ruptures[ 27 ]. No definitions of acute and subacute were provided by the authors. Gao and colleagues included both acute and chronic populations and defined acute injury as duration less than 3 months and chronic as greater than 3 months[ 8 ]. The use of mixed populations complicated interpretation of these studies' findings, as it generated subgroups within their populations. Neither study provided a subgroup analysis to compare outcomes in acute and chronic presentations. The cohort study by Lavoie and colleagues included patients with associated pathologies and a history of previous knee surgery, whereas the remaining three studies chose to exclude these patients. Two studies utilised comparison groups; comparing traditional surgical techniques to LARS[ 28 , 29 ].
All four studies differed in their post operative rehabilitation protocols. The first study did not report its protocol[ 27 ]. The second study reported using the same protocol for both comparison groups (BPB and LARS), but provided very little detail of the protocol itself[ 29 ]. The third study used a similar protocol for both groups with significantly reduced timeframes for their LARS interventional group[ 28 ]. The fourth study provided a general outline of their rehabilitation aims for the first 6 months post operatively[ 8 ].
Individual study results
Nau and colleagues compared ipsilateral bone-patellar tendon-bone autologous graft with LARS in a population of chronic, symptomatic, ACL ruptures[ 29 ]. This study demonstrated that LARS was comparable to bone patella bone reconstruction in terms of subjective functional scores over a 24 month period. The authors commented on the high likelihood of return to high-level activity in the LARS group, but did not provide statistical analysis to support this contention.
Lavoie and colleagues in a cohort study evaluated patient satisfaction scores for knee stability following ACL reconstructive surgery using LARS[ 27 ]. Their study population consisted of subjects with ACL rupture and included associated pathologies such as meniscal tears. This study concluded that the LARS could be considered as a viable option for ACL reconstruction in terms of patient satisfaction. Interestingly, positive patient satisfaction scores were reported despite ongoing knee laxity (average posterior-anterior displacement scores of 7.3 mm).
Liu and colleagues in a retrospective case series compared the effectiveness of the LARS to matched controls who had received traditional ACL reconstruction using a four-strand hamstring autologous graft (4SHG)[ 28 ]. All subjects had a period of more than four months since time of injury to time of surgery, and were hence classified as chronic by our definition. This study demonstrated that both the LARS and the 4SHG surgical interventions can result in improvements in functional outcomes after four years.
Gao and colleagues in a retrospective, multicentre case series assessed the clinical outcome of LARS reconstruction, with a 3 to 5 year follow up assessment[ 8 ]. LARS surgery was only performed on subjects who, on arthroscopic finding, had a viable ACL stump for the LARS to pass through. Prior to surgery, subjects gave consent for the LARS procedure but were informed that without a viable stump a more traditional approach, either BPB or hamstring tendon autologous graft would be performed. Additional surgical intervention was performed on meniscal injuries, when present. This study concluded that LARS performed in subjects presenting surgically with a viable stump can be a suitable option for ACL reconstruction in terms of function and pain outcomes.
Outcome measures
A range of subjective and objective outcome measures was used. The International Knee Documentation Committee Subjective Knee Evaluation Form (IKDC) and the Knee injury and Osteoarthritis Outcome Score (KOOS) were used to assess QOL and function. Application of these instruments within each individual study varied considerably. The IKDC assesses symptoms, function and sporting activity in patients with a variety of knee disorders and has been shown to be a reliable and valid scoring system[ 31 , 32 ]. The KOOS, a self-administered questionnaire which assesses patient satisfaction, was utilised in two studies[ 27 , 29 ]. The KOOS has been demonstrated to be valid and reliable[ 33 , 34 ].
The Tegner score is an activity grading scale where work and sport level activity is quantified pre-injury, pre-surgery and post-surgery, and was used in all four studies. Both the KT 1000 arthrometer and the Telos Stress System were used to measure structural stability of the knee. The Telos Stress System is a measure of anterior tibial shift relative to the femur, and was used in two studies[ 27 , 29 ]. However, as with other measures, the application of this outcome measure varied between studies. The Lysholm score, which aims to measure change in knee instability, is intended to correspond with the patient's subjective opinion of their function and perceived instability after knee reconstruction. The Lysholm score has questionable psychometric properties[ 35 , 36 ]. Table 3 outlines the outcome measures and outcomes of each study.
In addition to these outcomes, one study provided data for isokinetic peak torque testing for the quadriceps and hamstring muscle groups[ 8 ]. In this multicentre study, two out of four clinics had the resources for this type of testing and hence data presented were compiled from 68 of 159 patients (43%). Data were not presented in terms of statistical significance and it was not clear at what stage of follow up data were collected. Post operative knee range of motion and knee stability were assessed using the Lachman and pivot shift tests; manual tests of knee joint stability. Significant differences were reported immediately post operatively for Lachman and pivot shift tests, but long term follow up for these outcome measures was not reported.
Complication rates
A number of complications from LARS were reported, including superficial wound infections, graft failure and pain arising from surgical screws. Only one study reported a complication related to knee synovitis, which may have been secondary to LARS rupture (partial or complete rupture not specified)[ 8 ]. Rates of complication for superficial wound infection were 2%, [ 27 ] and 1%[ 8 ]. Both studies reported that infections resolved with antibiotic treatment.
Device failure, which included either failure of screw fixation or failure of the synthetic ligament, ranged from 4% to 8%[ 8 , 27 , 29 ]. Pain relating to surgical screws ranged from less than 1% to 4%[ 8 , 27 , 28 ]. One study reported three cases of either partial or complete LARS rupture, all linked to sporting trauma to the knee at 16, 18 and 21 months post surgery respectively[ 8 ]. In all three cases, the tibial tunnel (or both the femoral and tibial tunnels) was reported to be placed too anteriorly, potentially explaining the ongoing instability identified. All three cases underwent revision surgery with traditional techniques (either hamstring allograft or autologous graft) with reported good outcomes.
Body of Evidence Matrix
The results of the NHMRC body of evidence matrix for this review are presented in Table 4 . When reviewing studies within this framework, it is apparent that LARS, as a surgical intervention for symptomatic ACL rupture, should be used with caution. The evidence to date is limited, and as such, cannot support or negate the use of LARS in clinical practice. Further research is required for LARS to be recommended as a suitable, viable and safe option in the management of ACL rupture. It is recommended that due to limitations within the existing evidence base, regular review of progress and evaluation of outcomes should be undertaken as part of implementing the LARS as a surgical intervention. | Discussion
This systematic review aimed to evaluate the effectiveness of LARS as a surgical option for symptomatic, anterior cruciate ligament rupture in terms of graft stability, rehabilitation time and return to pre-injury function. This review identified four studies of various designs, of a moderate methodological quality. Graft stability outcomes were found to be inconsistent between the four studies at both post operative and follow up periods. The time frames of rehabilitation periods were poorly reported and at times omitted. Return to pre-injury function and activity was often discussed but not adequately reported in results.
So far, no study has compared LARS and traditional ACL reconstruction methods in terms of return to previous level of function. Furthermore, no study has directly investigated autologous ligament healing along the synthetic meshwork of the LARS, a proposed benefit of LARS over traditional techniques. One study reported finding autologous tissue on the synthetic meshwork in three patients who had undergone revision surgery[ 8 ]. The authors of the same study reported that all other patients demonstrated complete autologous tissue covering of the synthetic strut. However, the method of determining this finding was not reported.
Another proposed benefit of LARS is reduced surgical time[ 17 , 18 ]. Interestingly, no study reported the length of time for LARS ACL reconstruction surgery. A high level of patient satisfaction was the only consistent finding reported in studies.
Graft stability
It has previously been suggested that the LARS surgical technique may not be appropriate where there is a poor quality ACL stump[ 17 , 22 ]. A viable stump is thought to be important as it allows new ligamentous and neurovascular tissue to regenerate along the synthetic scaffold[ 17 , 21 ]. In chronic cases, scar tissue can interfere with the potential for re-growth[ 8 , 17 ]. Without the support of new ligamentous tissue, the synthetic ligament may be subject to fatigue failure over time[ 11 , 14 ]. All four studies in this review included patients with chronic ACL ruptures. This could help explain the laxity that was reported in two studies[ 27 , 29 ]. Knee joint laxity may lead to poorer long term outcomes in chronic populations[ 1 , 4 ]. For this reason, LARS ACL reconstruction may be most suited to acute settings where a viable cruciate stump is present.
Synovitis and graft failure
Previously, high failure rates and a lack of resistance to abrasion led to a high incidence of reactive synovitis following ACL reconstruction with synthetic grafts[ 1 , 11 , 37 ]. Only one study included in this review reported a case of knee synovitis. The most common cause of complication was fixation failure, either at the tibial or femoral tunnel, or both. Comparable rates of complications related to fixation failure have been reported for more traditional autologous surgical techniques[ 1 ].
Rehabilitation and return to function
Another proposed benefit of LARS is a reduced risk of graft breakdown in the early phase post-operatively[ 10 ]. Because of this, the LARS may tolerate early loading, whereas it would be inappropriate to stress other types of grafts in the same fashion. Previous literature emphasizes the importance of protecting autologous grafts in the initial stages and has reported high failure rates with early return to impact activity[ 38 , 39 ]. The success of knee reconstruction surgery will therefore depend not only on the surgery, but also on the rehabilitation program. Therefore, adequate reporting of rehabilitation programs is essential when determining the effectiveness of ACL reconstruction surgery.
Two studies adequately reported their rehabilitation program[ 8 , 28 ]. The first utilized two different rehabilitation programs for a comparison of 4SHG to the LARS group. The 4SHG group was protected over a period of 3 months and gradually returned to sporting activity over 6-9 months. The LARS group was not protected and non-competitive sports activity was allowed within 2 months with a return to full pre-injury activity between 3-4 months. No differences in terms of functional outcomes were reported. The second study (LARS group only) allowed patients to return to full activities of daily living within 1-2 months, return to jogging at three months and return to sport at 6 months. Both studies allowed weight bearing with crutches, without the use of a knee brace, post operatively.
Nau and colleagues used the same rehabilitation protocol for their LARS and BPB groups[ 29 ]. Both groups received physiotherapy input (parameters not specified) three times a week for a total period of three months. This timeframe is not adequate for the BPB grafts, as current literature reports that ACL rehabilitation for autologous grafts should be a minimum of nine months[ 39 ]. One study did not provide any information about its rehabilitation protocol[ 27 ]. Although there is a sub-category in the IKDC relating to sporting activity, the IKDC does not specifically assess the timeframe or specific functional requirements of individual sports. Overall, no study reported data for timeframes of when patients returned to pre-injury level of function or sport. Therefore recommendations and comparisons with regard to return to pre-injury function or sport and rehabilitation protocols cannot be made based on this review.
So what/bottom line
As the current body of evidence is limited, the use of LARS to treat symptomatic ACL rupture must be undertaken with caution and with respect to individual clinical and organisational circumstances.
To date, there is only emerging evidence for the benefits associated with LARS surgery, such as reduced rehabilitation timeframes and early return to pre-injury and/or sporting levels, and it therefore needs to be considered with caution. With regard to complications arising from LARS, the historical finding of increased reactive synovitis was not supported by this review, with only one case reported across all four studies. Furthermore, other complication rates were comparable to traditional ACL reconstructive techniques. This is a positive finding which requires ongoing investigation and monitoring.
Limitations
As with any systematic review, this review has several limitations. There is currently a profound lack of high level, high quality primary evidence to support the use of LARS as a surgical intervention for symptomatic ACL rupture. The majority of studies were limited in their statistical power by small sample size, and sufficient statistical data were often not provided. This limited the number of comparisons that could be made between studies. Furthermore, rehabilitation programs were poorly described and lacked detail. Follow up data were not available beyond 5 years and therefore longer term recommendations cannot be made. The exclusion of studies not published in the English language is a major limitation of this review.
Implications for clinical practice
Due to the limited evidence base for LARS for ACL reconstruction, clinical practice continues to be guided by clinicians' expertise and experiential knowledge. A low reported rupture rate, minimal synovitis, perceived improvements in overall knee stability and the possibility of early return to impact-loading activities are some reasons for the use of synthetic grafts. Synthetic ligaments may be a viable alternative where traditional techniques may not be possible. Examples of this include poor access to allograft, multiple knee surgeries and/or revisions.
Implications for research
While this systematic review has identified an emerging body of evidence for LARS, it has also recognised important research gaps requiring future prospective investigations. While current research provides positive evidence of patient satisfaction with LARS, it is equivocal with regard to other measures such as graft stability and long term functional outcomes. Also absent is any research on the cost-effectiveness of LARS when compared to other traditional techniques. Therefore, future research should consider important long term outcomes, ideally over a period of five years or longer, including outcomes relevant to safety, effectiveness and cost-effectiveness. While the importance of rehabilitation following LARS is well recognised, the literature is scant with regard to rehabilitation protocols. This has significant clinical implications, as these protocols cannot be replicated in clinical settings due to lack of detail. Therefore, just as the surgical techniques are described in detail, future research should sufficiently describe and implement rehabilitation protocols that are well structured and appropriately designed.
Injury to the anterior cruciate ligament (ACL) of the knee is common. Following complete rupture of the ACL, insufficient re-vascularization of the ligament prevents it from healing completely, creating a need for reconstruction. A variety of grafts are available for use in ACL reconstruction surgery, including synthetic grafts. Over the last two decades new types of synthetic ligaments have been developed. One of these synthetic ligaments, the Ligament Advanced Reinforcement System (LARS), has recently gained popularity.
The aim of this systematic review was to assess the current best available evidence for the effectiveness of the LARS as a surgical option for symptomatic anterior cruciate ligament rupture in terms of graft stability, rehabilitation time and return to pre-injury function.
Method
This systematic review included studies using subjects with symptomatic ACL ruptures undergoing LARS reconstruction. A range of electronic databases was searched in May 2010. The methodological quality of studies was appraised with a modified version of the Law critical appraisal tool. Data relating to study characteristics, surgical times, complication rates, outcomes related to knee stability, quality of life, function, and return to sport, as well as details of rehabilitation programs and timeframes, were collected.
Results
This review identified four studies of various designs, of moderate methodological quality. Only one case of knee synovitis was reported. Patient satisfaction with LARS was high. Graft stability outcomes were found to be inconsistent at both post operative and follow up periods. The time frames of rehabilitation periods were poorly reported and at times omitted. Return to pre-injury function and activity was often discussed but not reported in results.
Conclusions
There is an emerging body of evidence for LARS with comparable complication rates to traditional surgical techniques, and high patient satisfaction scores. However, this systematic review has highlighted several important gaps in the existing literature that require future prospective investigation. The findings of this review were equivocal with regards to other measures such as graft stability and long term functional outcomes. While the importance of rehabilitation following LARS is well recognised, there is limited evidence to guide rehabilitation protocols. | Declaration of competing interests
WD performs LARS surgery at the Wakefield Orthopaedic Clinic in Adelaide, Australia. He does not receive financial reimbursements, fees, funding or salary from any organisation associated with LARS. All authors declare that they have no competing interests or external financial support.
Authors' contributions
ZM conceptualized the topic and devised the search strategy and carried out the initial search. ZM, IS and SK assessed inclusion of studies into this review and independently assessed the quality of studies. Data extraction was carried out by ZM and IS. WD provided background literature and clinical expertise for discussion. SK and LP added to discussion and conclusion sections and performed editing roles. All authors have read and approved the manuscript.
Supplementary Material
Acknowledgements
We would like to thank Mr. Richard Clarnette, orthopaedic surgeon, for his clinical input during the conceptualisation of this paper.
Sports Med Arthrosc Rehabil Ther Technol. 2010 Dec 7; 2:29
Introduction
A specific interest in the quality of life of patients with schizophrenia dates back to the de-institutionalisation process that took place in the 1960s and 1970s in several western countries [ 1 , 2 ]. In fact, as a result of mental health reforms, evaluating the effects of the shift of care from asylums to community health centres became a necessity for clinicians, researchers and health policy makers.
It was apparent that capturing psychopathological symptoms alone was not sufficient to reflect relevant outcomes. In particular, information on social functioning and quality of life is regarded as essential for evaluating long-term outcomes.
Quality of life may be defined as a person's sense of wellbeing and satisfaction with his/her life circumstances, as well as a person's health status and access to resources and opportunities [ 3 ]. Clearly, such an outcome is of particular importance in order to develop treatments that can help individuals with schizophrenia to lead more fulfilling and satisfying lives [ 4 ].
Unfortunately, factors influencing QoL in schizophrenia are not well known. Studies addressing QoL for patients with schizophrenia and other severe mental illnesses have identified a number of important influential factors, such as social support [ 5 ], unmet needs, [ 6 ] and medication side effects [ 7 ].
However, most of the research examining factors affecting QoL has primarily focused on the impact of psychiatric symptoms. Some studies found small to moderate relationships between psychiatric symptoms and QoL [ 8 , 9 ], while others presented findings suggesting that certain aspects of these concepts may be indistinguishable [ 10 , 11 ], particularly as far as negative symptoms and general psychopathology (e.g. anxiety, depression) are concerned [ 4 ]. In recent years, several research groups have concluded that the so-called negative symptoms of schizophrenia are much more closely related to quality of life than positive symptoms [ 12 , 13 ].
In a study of 128 patients, Norman et al. [ 11 ] examined the relationship of symptoms and level of functioning in schizophrenia to quality of life: their results show that negative symptoms, level of functioning and positive symptoms were all related to the Quality of Life Scale.
On the contrary, in a study of 193 patients, Fitzgerald et al. [ 10 ] showed that subjectively reported life satisfaction was not related to positive or negative symptoms of schizophrenia but did correlate with depressive symptoms. A study in five European centres [ 6 ], measuring QoL and other patient and illness characteristics in a group of 143 outpatients with schizophrenia, found that patients' QoL is predicted mainly by anxiety and depression and by global functioning.
These variations among studies appear to be at least partially due to differences in the definition and measurement of QoL [ 14 ], given the complexity and heterogeneity of the concept of quality of life [ 15 ]. Some definitions of QoL refer to it as a multidimensional set of components consisting of (1) a person's satisfaction with his/her life as a whole, or general wellbeing; (2) observable social and material wellbeing, or objective QoL; (3) satisfaction with his/her social and material wellbeing, or subjective QoL; and (4) health and functional status, or health-related QoL [ 3 ]. Clarifying the relationship between psychiatric symptoms, global functioning and QoL represents an important step both in elucidating factors affecting QoL for individuals with schizophrenia and in understanding the utility of the concept of QoL for guiding future treatment development efforts [ 4 ].
Italy has implemented a decentralisation of its mental health services since 1978, with a major function in psychiatric care being placed in the Community Mental Health Centres (CMHC), which provide integrated psychiatric interventions in different settings, including outpatient clinics and patients' own homes [ 18 ].
The aim of the present study was to assess outcomes in all subjects with a diagnosis of schizophrenia attending a Community Mental Health Centre in Copparo (Ferrara, Italy) and to examine the relationships between quality of life, psychiatric symptoms and level of functioning.
Materials and methods
Study procedure
The present study was carried out from August to September 2008 in a Community Mental Health Centre, Copparo, of the Integrated Department of Mental Health in Ferrara, Emilia-Romagna Region, Northern Italy. Since the creation of the Department of Mental Health and the implementation of the psychiatric services after the 1978 reform in the Emilia-Romagna Region [ 18 ], community mental health centers have played a central role in delivering integrated care for patients with schizophrenia and their families.
The Copparo Centre is the health facility serving 37,803 inhabitants: 19,678 females and 18,125 males; 4,051 are under 18 and 10,526 are over 65. The population is spread over 6 municipalities in a flat, well-served countryside and works mainly in agricultural, commercial and industrial activities.
The Centre comprises an outpatient department and a day hospital, both open eight hours a day on working days; it provides medical examinations and home visits, drug therapies when needed, group therapies, individual and group rehabilitation activities, and meetings with family members. Compulsory psychiatric hospitalizations for acute illness take place at the Psychiatric Unit of Ferrara Hospital, 18 km away, while voluntary hospitalizations for severe illness use both the Ferrara and the Lagosanto Hospital Units, the latter 30 km away. Both units provide 15 beds for the whole district, which has a caseload of 351,463.
Rehabilitation admissions, with programmes lasting from 1 to 3 months, take place in dedicated district facilities: three residences providing 50 beds.
The Centre staff is composed of 6 psychiatric nurses, 2 psychiatrists, 1 psychologist and 1 social worker. The Centre deals with every kind of psychiatric illness, from reactive forms to psychoses, provides consultation at the local hospital, and cooperates with family doctors and with the Department for Drug Abuse, which is located in a different building. All subjects, even non-residents, have the right to receive assistance on payment of a small contribution; subjects with a low income or with severe illness are exempt from this contribution. Subjects with schizophrenia receive entirely free assistance.
Criteria for inclusion in the study were: a) being an outpatient at the CMHC; b) a diagnosis of schizophrenia according to the WHO ICD-10 classification (F20.0-F20.9) [ 19 ]; c) age above 18 years.
Each patient was contacted by a research psychiatrist and a visit was scheduled at the outpatient clinic of the CMHC; informed consent, in accordance with the Local Ethical Committee, was obtained during the meeting.
A series of instruments were used to assess psychopathology, level of functioning and quality of life.
The World Health Organization Quality of Life - Brief (WHOQoL-BREF) was used to assess the patients' quality of life. The WHOQoL-BREF includes 26 items measuring the following domains: physical health, psychological health, social relationships, and environment. Two further items evaluate the individual's overall perception of quality of life and the individual's overall perception of their health. Domain scores are scaled in a positive direction (i.e. higher scores correspond to better quality of life). The average score of items within each domain is used to calculate the domain score. Mean scores are then multiplied by 4 in order to make domain scores comparable with the scores used in the WHOQOL-100. Where more than 20% of data is missing from an assessment, the assessment should be discarded. Where an item is missing, the average of other items in the domain is substituted. Where more than two items are missing from the domain, the domain score should not be calculated (with the exception of domain 3, where the domain should only be calculated if ≤ 1 item is missing) [ 16 ].
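As a concrete illustration, the scoring rules just described can be sketched in code. This is a schematic reconstruction, not the official WHOQoL scoring syntax: the domain-3 exception is interpreted as allowing at most one missing item, and the 0-100 conversion is the standard (score − 4) × 100/16 transformation used when domain scores are reported on the 0-100 scale.

```python
# Sketch of WHOQoL-BREF domain scoring as described above (schematic,
# not the official scoring syntax). Items are rated 1-5; None = missing.
def domain_score(items, is_domain3=False):
    """Return the 4-20 domain score, or None if too many items are missing."""
    answered = [x for x in items if x is not None]
    missing = len(items) - len(answered)
    limit = 1 if is_domain3 else 2  # domain 3: at most 1 missing item
    if missing > limit:
        return None
    # Substituting a missing item with the mean of the others is
    # equivalent to averaging only the answered items.
    return 4 * sum(answered) / len(answered)

def to_0_100(score):
    """Transform a 4-20 domain score onto the 0-100 WHOQOL-100-style scale."""
    return (score - 4) * 100 / 16
```

Higher scores correspond to better quality of life in both scalings.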
The VADO Personal and Social Functioning Scale (FPS), a modified version of the Social and Occupational Functioning Assessment Scale (SOFAS) [ 17 ], was used to assess the patients' level of functioning in four main areas: work and/or socially useful activities; family, personal and social relationships; self-care; and aggressive and destructive behaviours. Suicide risk enters the score only insofar as suicidal ruminations may interfere with social functioning. The FPS requires brief and simple training, which is described in the VADO manual [ 20 ]. Each area is rated at one of the following levels: absent, slight, evident, marked, severe. The evaluation is then converted into a score from 0 to 100 (higher scores correspond to better functioning) according to the VADO guide instructions.
In the present study, separate evaluations were carried out by two psychiatrists with training in the administration of the scale.
The Italian version of the Brief Psychiatric Rating Scale (BPRS), in its 24-item 4.0 version [ 21 , 22 ], was used to assess psychiatric symptoms.
Each item is rated on a seven-point Likert scale (from 1 = no symptom to 7 = extremely severe symptom; score range = 24-168), yielding four factors: positive symptoms (items 9-12, 14-15, and 24), negative symptoms (items 13, 16-18, and 20), anxiety and depression (items 1-5 and 19), and mania and hostility (items 6-8 and 21-23).
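The four-factor scoring can be expressed directly from the item groupings above (a sketch based on the groupings as listed; item numbers refer to the 24-item BPRS 4.0, each rated 1-7):

```python
# BPRS-24 subscale scoring sketch, following the item groupings in the text.
# `ratings` maps item number (1-24) to its 1-7 Likert score.
SUBSCALES = {
    "positive": [9, 10, 11, 12, 14, 15, 24],
    "negative": [13, 16, 17, 18, 20],
    "anxiety_depression": [1, 2, 3, 4, 5, 19],
    "mania_hostility": [6, 7, 8, 21, 22, 23],
}

def bprs_scores(ratings):
    scores = {name: sum(ratings[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["total"] = sum(ratings[i] for i in range(1, 25))  # range 24-168
    return scores
```

The four subscales partition all 24 items, so the subscale scores sum to the total.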
Statistical Analysis
Statistical analysis was carried out using SPSS 12.0 and Winstat for Excel. Procedures included descriptive statistics, frequency distributions of answers, and Pearson's r correlations, with the level of significance set at 0.05.
Results
Patients' characteristics
The study was proposed to all 107 subjects of the Centre; 3 subjects refused to participate: 2 women over 70 and a 36-year-old man.
The characteristics of the subjects participating in the study are reported in Table 1 . The sample consisted of 104 patients, 62 males (59.6%) and 42 females (40.4%), with a mean age of 47 years (SD 13.5). The mean age at first contact with the CMHC was 33.3 years (SD 13.6), and the mean duration of illness was 13.4 years (SD 7.4). Distribution by age groups is shown in figure 1 . 32 subjects (30.7%) were employed, 4 (3.8%) unemployed, 6 (5.7%) retired, and 62 (59.6%) had a disability pension. 89 (85.6%) lived in their own house, while the rest lived in a group home, boarding home, or halfway house. 40 (38.5%) had a partner and 64 (61.5%) were single.
2 subjects received no drug therapy, 20 were on typical antipsychotics, 54 on atypical antipsychotics and 28 on depot antipsychotics (26 of these atypical).
The average hospitalization rate was 18.0 days per year of illness in males and 7.3 days per year in females.
On average, males had 54.15 ambulatory interventions and females 39.1; ambulatory interventions include examinations, conversations, and group or family therapy meetings, with both nursing and medical staff members.
32 males (51.6%) had attended a Day Centre for at least a month, whereas only 12 females (28.5%) had.
Historically, males had 50 compulsory admissions out of 358 total admissions (13.97%), and females 15 out of 254 (5.9%). Hospital admissions in the latest year numbered 8, divided among 6 persons (5 males and 1 female): one man had three hospitalizations in the year, and 5 subjects were undergoing a 6-month residential rehabilitation program.
Quality of Life
Table 2 shows the mean scores for every item of the four domains of the WHOQoL-BREF. On the first two items of the test, perception of quality of life (Q1) recorded a mean score of 3.21 (SD 0.94) and perception of one's own health (Q2) a mean score of 3.14, both corresponding to "neither satisfied nor dissatisfied". Total mean scores of the four domains show a higher level of satisfaction in the physical and environmental health domains, while a lower level of satisfaction emerged in the social relations domain.
In table 3 the scores of the first 2 items and of the four domains are presented separately for men and women and by age group (< 45 years and ≥ 45 years). Males' scores reveal a higher level of satisfaction than females' in all domains except domain 3 (social); males over 45 showed a higher perception of quality of life (Q1) and of their own health (Q2) than younger males, and better QoL in the social and environmental domains; they showed, however, lower satisfaction in the physical and psychological domains. Among females, the group aged over 45 declared lower satisfaction than the younger group in all WHOQoL domains. Regarding physical health, differences between sexes and age groups are statistically significant (p < 0.05).
Scores were then transformed to a 0-100 scale according to the specific guidelines and compared with the averages of the national Italian WHOQoL validation group [ 23 ]. The comparison showed that the sample subjects had a lower degree of satisfaction in the social relations and psychological domains, but greater satisfaction in the environmental and physical domains (Figure 2 ).
Psychiatric symptoms and global functioning
The average total score obtained on the BPRS symptom survey was 47.3 (SD 18.1; range 24-110). According to the BPRS cut-off scores, 12.5% of subjects had a BPRS > 72, 27.9% between 49 and 72, and 59.6% ≤ 48. Table 4 shows the average scores obtained on the four BPRS subscales. The total average score was 12.88 (SD 5.9) on the anxiety-depression subscale, 21.02 (SD 5.3) on positive symptoms, 13.41 (SD 7.2) on negative symptoms and 10.39 (SD 5.3) on mania-hostility.
Average scores obtained on the Personal and Social Functioning Scale are presented in Table 4 ; the total average score was 55.35 (SD 22.4), ranging from 11 to 100. 31.7% of subjects had severe or very severe social difficulties (FPS ≤ 40), 24% had marked difficulties (FPS 41-60), 34.6% had evident or slight difficulties (FPS 61-80) and 9.6% had very slight or no difficulties at all (FPS > 80). Of the 4 areas examined by the FPS, most difficulties were found in work and socio-relational functioning (degree of difficulty mostly between slight and evident), followed by the self-care and health area, and finally in disturbed and aggressive behaviours, where only slight or no difficulties at all were recorded.
Variables associated with QoL in patients with schizophrenia
Correlations between WHOQoL, BPRS and FPS are shown in table 5 .
WHOQoL scores were related to the FPS total score and subscales, especially the health and environment satisfaction scales. Personal and social functioning was positively related to "health satisfaction" (WHOQoL domain 1; r = 0.57; p < 0.0005), to "social relationships satisfaction" (WHOQoL domain 3; r = 0.37; p < 0.05) and to "environment satisfaction" (WHOQoL domain 4; r = 0.43; p < 0.0005).
Patients' age was also negatively correlated with QoL, particularly with "health satisfaction" (r = -0.35; p < 0.0005) and "psychological health satisfaction" (r = -0.12; p < 0.05); similarly, disease duration was negatively correlated with "health satisfaction" (r = -0.18; p < 0.05).
Global functioning in our subjects (FPS total) was negatively related to symptoms (BPRS total), in particular to negative symptoms, with a marked negative correlation (r = -0.65; p < 0.0005).
Significant correlations were found between negative symptoms (BPRS negative symptoms) and the "physical" domain (WHOQoL domain 1; r = 0.29; p < 0.05) and the "environment" domain (WHOQoL domain 4; r = 0.16; p < 0.05); general psychopathology (BPRS anxiety-depression) was related to the "psychological" domain (WHOQoL domain 2; r = -0.28; p < 0.05), to the "social" domain (WHOQoL domain 3; r = -0.15; p < 0.05) and to the individual's overall perception of QoL (Q1; r = -0.32; p < 0.05); positive symptoms (BPRS positive symptoms) also correlated with the "social" domain (WHOQoL domain 3; r = -0.15; p < 0.05) and with the individual's overall perception of QoL (r = -0.17; p < 0.05).
Discussion
In the present study we examined all the subjects with a diagnosis of schizophrenia attending the same Community Mental Health Centre in Northern Italy. Taking into account the variability of schizophrenia outcomes [ 24 - 26 ], psychiatric symptoms, overall functioning and quality of life were assessed in order to verify the relationships among these variables.
First, the findings obtained through the self-administered WHOQoL-BREF suggest that quality of life is not extremely negative, though schizophrenia is often an impairing chronic illness. In some respects, the population studied obtained higher scores on the environment and physical dimensions of the WHOQoL-BREF than the average of the national Italian WHOQoL validation group.
With aging, perceived physical health gets worse, probably in relation to the increase of diseases such as hypertension, diabetes and rheumatic diseases, which were recorded in our analysis. Curiously, as disease duration increases, perceived QoL does not worsen; on the contrary, overall QoL improves. This could be explained by increased knowledge of the disease and its treatment possibilities.
Among the single WHOQoL items, the greatest dissatisfaction concerned sexual life (completely unsatisfying for 59.7% of males and 54.7% of females) and working life (54.8% of males and 50% of females were totally unsatisfied with their working life).
Relations with the health department, including the mental health service, were rated negatively by only 12.9% of males and 0.5% of females, a flattering result for the staff; over 70% of subjects declared themselves satisfied with the environments where they live and with the facilities provided.
In our study, psychiatric symptoms and overall functioning were heterogeneous, with one third of subjects showing severe or extremely severe functioning difficulties. Consistent with the literature [ 25 , 27 ], this suggests that schizophrenia is an extremely heterogeneous illness with many possible variables influencing psychopathology and disability.
Regarding the association of psychiatric symptoms and global functioning with QoL, we found a statistically significant correlation between BPRS scores and quality of life. The literature shows great variability among studies on symptoms and QoL in schizophrenia, mostly due to wide variations in measurement strategies and definitions of QoL. Fitzgerald et al. [ 10 ], in a study comparing subjective with observer-rated QoL in schizophrenia, showed that subjectively reported life satisfaction, measured with the SCAP instrument (a 100-item self-report instrument containing a number of self-report life satisfaction items), was not related to positive or negative symptoms, but was correlated with depressive symptoms. Ross et al. [ 11 ], in a study on QoL, symptoms and level of functioning in schizophrenia, suggest that QoL in schizophrenia is more highly related to negative than to positive symptoms. Weighted effect size analyses revealed small relationships between psychiatric symptoms and QoL, with general psychopathology showing the strongest negative associations across all QoL indicators [ 4 ].
In our study, we found a statistically significant correlation between symptoms and quality of life: more symptomatic subjects have a worse perception of QoL. Anxiety-depression symptoms in particular determine a worse evaluation of general QoL, of satisfaction with one's own health and of psychological health, whilst negative symptoms are associated with a poor perception of health satisfaction. These results agree with those reported by other authors [ 10 , 11 , 28 ].
The strongest statistically significant correlations of QoL concerned overall functioning; most were found between all FPS subscales and WHOQoL domain 1 (physical) and domain 4 (environment), while lower, though still statistically significant, correlations were shown between the FPS total and WHOQoL domains 2 and 3 (psychological and social). As the level of personal and social disability increases, dissatisfaction with physical health, environment, social and eventually psychological aspects increases, resulting in overall dissatisfaction with QoL.
These results point out how personal and social functioning plays a key role in determining the quality of life of subjects with schizophrenia, while symptoms, though related to QoL, do not seem to be the major variable shaping it.
Psychiatric symptoms were also found to be negatively associated with overall functioning. The results of a cross-sectional analysis of data from a large study [ 29 ] similarly suggested that symptoms may be strongly related to functioning [ 30 ]. In our data, the strongest, highly significant correlation was between negative symptoms and functioning measured through the FPS (r = -0.65; p < 0.0005). These findings confirm the well-documented strong relationship between negative symptoms and social disability [ 31 - 33 ].
Certain limitations of the study should be mentioned. First, the study was cross-sectional; therefore, the exploration of causal relationships was rather tentative. Second, a relatively limited number of variables was examined; in addition to socio-demographic and clinical data, a complex interaction of other factors, such as self-esteem, premorbid adjustment, therapy and social support network, could also play a role in determining subjective quality of life.
Finally, a potential limitation of the study was the use of the WHOQOL-BREF, a generic questionnaire that may not have detected subtle changes of subjective QoL in the specific population of schizophrenia patients.
Conclusions
The World Health Organisation [ 34 ] reports that there has been a worldwide shift from hospital care to community-based care of people with mental illnesses. Italy has worked on decentralising its mental health services since 1978. The present research on social functioning, psychiatric symptoms and QoL in people with schizophrenia suggests that symptoms, but above all personal and social functioning, are important elements in determining QoL. These findings point to the importance of looking beyond symptom-reduction strategies for improving QoL in schizophrenia; furthermore, they underline how rehabilitation facilities and increased participation of families and communities in treatment significantly improve the quality of life of people with mental illness.
Objective
To examine the relationship between quality of life (QoL), global functioning and symptoms in outpatients with schizophrenia.
Method
The study was carried out on the outpatients with schizophrenia attending a Community Mental Health Centre in 2008. Each patient completed the WHO QoL Instrument - Brief and was administered the Brief Psychiatric Rating Scale-24 to assess psychiatric symptoms and the VADO Personal and Social Functioning Scale to assess the level of functioning.
Results
Subjects showed intermediate satisfaction with overall QoL and health; these data are comparable to the national standard sample rates. QoL was positively associated with personal and social functioning, while it was negatively related to psychiatric symptoms.
Conclusion
Patients showed fairly good satisfaction with regard to their QoL. The severity of psychiatric symptoms is one of the elements influencing QoL, together with personal and social functioning, which plays a relevant role.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors participated in the design of the study. AG, MCT, MGN and PM wrote the manuscript. MCT and LG reviewed the manuscript. MCT performed the statistical analysis. All authors read and approved the final manuscript.
Acknowledgements
The authors wish to thank all the staff and patients of the Community Mental Health Centre of the Integrated Department of Mental Health and Drug Abuse in Ferrara (Italy) who participated in, or aided in the conduct of, this study.
Int J Ment Health Syst. 2010 Dec 8; 4:31
Background
Mental disorders are common in all countries, with considerable variation in reported prevalence across countries, even when the same methods for estimation of prevalence are used [ 1 ]. The treatment gap is wide everywhere, widest in developing countries [ 2 , 3 ]. Among the reasons for failure of coverage is massive under-investment in mental health service provision [ 4 ], resulting in serious shortages of health facilities and skilled mental health professionals [ 5 ], reliance on outmoded mental hospitals and lack of community mental health services [ 6 ], and frequent neglect and abuse of the human rights of people with mental illness [ 7 - 9 ]. This developing country profile of population mental health and mental health services - variability in prevalence, under-investment in provision of appropriate and accessible mental health services, low rates of utilisation of mental health services, and failures of rights protection - is replicated in the case of cultural minorities, even in wealthy countries with relatively well-developed mental health systems [ 10 , 11 ]. In both cases (developing countries and cultural minorities), there are legitimate concerns about the validity of 'western' diagnostic and treatment practices [ 12 ], and the applicability of 'western' models of mental health service [ 13 ]. Also in both cases there have been calls for scaling up mental health services [ 11 , 14 ], for new ways of working [ 15 ], and for evidence-informed policies and practice [ 14 , 16 ].
Conclusion
Evidence is most important as a basis for decision-making and action, such as medical treatment of an individual, decisions about health policy, or actions such as investment in one rather than another form of service delivery. It is clear that such decisions will be made and actions embarked upon whether or not there is evidence or whether the decision-maker or practitioner is familiar with such evidence as does exist. Consensus methods can provide a basis for decision-making and considered action when there is no evidence or when there are doubts about the applicability in a particular setting of evidence that has been generated from other populations or health systems. The issue is not whether consensus methods provide evidence that is as good as other ways of generating evidence, but whether the evidence generated by using such methods is better than no evidence or inapplicable evidence.
Beyond questions of the availability or quality of evidence, expert consensus methods have a particular advantage when applied in fields that are neglected and that require urgent action, such as mental health system development in low-income countries and for immigrant and refugee communities. The process of identifying consensus shifts the focus away from areas of disagreement, which frequently are many, to areas of agreement. If key stakeholders, such as government decision makers, service users, as well as clinicians and researchers, are involved in the expert panel considering a particular issue, this can facilitate informed decision-making and action.
Background
In both developing countries and in relation to cultural minorities there have been calls to scale up mental health services and for evidence-informed policy and practice.
Evidence based medicine
The evidence based medicine movement has had a major influence in improving practice. However, implementation of this approach has some major difficulties. One that has been neglected is the situation where there is no relevant evidence. This situation is more likely to occur for healthcare decisions in developing countries or for cultural minorities within developed countries, because resources do not exist for expensive research studies.
Consensus methods
Consensus methods, such as the Delphi process, can be useful in providing an evidence base in situations where there is insufficient evidence. They provide a way of systematically tapping the expertise of people working in the area and give evidence that is readily applicable for a particular country and culture. Although consensus methods are often thought of as low in the hierarchy of evidence, consensus is central to the scientific process. We present four examples where the Delphi method was used to assess expert consensus in situations where no other evidence existed: estimating the prevalence of dementia in developing countries, developing mental health first aid guidelines in Asian countries, mental health first aid guidelines for Australian Aboriginal people, and modification of the concept of 'recovery' for Australian immigrant communities.
Conclusion
Consensus methods can provide a basis for decision-making and considered action when there is no evidence or when there are doubts about the applicability of evidence that has been generated from other populations or health system settings.
Evidence based practice
Over the past 30 years, the evidence based medicine movement has had a major influence in improving practice. Evidence based medicine involves "integrating clinical expertise with the best available clinical evidence derived from systematic research" [ 17 ]. This movement has led to improvements in the reporting and registration of clinical trials, the provision of systematic reviews of controlled trials (e.g. the Cochrane Collaboration) and the development and promotion of clinical practice guidelines. While the movement began in developed countries, there has also been advocacy for evidence based medicine in developing countries [ 18 ]. It has been argued that the need for evidence based practice is vital in these countries because financial resources are scarce and need to be channelled to the best possible care options [ 19 ]. In particular there is a need for evidence-informed decision making by policy makers, health system managers and others who are responsible for shaping health systems and delivering services [ 20 ].
Despite its many successes, the evidence based medicine movement has encountered some significant barriers in changing clinical practice, including the difficulty that practitioners have in finding and interpreting the evidence, and in applying it in practice, due to organisational barriers, patient adherence and lack of quality practitioner education [ 21 ]. There has also been criticism of the movement because of its emphasis on high quality evidence from randomised controlled trials, which tend to be carried out in ideal circumstances that may be remote from everyday practice. This has led to calls for more "practice-based evidence", where the evidence is gathered in real-life clinical settings and there is greater emphasis on the external validity of the evidence (generalisability) rather than on its internal validity (validity of causal inference) [ 22 ]. An additional issue is the relative (compared to biomedical research) scarcity of health system research and evidence for policy and health system design [ 23 - 25 ].
Here we discuss an obvious, but neglected, barrier to evidence based practice--situations where there is no evidence [ 26 ]. This barrier can be found for many areas of healthcare, but is more likely to be found in developing countries and for cultural minorities within developed countries.
Developing countries have limited resources to support research [ 27 ]. Therefore, evidence is often imported from developed countries, raising the issue of applicability of this evidence in settings and from populations other than those in which it was generated. Where the evidence reflects universal biological processes (e.g. infection and immunisation), it will be possible to generalise, but where it involves social and cultural processes (e.g. persuading people to control infection or to immunise), it may not be. In psychiatry, where social and cultural factors are integral in making healthcare decisions and developing systems of care, much of the evidence from developed countries will be limited in generalisability. The same applies to cultural minorities within developed countries. Although it may be possible to develop an adequate evidence base for the mainstream population, the resources may not exist to do this for small indigenous or immigrant groups. In addition, immigrant and refugee communities that do not speak the host country language are frequently excluded from research because of the difficulties and expense of translation.
Expert consensus
The development of a locally relevant evidence base using expert consensus is a valuable approach where other evidence is unavailable. While developing countries may not have the resources to carry out randomised controlled trials, population surveys, cohort studies or health service evaluations [ 27 ], they do have considerable local experience derived from practice. The same is true for cultural minorities in developed countries. There are formal methods of developing consensus from experts, such as the Delphi process, nominal group technique [ 28 ] and consensus conference methods [ 29 ], which can be used to harness this experience. These methods have acceptable construct validity [ 30 ] and reliability [ 31 ]. By using these methods to develop an evidence base that can guide decisions, policy makers and practitioners can move beyond relying solely on their own experience and instead draw on the accumulated experience of a larger expert group.
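The endorsement logic at the core of a Delphi process can be sketched in a few lines. The rating scale, thresholds and example statements below are illustrative assumptions, not values taken from any of the studies cited here:

```python
# Sketch of one Delphi round (illustrative; scale and cut-offs are assumptions).
# Each panellist rates each candidate statement on a 1-5 importance scale; a
# statement is endorsed when a sufficient fraction of the panel rates it >= 4,
# re-rated in the next round when support is intermediate, otherwise rejected.

def delphi_round(ratings, endorse=0.8, rerate=0.6):
    """ratings: dict mapping statement -> list of panellist scores (1-5).
    Returns (endorsed, rerate_next, rejected) lists of statements."""
    endorsed, again, rejected = [], [], []
    for stmt, scores in ratings.items():
        support = sum(s >= 4 for s in scores) / len(scores)
        if support >= endorse:
            endorsed.append(stmt)
        elif support >= rerate:
            again.append(stmt)
        else:
            rejected.append(stmt)
    return endorsed, again, rejected

ratings = {
    "offer practical help": [5, 5, 4, 4, 5],   # 100% support -> endorsed
    "contact elders first": [4, 4, 3, 2, 4],   # 60% support  -> re-rate
    "wait and observe":     [2, 1, 3, 2, 4],   # 20% support  -> rejected
}
endorsed, again, rejected = delphi_round(ratings)
```

In a real study the re-rated items would be fed back to the panel with summary statistics for a further round, until consensus stabilises.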
As well as being feasible with limited resources, consensus methods overcome some of the limitations that have been identified with evidence based medicine. They embody the principle of practice-based evidence, provide results that are relevant to the local population and culture, and are readily implementable within the healthcare system. This approach fits with the call to improve practice by capitalising on accumulated practical experience and using this to develop better interventions.
The Delphi consensus method has been widely used in disparate fields to inform: policy-making [ 32 , 33 ]; design of health services [ 34 , 35 ]; development of diagnostic guidelines and protocols [ 36 , 37 ]; developing mental health first aid guidelines for psychosis [ 38 , 39 ], suicide [ 40 - 42 ], self-injury [ 42 ], and panic attacks [ 43 ]; and service and research priority-setting [ 44 , 45 ]. In relation to dementia, the Delphi consensus method has been used to clarify issues and identify consensus in diagnosis and clinical assessment [ 46 ], treatment and management [ 47 - 49 ], services, outcomes [ 48 ], research, and to estimate prevalence [ 50 ] in settings where the necessary epidemiological studies have not been done.
There are several possible objections to the adequacy of expert consensus as a source of evidence. The evidence based medicine movement has been associated with a hierarchy of levels of evidence. Typically, the strongest form of evidence is held to be a meta-analysis of randomised controlled trials, with expert consensus ranked at the bottom of the hierarchy. However, while systematic reviews and meta-analyses are held to be the highest form of evidence, there is little agreement on the reporting of a critically important component of the systematic review process - the search methods used in carrying out systematic reviews [ 51 ]. Even where relevant systematic reviews exist, there are often questions about the applicability of the findings of the systematic review to the task of choosing among policy or program options in particular circumstances [ 52 ].
If expert consensus is so weak, should we be proposing it as a strategy for contributing to informed decision-making? An important counter-question to ask is: how were some types of evidence assigned to be higher in the hierarchy than others? The answer is "expert consensus". Expert consensus is seen as adequate to validate hierarchies of evidence or indeed other components of the evidence based medicine enterprise, such as the CONSORT Statement for reporting randomised controlled trials [ 53 ] and principles for developing practice guidelines [ 54 ].
More generally, consensus has an important role in the scientific process. New theories gain ground as more members of the scientific community see a new theory as giving a better account of the evidence than older ones [ 55 ]. The role of consensus can be seen in the past rise of scientific developments like continental drift and prions as infectious agents, but also in contemporary challenges such as climate change [ 56 ].
A paradox in recommending consensus in order to make progress is that consensus may hinder progress and promote no longer useful or even harmful practices. Consensus is an expression of the prevailing political, social, cultural, scientific world view of the time. There may, for example, be a broad consensus within society with which scientists do not agree, or there may be a consensus among experts that produces undesirable outcomes [ 57 ]. The validity or probity of consensus-based decisions can be called into question, as has famously been done recently in relation to the work of the Inter-Governmental Panel on Climate Change (IPCC) which, as a global body of 800 climate scientists, relies on reaching consensus [ 58 ].
The issue is not whether consensus per se is weak or strong, because it is integral to how science works, but rather what foundation of evidence the consensus rests on. A consensus based on a set of randomised controlled trials [ 50 ] may well be better than a consensus based on personal clinical experiences. What we are proposing here is not that expert consensus is as good a source of evidence as a meta-analysis of trials, but that it is a better source of evidence than the experience of a single individual, which is the alternative when there is no other evidence available. Expert consensus may well turn out to be wrong. Indeed, the history of science indicates that it inevitably will. However, it presents a better basis for guiding action, because group consensus will generally produce better judgements than any individual's judgement [ 59 ]. Expert consensus methods are a step on the road to informed decision-making. When better forms of evidence become available, these should of course be relied upon.
Examples of how expert consensus can be used
To illustrate the use of expert consensus methods in situations where there is no evidence, we describe four projects that have used the Delphi process to fill evidence gaps in developing countries and with cultural minorities.
Estimating the prevalence of dementia
Studies of the prevalence of dementia are major research undertakings. Estimates of prevalence rates are available from well-conducted studies in most developed countries, but for the rest of the world there is limited or no evidence. To overcome this lack, Alzheimer's Disease International carried out a Delphi study with a panel of 12 international experts [ 50 ]. The panel members were provided with a systematic review of the published studies and, on the basis of those studies and additional information of possible relevance, such as development status and child and adult mortality in the regions, were asked to estimate prevalence for every WHO world region for 5-year age groups from age 60 onwards. "When published information is scarce, experts can make inferences using other data from comparable contexts" [ 50 ]. The consensus estimates indicated that 24 million people had dementia in 2001 and that 60% of them lived in developing countries, with this proportion rising to 71% by 2040. In particular, these estimates raise awareness of the need for planning in relation to the rapid rise in the number of dementia cases in developing countries. The stated purpose of the study was to generate estimates that are "the best currently available basis for policymaking, planning, and allocation of health and welfare resources" [ 50 ].
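A minimal sketch of how panel estimates of this kind can be aggregated into case numbers: take a central (here, median) estimate per stratum and multiply by the stratum population. The age groups, rates and populations below are hypothetical, not the figures from the Alzheimer's Disease International study:

```python
# Illustrative aggregation of expert panel prevalence estimates (hypothetical
# numbers throughout; function names are ours).

def consensus_prevalence(expert_estimates):
    """Median of the experts' prevalence estimates (in %) for one stratum."""
    s = sorted(expert_estimates)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def projected_cases(strata):
    """strata: list of (expert_estimates_in_percent, population) per stratum."""
    return sum(consensus_prevalence(e) / 100 * pop for e, pop in strata)

strata = [
    ([1.0, 1.2, 0.8], 5_000_000),   # 60-64 years: median 1.0%
    ([6.0, 5.5, 7.0], 2_000_000),   # 75-79 years: median 6.0%
]
cases = projected_cases(strata)     # about 170,000 projected cases
```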
Mental health first aid in developing countries
Mental health first aid is the help provided to a person who is developing a mental health problem or is in a mental health crisis. A Mental Health First Aid training course has been developed in Australia and has spread to many other countries [ 42 ]. Mental health first aid guidelines have been developed for English-speaking countries and used as a basis for the Mental Health First Aid training curriculum [ 60 ], but these may not be generalisable to countries with very different cultures and health systems. As a low-cost solution to this problem, psychosis first aid guidelines were developed using the Delphi method with clinicians from a wide range of Asian countries [ 39 ]. Because guidelines need to be specific to particular countries, more recent work has used the Delphi process to develop suicide first aid guidelines for India [ 40 ], the Philippines and Japan, using the consensus of panels of clinicians from each of these countries.
Mental health first aid with Aboriginal Australians
A related example concerns Aboriginal Australians, who comprise less than 3% of the Australian population, but are a priority for action because they are disadvantaged in both physical and mental health. A Mental Health First Aid training program has been developed for Aboriginal and Torres Strait Islander peoples and found to be culturally acceptable [ 61 ]. However, the first aid actions recommended in the training were based on a modification of those for the mainstream Australian community. To develop more culturally appropriate mental health first aid, a Delphi study was carried out with an expert panel of Aboriginal mental health professionals [ 61 ]. Panel members were presented with statements about possible first aid actions and were encouraged to suggest other actions not already covered by the survey content. The endorsed statements, covering depression, psychosis, suicidal thoughts and behaviours, deliberate non-suicidal self-injury, trauma and loss, and cultural considerations in giving mental health first aid, were used to write guidelines and a revised curriculum for Aboriginal and Torres Strait Islander Mental Health First Aid training [ 62 ].
The construct of 'recovery' in immigrant communities
Mental health policy, particularly as it relates to the mental health NGO sector that provides rehabilitation, social support, housing, employment and related services, increasingly requires that recovery principles should underpin service design and delivery. However, the origins of the recovery movement are in western Anglophone countries, and recovery principles reflect the cultural commitment of these countries to values such as privacy and confidentiality, individual autonomy, and self-determination. The relevance of these underlying values to many immigrant communities in Australia, which tend to place greater value on interdependence, collective decision-making, and authority of the senior member of the family, has been called into question, therefore challenging whether the current recovery construct can be applied in a multicultural society. This study [ 63 ] used the Delphi method, with senior staff of a major NGO constituting the expert panel, to explore ways in which the construct of recovery may need to be modified to make it more relevant for a culturally diverse population of clients. The study also focused on ways in which a modified conception of recovery can inform the design and delivery of mental health services by the organisation. It is an example of use of the consensus method to inform organisational change for service improvement.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
HM and AJF contributed equally to the conception and writing of this paper. Both authors read and approved the final manuscript.

Acknowledgements

The authors wish to acknowledge the thoughtful comments of reviewers. The revised manuscript is improved as a result of those comments.

Int J Ment Health Syst. 2010 Dec 21; 4:33
PMC3016373 (PMID 21156052)
Rapid and accurate diagnosis of Plasmodium infections is crucial for morbidity and mortality reduction in tropical areas, especially in regions where mixed infections are prevalent, such as Papua New Guinea (PNG), where all four parasites infecting humans coexist and mixed species infections are common [ 1 , 2 ]. To improve accuracy in large epidemiological studies, molecular diagnostic tools permitting high-throughput analysis for the identification and quantification of malaria parasites would be of great benefit.
Traditionally, light microscopy (LM) examination of blood smears has been considered the gold standard for the diagnosis of malaria [ 3 ]. LM has clear advantages over molecular typing, since it incurs only low costs, requires neither complex sample preparation nor advanced technology, and permits species identification and quantification [ 4 ]. However, the role of LM as a gold standard has been questioned due to false negative results at low parasitemia and frequent errors in species identification in mixed infections [ 5 , 6 ].
The availability of commercial rapid diagnostic tests (RDT) has greatly facilitated the in situ diagnosis of malaria infections in the field. The advantages of RDT are that they do not need special preparation of the sample and diagnostic results become immediately available [ 7 ]. However, their use is limited due to lack of sensitivity for Plasmodium vivax, Plasmodium malariae and Plasmodium ovale [ 8 ].
Nested PCR (nPCR), first described by Snounou and co-workers [ 9 ], is a widely used method and is considered a molecular gold standard due to its good performance in the detection of mixed species infections. This assay amplifies the multicopy 18S rRNA genes of the four Plasmodium species infecting humans. Even though this genotyping technique is now performed in many field laboratories in endemic countries, its use for routine clinical diagnosis is limited: the analysis is time-consuming, requiring multiple reactions per sample, and the nested format carries a risk of carry-over contamination [ 9 ]. Moreover, the technique is not quantitative.
In recent years, various real-time quantitative PCR (qPCR) assays have been developed for the detection of Plasmodium species, with most assays targeting the 18S rRNA genes. Two of these methods detect the genus Plasmodium using generic primers, and thus do not distinguish between species [ 10 , 11 ]. SYBR Green chemistry has been used by other groups to identify the four Plasmodium species infecting humans in a single reaction by melting curve analysis [ 8 , 12 ]. The use of TaqMan probes contributed an additional level of specificity to qPCR assays [ 13 ]. However, using a single pair of primers for Plasmodium genus detection in duplex assays introduces competition for amplification among species, which likely leads to difficulties in detecting mixed infections [ 14 ]. To address this issue, various groups have used different strategies [ 15 - 18 ]. The multiplex PCR-ligase detection reaction-fluorescent microsphere assay (PCR_LDR_FMA) has also been used in molecular epidemiological studies to simultaneously detect all four Plasmodium species [ 19 ].
Despite the variety of molecular tools available for the diagnosis of malaria and their wide use for the diagnosis of imported malaria in travel clinics, implementation of these techniques in endemic areas has remained limited until now. Even though the low sensitivity and limited detection of asymptomatic and mixed infections by LM constitute a critical shortfall for some epidemiological studies, LM remains to date the most frequently used method for the diagnosis of malaria in endemic areas. The transfer of molecular techniques for the diagnosis of malaria to laboratories in endemic settings is essential for overcoming the limitations of LM. Moreover, a molecular technique with quantification capacity helps to correctly estimate the burden of Plasmodium species often found in concomitant infections and will be a valuable tool to explore competition in mixed infections.
A qPCR assay initially developed for malaria diagnosis in returning travellers at a reference laboratory was implemented and validated at the PNG Institute of Medical Research (IMR). This assay was chosen because it detects with high specificity all four Plasmodium species which jointly occur in our study area in PNG. The performance of this qPCR assay in conditions of a field laboratory and on field samples was compared to light microscopy, nPCR and PCR_LDR_FMA results. | Methods
Study site and sample collection
Samples were collected in 10 villages from Ilahita area of East Sepik Province, PNG [ 2 ], in the framework of a longitudinal cohort study conducted during 2008. As part of the baseline, venous blood samples were obtained from 452 children one to five years of age, after written informed consent was obtained from parents or legal guardians of each child. This genotyping study was approved by the PNG IMR Institutional Review Board (IMR IRB 0720) and PNG Medical Research Advisory Committee (MRAC 07.34).
Blood smear examination
Thick/thin blood smears were prepared as described previously [ 20 , 21 ]. Blood smears were stained with a 5% Giemsa solution and routinely read twice, i.e. examined independently by two microscopists, with a third microscopist reading slides with discrepant results; final species diagnosis was based on majority agreement. A minimum of 200 microscopic fields were examined at a magnification of 1000× using oil immersion optics before a slide was declared negative for malaria parasites by LM. Plasmodium species were quantified by counting parasites against 200 leukocytes. Parasite counts were converted into parasites/μL assuming a mean leukocyte count of 8000/μL whole blood.
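The count-to-density conversion described above is a single ratio; a sketch in Python (the function name is ours):

```python
# Conversion of slide counts to parasites/uL as described above: parasites are
# counted against 200 leukocytes and scaled by an assumed mean leukocyte count
# of 8000 leukocytes per uL of whole blood.

def parasites_per_ul(parasites_counted, leukocytes_counted=200,
                     mean_leukocytes_per_ul=8000):
    return parasites_counted * mean_leukocytes_per_ul / leukocytes_counted

density = parasites_per_ul(50)   # 50 parasites per 200 WBC -> 2000 parasites/uL
```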
DNA template extraction and amplification
DNA was extracted from 200 μL whole blood (venous blood collected in EDTA anti-coagulant) using QIAamp 96 DNA Blood Mini Kit (QIAGEN, Valencia, CA), and eluted in a final volume of 200 μL dH2O according to the supplier's instructions. nPCR was carried out as described [ 22 ].
PCR_LDR_FMA was carried out as described elsewhere [ 19 ]. Mixes of serially diluted plasmids containing inserts of P. falciparum , P. vivax , P. malariae or P. ovale 18 S rDNA were used as positive controls in addition to P. falciparum or P . vivax positive samples obtained from field isolates. The threshold for positivity for each species was determined using the mean value obtained from negative controls for each species, plus three times the standard deviation.
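The positivity cut-off described above (mean of the negative controls plus three standard deviations) can be sketched as follows. The signal values are hypothetical, and we use the population standard deviation; the article does not specify whether population or sample SD was used:

```python
# Positivity cut-off: mean of negative-control signals + 3 standard deviations.
# Population SD is used here (an assumption; sample SD is equally plausible).

def positivity_threshold(negative_controls):
    n = len(negative_controls)
    mean = sum(negative_controls) / n
    var = sum((x - mean) ** 2 for x in negative_controls) / n
    return mean + 3 * var ** 0.5

neg = [100, 110, 90, 100]           # hypothetical background signal units
cutoff = positivity_threshold(neg)  # 100 + 3 * sqrt(50) ~ 121.2
```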
The primers and probes of the qPCR assay are listed in Table 1 . In the design of the duplex qPCR, the probes combined in one reaction carried different fluorescent labels at their 5' ends; all four probes carried a black hole quencher (BHQ) at their 3' ends. The analytical specificity of the primers and probes was evaluated for each Plasmodium species in silico by BLAST searches and experimentally by using gDNA of the three alternative Plasmodium species or of related blood-borne parasites. To minimize the costs of consumables, duplex reactions were performed in a final volume of 12.5 μL. Amplification and detection of the amplified product were performed on a Bio-Rad iQ cycling system using iQ Supermix (Bio-Rad). The P. falciparum/P. vivax ( Pf/Pv ) duplex reaction contained 2.5 μL DNA (corresponding to 2.5 μL whole blood), 6.25 μL iQ Supermix (Bio-Rad), 0.35 μL Pf primer mix (50 μM), 0.35 μL Pv primer mix (50 μM), 0.375 μL Pf probe (10 μM), 0.375 μL Pv probe (10 μM) and 2.3 μL double-distilled water. The P. malariae/P. ovale ( Pm/Po ) duplex reaction contained equivalent amounts and concentrations of the respective primers and probes. The thermal profile was 2 minutes at 50°C, followed by 10 minutes at 95°C and 45 cycles of 15 seconds at 95°C and 1 minute at 58°C.
qPCR validation
Evaluation of PCR efficiency and reproducibility was performed on standard curves using four positive control plasmids with the respective amplicons inserted. Geometric means and standard deviations were calculated from triplicates in three independent assays. The standard curve for each Plasmodium species was made from a 10-fold serial dilution of the control plasmid, ranging from 10^6 copies/μL to 10 copies/μL.
Amplification efficiencies for the different primer pairs and probes were calculated with the formula:

Efficiency = 10^(-1/slope) - 1

Inter-assay coefficients of variation (CVs) were calculated for each plasmid dilution separately as (SD/mean) × 100, using the Ct values from different runs.
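A sketch of how the standard-curve slope, the amplification efficiency and the inter-assay CV can be computed from dilution-series Ct values. The Ct values below are synthetic, chosen to mimic a near-100%-efficient assay (about -3.32 Ct per 10-fold dilution); the function names are ours:

```python
# Standard-curve fit and amplification efficiency, following the formula above:
# Ct is regressed on log10(template copies); efficiency = 10**(-1/slope) - 1.
import math
import statistics

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct vs log10(copies); returns (slope, intercept)."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def efficiency(slope):
    return 10 ** (-1 / slope) - 1

def cv_percent(ct_replicates):
    """Inter-assay CV (%) from replicate Ct values: (SD/mean)*100."""
    return statistics.stdev(ct_replicates) / statistics.mean(ct_replicates) * 100

copies = [10, 10**2, 10**3, 10**4, 10**5, 10**6]
cts    = [36.68, 33.36, 30.04, 26.71, 23.39, 20.07]   # synthetic dilution series
slope, intercept = fit_standard_curve(copies, cts)
eff = efficiency(slope)   # close to 1.0, i.e. ~100% efficiency
```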
Reproducibility of qPCR was further analysed by repeating 10% of all 452 field-samples in the same laboratory, but at different time points. External quality assurance was performed in addition.
The amount of target in an unknown sample was quantified by converting the threshold cycle (Ct) into template copy number using the four standard curves. Samples yielding Ct values of 40 or higher were considered negative for the respective Plasmodium species [ 23 ].
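Quantification then inverts the standard curve: given a sample's Ct and the fitted slope and intercept, the template copy number follows directly, with Ct ≥ 40 reported as negative. The slope and intercept values below are hypothetical:

```python
# Converting a threshold cycle to template copies via the standard curve:
# Ct = slope * log10(copies) + intercept
# => copies = 10**((Ct - intercept) / slope)
# Samples with Ct >= 40 are called negative, as in the article.

def ct_to_copies(ct, slope, intercept, cutoff=40.0):
    if ct >= cutoff:
        return 0.0                      # reported as negative
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curve: slope -3.32, intercept 40 (Ct at 1 template copy)
copies = ct_to_copies(30.04, -3.32, 40.0)   # ~10**3 copies
neg = ct_to_copies(41.2, -3.32, 40.0)       # 0.0 (negative)
```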
Statistical analysis
Results from qPCR were compared with those from LM, nPCR and PCR-LDR-FMA. Agreement between diagnostic methods was determined by calculating kappa statistics with 95% confidence intervals. Values were interpreted with the Landis and Koch classification [ 24 ] as follows: k = 0.41-0.60, moderate agreement; k = 0.61-0.80, good agreement; k = 0.81-1.00, almost perfect agreement beyond chance. Prevalence for each species was calculated by dividing the number of positive samples by the total number of samples tested, and the McNemar test was used to assess the significance of differences. P values < 0.05 were considered statistically significant. Correlation between LM and qPCR quantification was assessed by the pairwise correlation coefficient. All statistical calculations were performed with STATA v.10 statistical software.
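A minimal sketch of the two agreement statistics named here, written for a 2×2 cross-tabulation of two diagnostic tests; the counts are hypothetical, and the sketch omits confidence intervals and continuity correction (the study itself likely used STATA's built-in routines):

```python
# Cohen's kappa and the (uncorrected) McNemar chi-square for two tests
# cross-tabulated as: a = both positive, b = test1+/test2-, c = test1-/test2+,
# d = both negative. Counts below are hypothetical.

def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """Chi-square statistic on the discordant cells only."""
    return (b - c) ** 2 / (b + c)

kappa = cohens_kappa(120, 10, 20, 300)   # ~0.84: almost perfect agreement
chi2 = mcnemar_chi2(10, 20)              # ~3.33 on 1 degree of freedom
```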
Optimization of qPCR assay
The large scale of the field work required optimization of molecular testing with the aim of reducing costs. Two strategies were applied: multiplexing the qPCR and reducing the reaction volume. The performance of the two duplex qPCRs was evaluated by analysing dilution series of the two respective control plasmids in various ratios (Additional file 1 ). These experiments demonstrated very little inhibition when 200 or more template copies were present in the reaction. Only the detection of minute template concentrations, as low as 20 templates per reaction, was slightly inhibited by excess amounts of the alternative template, i.e. a 100-fold excess.
Table 2 shows the duplex qPCR efficiencies for all four templates after reducing the reaction volume to 12.5 μL. The dilution series was performed with control plasmids containing inserts corresponding to the Plasmodium species-specific PCR product. Similar slopes were observed for all species, demonstrating similar amplification efficiency throughout the tested range and therefore comparability between quantification results. Correlation coefficients around 0.980 demonstrated that our assays were linear over the entire quantification range. The inter-assay coefficients of variation (CV) between 0.3% and 5%, obtained in all six dilutions, indicated reproducible results (Additional file 2 ).
Reproducibility of our assay was investigated by re-analysing 10% of the 452 samples in the same laboratory. Discordant results were recorded in 8 of the 45 compared samples. Four of these discrepancies occurred at low densities in the range of 10-100 copies/μL, with Ct values between 37 and 39 in the positive member of the duplicate tests. Two further discrepancies were observed in mixed species infections, whereby the minority species, present at 10 to 100 copies/μL, was detected in only one of the duplicate experiments.
Comparison of diagnosis of Plasmodium infection by qPCR, nPCR, PCR-LDR and LM
Prevalence values for P. falciparum showed significant differences when all detection techniques were considered (40.9% by qPCR, 43.8% by nPCR and 47.1% by LDR, p-values < 0.05). Nevertheless, good agreement was observed between qPCR and LM (k = 0.621) and between qPCR and PCR-LDR (k = 0.750), and agreement between qPCR and nPCR was almost perfect (k = 0.869) (Table 3 ). Prevalence for P. vivax by qPCR was not significantly different from that by LDR (65.7% versus 67.5%, p-value > 0.5), but a statistically significant difference was seen when comparing to nPCR (prevalence 73.2%, p-value < 0.05) or LM (prevalence 54.9%, p-value < 0.05). Overall, P. vivax prevalence by qPCR was in moderate agreement with the other assays (0.596 with nPCR, 0.553 with PCR-LDR and 0.530 with LM). Prevalence values for P. malariae and P. ovale were highly discrepant between the different techniques. Prevalence of P. malariae by qPCR (4.7%) differed significantly from that of PCR-LDR (prevalence 7.7%, p-value < 0.05), and from that of nPCR (prevalence 3.3%, p-value < 0.05). Moreover, prevalence of P. ovale was higher by qPCR (7.3%) compared to that of LDR (4.4%) and nPCR (3.8%); the differences between all these values were statistically significant. Prevalence by LM was significantly lower for all four species (25.4% for P. falciparum , 54.9% for P. vivax , 2.4% for P. malariae and 0% for P. ovale , p-values < 0.05). For P. malariae and P. ovale , moderate kappa scores were obtained when comparing qPCR with the other assays (Table 4 ).
Quantification of Plasmodium parasites
For P. falciparum , quantification by qPCR correlated well with microscopy counts when both were positive (R² = 0.8253). A substantial number of samples were negative for P. falciparum by LM but positive by qPCR; in the scatter plot in Figure 1A these samples lie on the ordinate. Very few samples were LM-positive but qPCR-negative for P. falciparum . As the densities of these samples were not extremely low and all of them were co-infected with other Plasmodium species, microscopic misclassification of the Plasmodium species is likely. The correlation between microscopy counts and qPCR-based densities was lower for P. vivax (R² = 0.5049). This lower agreement might be due to overall lower densities, possibly around the detection limit, in P. vivax infections. Such stochastic variation in parasite presence was likely responsible for negative slide results as well as for negative qPCR results, both indicated by the numerous data points on the axes in Figure 1B . Generally, in all PCR-based techniques, low densities lead to alternating positive and negative results in repeated experiments due to the coincidental absence of any template in some reactions. Furthermore, mixed species infections in which P. vivax is found at very low density contribute to a lower correlation between microscopy counts and qPCR-based densities, likely due to misclassification by LM. To illustrate this, the P. vivax correlation coefficient increased to R² = 0.7220 when only single infections by qPCR were analysed, but was as low as R² = 0.4372 when only mixed infections were considered. Few of the samples positive for P. malariae by qPCR and none of the P. ovale -positive samples were identified by light microscopy.
In preparation for major molecular epidemiological field studies in PNG, essential parasite detection techniques were compared under the conditions of a laboratory close to the field site and located in a malaria-endemic country. The diagnostic requirements were: (i) good performance in the detection of mixed species infections, as all four species concurrently occur in PNG; (ii) recognition of the P. malariae and P. ovale variants present in the study area; (iii) high-throughput capacity and robustness of the assay; (iv) quantitative results; and (v) reasonable costs. The qPCR assay described here was implemented and validated at the PNG-IMR site in Madang, demonstrating the feasibility of applying state-of-the-art techniques in this context. The qPCR assay has since been routinely used for molecular diagnosis in large-scale epidemiological studies at IMR.
As part of test validation in the field, the performance of this qPCR assay for Plasmodium species discrimination was compared to two other PCR-based assays (nPCR and LDR) and to LM. Traditionally, test outcomes for different assays are compared to an established 'gold standard' in order to calculate sensitivity and specificity estimates and to evaluate the performance of newly developed tests. The classical 'gold standard' for malaria diagnosis has been LM [ 3 ]; however, with the appearance of new molecular diagnostic tools, LM has become less suitable for this purpose due to its lower sensitivity compared with molecular methods [ 6 ]. Even though the nPCR developed by Snounou et al [ 9 ] has been extensively used as a 'gold standard' for molecular diagnosis [ 25 , 26 ], the concept of using a 'gold standard' for the evaluation of new assays is being questioned by various authors, who propose instead the use of 'non-gold standard' approaches [ 27 , 28 ].
The agreement between qPCR and the other techniques was substantial for P. falciparum , but only moderate for P. vivax , P. malariae and P. ovale . In particular, the agreement between qPCR and nPCR for P. falciparum detection was almost perfect. The lower agreement between PCR-LDR and nPCR, together with the higher prevalence shown by PCR-LDR (47.1% compared to 40.9% by qPCR and 43.8% by nPCR), may indicate false positive results by LDR. This is supported by our pairwise analysis and the agreement of two independent PCR based assays, namely nPCR and qPCR. However, in absence of a suitable diagnostic 'gold standard', it remains unclear if those 33 samples positive for LDR but negative by the two alternative molecular methods, represent a greater sensitivity of LDR or simply false positives. This issue cannot be easily resolved in a study involving 'unknown' samples from the field, potentially infected by four different Plasmodium species.
P. vivax prevalence was higher by nPCR than by both qPCR and PCR-LDR (73.2% by nPCR, 65.7% by qPCR and 67.5% by PCR-LDR). This again could reflect false positives by nPCR or lower sensitivity of both other molecular methods. Our observations in qPCR validation using plasmid template suggested that qPCR detection of P. vivax is slightly compromised by performing a duplex Pf/Pv reaction. nPCR involves a very high number of cycles (55 cycles by nPCR versus 45 cycles by qPCR and 35 cycles by LDR), and is therefore expected to show maximal sensitivity. Despite measurements taken over 45 cycles in qPCR, we followed the consensus rule for considering a sample positive, i.e. a Ct value < 40 [ 23 ]. In our samples this led to the loss of 9 samples with Ct values for P. vivax between 40 and 43.6 cycles, which otherwise would have increased the sensitivity of the assay. Further analysis was performed on samples with discrepant results for P. vivax (negative by qPCR and positive by nPCR). Most of these samples were mixed infections by nPCR and harboured P. falciparum at more than 10,000 copies/μL. Thus, competition for amplification at the beginning of the PCR, due to high P. falciparum densities, may preclude P. vivax detection [ 14 ]. Of the remaining samples, 14 of 16 were also negative by LM; most likely these very low-grade P. vivax infections were missed. The scarcity of template at very low parasite densities is expected to lead to imperfect detection. Prevalence values for P. malariae and P. ovale were low, with significant differences between assays, even though the agreement between pairwise compared methods was moderate. The higher prevalence of P. malariae by LDR is likely the result of false positives, probably due to the high background noise of the P. malariae probe used in the assay. The low detection of P. ovale by nPCR (3.8%) is due to the use of a primer pair with sub-optimal amplification of the P. ovale sequences present in the study area. Finally, LM measured the lowest prevalence for all four Plasmodium species.
The major advantage of qPCR over the other molecular techniques compared was the quantification of parasite densities. Parasite densities, expressed as copies of 18S rRNA template/μl, were quantified by converting the threshold cycle (Ct) into template copy number using the standard curves. When correlating quantification by qPCR with LM counts in samples where both techniques were positive, a high correlation was found for P. falciparum (R2 = 0.8253) and a lower correlation for P. vivax (R2 = 0.5049). For P. vivax , however, the correlation between qPCR and LM densities increased when only single infections were taken into account. Our results therefore suggest a variation in the detection limit of both methods, due to P. vivax being overlooked in the presence of an overwhelming P. falciparum infection. Difficulties in identifying P. vivax by LM arise when this parasite occurs at low densities and in mixed infections. The high P. falciparum densities found in the samples identified as mixed infections by qPCR (> 10,000 target copies/μl) further support this explanation. The correlation for P. malariae and P. ovale could not be analysed due to poor detection of both species by LM.
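The Ct-to-copies conversion and the Ct < 40 positivity rule described here can be sketched as follows. The slope and intercept below are illustrative placeholders, not the standard-curve parameters fitted in this study (a slope near -3.32 corresponds to ~100% amplification efficiency):

```python
def ct_to_copies(ct, slope=-3.3, intercept=38.0):
    # Convert a threshold cycle (Ct) to template copies/ul using a linear
    # standard curve of the form: Ct = slope * log10(copies) + intercept.
    return 10 ** ((ct - intercept) / slope)

def call_positive(ct, cutoff=40.0):
    # Consensus positivity rule: a sample is positive only if Ct < 40.
    return ct is not None and ct < cutoff

# With this illustrative curve, Ct = 28.1 corresponds to 1,000 copies/ul.
print(round(ct_to_copies(28.1)))  # 1000
print(call_positive(43.6))        # False: lost despite late amplification
```

Samples amplifying between cycles 40 and 43.6, as reported above, would be callable only by relaxing the cutoff, at the cost of more false positives.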
The qPCR assay was found optimal for both tasks, detection of all four Plasmodium species and quantification; the latter could only be analysed for P. falciparum and P. vivax . Overall, qPCR shows substantial agreement with the other molecular techniques for detecting P. falciparum and P. vivax , while moderate agreement was observed for P. malariae and P. ovale . The sensitivity of our qPCR assay could clearly be increased by simply performing an independent reaction for each Plasmodium species; however, this would substantially increase costs. Limiting factors, such as duplexing of assays, need to be balanced against costs and workload. The specific research objectives of a particular study should guide the choice of experimental procedures.
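The agreement categories used here ("moderate", "substantial") come from Cohen's kappa on paired positive/negative calls. A minimal sketch for a 2x2 comparison of two assays, with made-up counts rather than the study's data:

```python
def cohens_kappa(a, b, c, d):
    # Cohen's kappa for a 2x2 agreement table between two diagnostic methods:
    # a = both positive, b = method 1 only, c = method 2 only, d = both negative.
    n = a + b + c + d
    p_observed = (a + d) / n
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Illustrative counts: 40 concordant positives, 45 concordant negatives,
# 15 discordant samples out of 100.
print(round(cohens_kappa(40, 10, 5, 45), 2))  # 0.7
```

A kappa of 0.70 falls in the conventional "substantial" band (0.61-0.80) of the Landis and Koch scale, with 0.41-0.60 labelled "moderate".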
Overall, the superior performance of PCR-based methodologies over LM has been clearly demonstrated by these results and others. In a recent study conducted in Benin, a high proportion of children aged 5 or above (between 27% and 44%) who initially had negative RDT tests (most also with negative blood slides) were later found to be infected with P. falciparum by PCR [ 29 ]. These undetected submicroscopic infections have an enormous impact on malaria transmission in endemic areas. At a time when malaria eradication has become the primary goal of malaria agendas, accurate estimation of the burden of malaria infection is imperative to control transmission.

Conclusions
In conclusion, this qPCR assay was sensitive and specific for the detection of all four Plasmodium species, and its results agreed well with those of the other molecular techniques tested. Added advantages of the qPCR assay are the quantification of parasite densities and a less laborious workflow. Moreover, the assay performed well on field samples and, owing to its high-throughput capacity, is suitable for large-scale epidemiological studies. Finally, a quantitative assay is of greatest value for the monitoring of malaria control programmes.

Background
Accurate diagnosis of Plasmodium infections is essential for reducing malaria morbidity and mortality in tropical areas. Despite the great advantages of light microscopy (LM) for malaria diagnosis, its limited sensitivity is a critical shortfall for epidemiological studies. Robust molecular diagnostic tools are thus needed.
Methods
The present study describes the development of a duplex quantitative real-time PCR (qPCR) assay, which specifically detects and quantifies the four human Plasmodium species. The performance of this method was compared to that of a PCR-ligase detection reaction-fluorescent microsphere assay (PCR_LDR_FMA), nested PCR (nPCR) and LM, using field samples collected from 452 children one to five years of age from the Sepik area in Papua New Guinea. Agreement between diagnostic methods was calculated using kappa statistics.
Results
The agreement of qPCR with other molecular diagnostic methods was substantial for the detection of P. falciparum , but was moderate for the detection of P. vivax , P. malariae and P. ovale . P. falciparum and P. vivax prevalence by qPCR was 40.9% and 65.7% respectively. This compares to 43.8% and 73.2% by nPCR and 47.1% and 67.5% by PCR_LDR_FMA. P. malariae and P. ovale prevalence was 4.7% and 7.3% by qPCR, 3.3% and 3.8% by nPCR, and 7.7% and 4.4% by PCR_LDR_FMA. Prevalence by LM was lower for all four species, being 25.4% for P. falciparum , 54.9% for P. vivax , 2.4% for P. malariae and 0.0% for P. ovale . The quantification by qPCR closely correlated with microscopic quantification for P. falciparum and P. vivax samples (R2 = 0.825 and R2 = 0.505, respectively). The low prevalence of P. malariae and P. ovale did not permit a solid comparative analysis of quantification for these species.
Conclusions
The qPCR assay developed proved optimal for the detection of all four Plasmodium species. Densities by LM were well reflected in the quantification results by qPCR, whereby congruence was better for P. falciparum than for P. vivax . This is likely a consequence of the generally lower P. vivax densities. The easy performance of the qPCR assay, a less laborious workflow and a reduced risk of contamination, together with reduced costs per sample through a reduced reaction volume, open the possibility of implementing qPCR in endemic settings as a suitable diagnostic tool for large epidemiological studies.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
DM and IF conceived and designed the qPCR assay, and AR coordinated the study. DM validated the qPCR in the reference laboratory and AR in the endemic setting. IB and AR participated in the sample collection. JI performed the DNA extractions from the blood samples. CB performed the PCR-LDR-FMA. AR performed the qPCR and nPCR methods, the statistical analysis and the interpretation of the data. AR, IF and IM drafted the manuscript. PZ and HP critically reviewed the manuscript. All authors have read and approved the final manuscript.
Supplementary Material

Acknowledgements
We are particularly thankful to all the children of the Ilahita area and to their parents and/or guardians who participated in the study. We thank the staff of the Institute of Medical Research in Maprik for their assistance in collecting and processing blood samples, the staff of the Institute of Medical Research in Madang for the malaria microscopy readings, and Nomin-Dora Tenakanai for her help with the Bioplex analysis. This investigation received financial support from the Fundación Privada CELLEX (Catalonia, Spain).

Malar J. 2010 Dec 14; 9:361
Background
Every year, an estimated 500 million people are afflicted with malaria worldwide, and more than one million die, most of them children in sub-Saharan Africa [ 1 , 2 ]. Current control measures for preventing malaria transmission in Africa focus on the use of long-lasting insecticide-treated nets (LLITNs) treated with pyrethroids and on indoor residual spraying (IRS) with organochlorines and pyrethroids. Both of these control methods have proven effective as means of reducing Plasmodium transmission by endophagic malaria vectors, but both are potentially threatened by vector resistance to the currently used insecticides [ 3 ]. The development of novel methods to reduce Plasmodium transmission that can integrate with and enhance current malaria control measures, as well as other health priorities, is critical.
In 1987, ivermectin (IVM) was registered for human use for the control of onchocerciasis [ 4 ] and later for lymphatic filariasis [ 5 ]. Its primary effects are against microfilariae in the human body, which are the transmissible parasite stages of these diseases. Ivermectin has been used extensively since the mid-1990s in mass drug administration (MDA) campaigns across Africa by the African Programme for Onchocerciasis Control (APOC) [ 6 ] and the Global Program to Eliminate Lymphatic Filariasis (GPELF) [ 5 ]. Annually, more than 80 million people across the tropics are treated with IVM by MDA [ 7 ].
Previous in vitro and animal studies demonstrated that IVM can reduce the survivorship of multiple mosquito species after ingestion of the drug in blood [ 8 - 11 ]. Wilson [ 12 ] reviewed these and other studies and suggested that the avermectins might impart their strongest impact on disease transmission by reducing vector longevity and thereby vectorial capacity. Concentrations of IVM found in human venous plasma after standard IVM MDA (150 μg/kg) reduced the survivorship and re-blood feeding frequency of laboratory-reared Anopheles gambiae s.s., the two most critical variables in models of vectorial capacity [ 13 ]. Two studies have directly blood fed laboratory-reared Anopheles spp. mosquitoes on humans who had ingested IVM. Foley et al [ 14 ] reported reduced Anopheles farauti survivorship when mosquitoes fed on one person who ingested 250 μg/kg of IVM. Chaccour et al [ 15 ] found that An. gambiae s.s. blood fed on humans one day after they had ingested 200 μg/kg of IVM had significantly reduced survivorship, but the effect was no longer apparent fourteen days post-ingestion. To date, the only field-based study on the effects of IVM against wild mosquitoes was performed in Papua New Guinea and focused on lymphatic filariasis control. Bockarie et al [ 16 ] demonstrated that MDA with IVM (400 μg/kg) in combination with diethylcarbamazine citrate (6 mg/kg) in one village reduced the survivorship of wild Anopheles punctulatus for up to four days after MDA. In another village, MDA with IVM (400 μg/kg) alone reduced the survivorship of wild An. punctulatus captured the day after MDA [ 16 ].
The goal of this study was to determine if IVM MDA of humans in Senegal for onchocerciasis control could simultaneously reduce the survivorship of wild African malaria vectors, and if so, for how long this effect would occur, and to model this effect on malaria transmission. Villages in south-eastern Senegal have been treated once or twice yearly with IVM MDA (150 μg/kg) for onchocerciasis eradication for over fifteen years [ 17 ]. This same region has hyperendemic malaria transmission [ 18 , 19 ] and has an abundant and diverse Anopheles malaria vector fauna [ 20 ]. In 2008 and 2009, blood-fed Anopheles spp. were captured from inside people's huts before and after IVM MDA in three replicate pairs of villages in south-eastern Senegal. Survivorship of the mosquitoes was assessed by holding them in a field insectary for five days. Mosquito survivorship data were then incorporated into a modified, previously developed model [ 21 ] to evaluate the potential of IVM to reduce malaria transmission. The results demonstrate that IVM MDA reduces the survivorship of wild An. gambiae s.s. and that this reduction in survivorship should be sufficient to reduce malaria transmission.

Methods
Study site
The study was conducted in the villages of Boundacoundi, Damboucoye, Nathia, Ibel and Ndebou, all in the Sudano-Guinean zone of rural south-eastern Senegal. The five villages are located along a 15 km stretch of road heading west out of Kedougou. Most of the people in this area are subsistence farmers. They live in extended family compounds with 2-10 sleeping huts, and cultivate maize, sorghum and groundnuts between these compounds and in separate fields outside the village. Cattle, sheep, goats, dogs, and chickens are the primary domesticated animals in the villages. In 2008, two villages were sampled, Ibel and Ndebou. Ibel was treated by MDA with 150 μg/kg of IVM (Mectizan™, Merck & Co., Inc.) on August 8, 2008, while Ndebou was not treated and served as the paired control village. In 2009, two pairs of villages were sampled. Ndebou and Boundacoundi were the first pair of villages sampled and MDA occurred on August 6, 2009 in Ndebou, with Boundacoundi serving as the control. Damboucoye and Nathia were the second pair of villages sampled and MDA occurred on October 11, 2009 in Damboucoye, with Nathia serving as the control. MDA was coordinated by APOC in Senegal and the Senegalese Ministry of Health, and performed through community-directed treatment by the local nurses. Permission to conduct mosquito sampling surrounding these MDAs was granted first by the Senegalese Ministry of Health and then by the residents of each village. The study was also reviewed by the Colorado State University Institutional Review Board prior to being conducted.
Mosquito collections
Indoor resting, wild, blood fed Anopheles mosquitoes were collected in the morning from huts people had slept in the previous night using backpack aspirators (John W. Hock, Gainesville, FL, USA). After capture, Anopheles mosquitoes were transferred by mouth aspirators from backpack aspirator cups to 473 ml cardboard containers screened with organdy. The containers were labelled and designated by village, date collected, and the specific hut from which mosquitoes were collected. Containers were placed into a large basket and two moist towels were placed over the top of the basket to keep the mosquitoes humid and cool. Immediately following morning aspirations, the mosquitoes were transported back to the insectary in Bandafassi (2008) or Kedougou (2009) and maintained on shelves. Insectaries were designated rooms of houses and had screened and slatted windows so that they naturally fluctuated with the ambient temperature and humidity. Temperature within the insectary ranged from 27 to 30°C and relative humidity from 66 to 86%. Any dead, non-blood fed, or non- Anopheles mosquitoes were removed from the containers upon placement in the insectary. The containers had a moistened sponge and raisins placed on top to serve as water and sugar sources for the mosquitoes. Survivorship was checked daily at 12:00 pm and dead mosquitoes were removed from the containers. The containers were then randomly placed back onto the shelves. All mosquitoes that survived for five days were frozen and counted as alive on day five post-capture. In 2009, both the treated and the control village were sampled on the same day by two separate field teams. In 2008, only one village was sampled on each day (Additional file 1 ).
Mosquito processing
Mosquitoes were identified morphologically to species [ 22 , 23 ] in the field insectary immediately following their death. The abdomens were separated from the thoraxes of all Anopheles spp. and placed into two separate 1.5 ml microfuge tubes containing the silica gel desiccant T.H.E. (EMD Chemicals, NJ, USA). The tubes were labelled with the village, date of aspiration and hut location, and all relevant information was recorded on matching log sheets. Processed mosquitoes were shipped back to Colorado State University for further molecular analysis. DNA was extracted with the Qiagen DNeasy kit (Qiagen Sciences, Maryland, USA) and a Qiacube robot (Qiagen Sciences, Maryland, USA). Multiplex polymerase chain reaction was used to molecularly identify members of the An. gambiae s.l. complex [ 24 ]. A subset of twelve or fewer An. gambiae s.l. from each collection day that died within one day of capture and contained undigested blood at the time of processing had their blood meals analysed by the multiplex polymerase chain reaction to determine the source of blood [ 25 ].
Survivorship model and statistics
A generalized linear mixed (Glimmix) model was used to assess the effects of IVM MDA on mosquito survivorship. Mosquito survivorship results for each village at each sampling date were classified by treatment (whether or not they originated from a treatment or control village), replicate (the three pairs of villages sampled over the two field seasons), and phase. The three phases are groups of sampling dates from each village before MDA (phase 1), one to six days after MDA (phase 2), and seven days or more after MDA (phase 3). The one to six day grouping immediately after MDA was chosen based on the differential survivorship curves generated from the three replicates (Figure 1 ). Treatment and phase were treated as fixed effects. Replicates were treated as random effects with sample date nested within replicate and phase. The model was fit by the SAS Glimmix procedure using maximum likelihood estimation with three Gaussian quadrature points (SAS Institute, Cary, NC, USA). The percent survivorship of mosquitoes was tested for interaction of treatment by phase, and if significant, then post-tests were performed to determine which treatment by phase groups significantly differed from the others. The efficacy of IVM to reduce mosquito survivorship would be reflected by a significant drop in the treated village survivorship at the phase 2 group compared to control and pre-treatment groups.
Modelling the change in the basic reproductive number of malaria due to ivermectin mass drug administrations
The daily probability of mosquito survivorship was calculated from the five-day survivorship values of An. gambiae s.l. captured before and after MDA in all three replicates. These daily probability of survivorship estimates were then inserted into an age-structured model of mosquito population dynamics from Billingsley et al [ 21 ] with several minor modifications. The proportion of treated humans was not held constant, but rather varied temporally based on MDA coverage. It was assumed that once treated with the drug, humans remained mosquitocidal to feeding Anopheles for six days. Output of this model was used as input for an age-structured model [ 21 ] showing the relative change in the basic reproductive number of malaria during IVM treatment. A relative R 0 < 1 indicates a decrease in transmission, while relative R 0 > 1 indicates an increase in transmission and relative R 0 = 1 indicates no change.
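Under the assumption of a constant daily survival probability p, the fraction of a cohort still alive after n days is p to the power n, so p can be back-calculated as the n-th root of the observed five-day survivorship. A minimal sketch (the five-day fractions below are derived from the daily rates quoted in this paper, not from raw field counts):

```python
def daily_survival(fraction_alive, days=5):
    # Back-calculate a constant daily survival probability p from the
    # fraction of mosquitoes still alive after `days` days: p = f ** (1/days).
    return fraction_alive ** (1.0 / days)

# 0.96 daily survival implies 0.96**5 ~ 81.5% alive at day 5 (untreated);
# 0.86 daily survival implies 0.86**5 ~ 47.0% alive at day 5 (treated).
print(round(daily_survival(0.96 ** 5), 2))  # 0.96
print(round(daily_survival(0.86 ** 5), 2))  # 0.86
```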
Immature mosquito life stages were assumed to have a daily survival rate of 0.9. Based on the data estimated from field-collected mosquitoes, it was assumed that mosquitoes that imbibed blood from an untreated human had a daily survival rate of 0.96, while mosquitoes that fed on a treated human had a daily survival rate of 0.86 for three days post-feeding. To be conservative, it was assumed that after feeding on a treated human, mosquitoes experienced a decrease in their daily survival rate for three days post-feeding, after which they recovered to untreated levels. An extrinsic incubation period of fourteen days was used for the model. Age-specific fecundity was not affected by IVM, as was previously stated [ 21 ].

Results
Mosquito survivorship analysis
Figure 1 depicts the percent survivorship of An. gambiae s.s. from all three replicates grouped by treatment and phase. There is an observable reduction in An. gambiae s.s. survivorship after IVM MDA (phase 2) in the treated villages that lasts for six days. The survivorship of An. gambiae s.s. in Ibel during phase 3 is low (Figure 1 ), but this line only represents ten mosquitoes caught from one collection (Additional file 1 ). A total of 1,265 An. gambiae s.s. from three replicates were captured and held for survivorship analysis. The model of estimated mosquito survivorship for An. gambiae s.s. identified a treatment by phase interaction, indicating that the difference between treated and control survivorship depends on phase (F-value = 18.27, P < 0.0001) (Figure 2 ). In follow-up comparisons, treatment at phase 2 significantly differed from control at phase 2 (t-value = 4.01, P = 0.0003), and it also significantly differed from both treatment at phase 1, pre-MDA (t-value = 8.31, P < 0.0001) and treatment at phase 3, seven days and after IVM MDA (t-value = -4.61, P < 0.0001). The conclusion of this analysis is that IVM MDA significantly reduced the survivorship of An. gambiae s.s. for six days past the date of the MDA.
Adequate numbers for survivorship analysis of An. arabiensis were only caught during the third replicate (Damboucoye and Nathia, n = 153). There appears to be a reduction in survivorship of An. arabiensis following IVM MDA (Figure 3 ), but treatment by phase comparisons were not significantly different from each other (F-value = 0.66, P = 0.5332). However, treatment alone was significant (F-value = 7.01, P = 0.0191), therefore the overall survivorship of An. arabiensis was lower in the treated village compared to the control village (Figure 4 ).
Blood meal analysis
Almost all, 97.8% (136/139), of An. gambiae s.s. blood meals that were analysed were from humans. Based on this information it was assumed that almost all of the An. gambiae s.s. held for the study had fed on humans. However, only 75% (24/32) of analysed An. arabiensis blood meals were from humans (Figure 5 ).
Modelling the effect of ivermectin treatment on malaria transmission
Simulations indicate that MDA with IVM can significantly reduce R 0 for a short period of time after drug administration (Figure 6A ). Using the previously stated assumptions and an ideal situation of 100% coverage, a synchronized MDA of IVM results in an approximately 90% reduction in R 0 immediately following MDA (Figure 6A ). After effective drug concentrations are cleared from the blood of the human population, R 0 rebounds to pre-treatment levels or slightly higher if no further MDAs are performed. To keep relative R 0 levels at significantly lower levels, drug treatment must be administered repeatedly. Less frequent treatments allow for periods of control alternated with periods of no control (Figure 6A ).
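Summarizing these fluctuating dynamics by averaging the relative R 0 curve over time (area under the curve divided by the length of the interval) can be sketched as follows. The daily values are made up to mirror the reported ~90% post-MDA dip; they are not model output:

```python
def mean_relative_r0(series, dt=1.0):
    # Trapezoidal area under a daily relative-R0 time series, divided by
    # the total time span, giving the average relative R0 over the interval.
    area = sum((series[i] + series[i + 1]) / 2.0 * dt
               for i in range(len(series) - 1))
    return area / (dt * (len(series) - 1))

# One MDA cycle: baseline (1.0), six suppressed days (0.1), then rebound.
relative_r0 = [1.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 1.0, 1.0, 1.0]
print(round(mean_relative_r0(relative_r0), 2))  # 0.4
```

More frequent MDAs shorten the rebound portion of the curve and therefore pull this average further below 1.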
Due to the fluctuating nature of the control dynamics, where R 0 changes on a daily basis, it is more informative to compare the average relative R 0 between different treatment intervals. This can easily be calculated by summing the area under the curve for both pre- and post-treatment scenarios. Under the ideal conditions of 100% coverage with treatment every week, the mean R 0 can be reduced by 80%. Lower levels of coverage or less frequent MDAs reduce the efficacy of this strategy, but in all cases MDA resulted in some level of control (Figure 6B ).

Discussion
Previously published laboratory based evidence showed that colonized An. gambiae s.s. is susceptible to IVM at concentrations relevant to human pharmacokinetics after a typical MDA [ 13 ], and that colonized An. gambiae s.s. fed on IVM-treated humans one day post-treatment had reduced survivorship [ 15 ]. The current study now demonstrates that routine MDA of IVM to people significantly reduces the survivorship of wild An. gambiae s.s. for up to six days post MDA. This six day lethal effect is longer than the two days observed from Kobylinski et al . [ 13 ], and this effect occurs despite incomplete MDA coverage in treated villages. Three field replicates were performed over space and time in different villages to make this study a rigorous assessment of the effects of IVM MDA on An. gambiae s.l.
There were no significant differences in An. arabiensis treatment by phase survivorship (F-value = 0.66, P = 0.5332) but this was almost certainly due to sampling, in that adequate numbers (n = 153) for survivorship analysis were only captured in the third replicate MDA. The third replicate MDA was performed in October 2009, toward the end of the rainy season when An. arabiensis is more prevalent [ 20 ]. There was a 38% reduction in mosquito survivorship from phase 2 treatment collections compared to phase 2 control collections (Figure 4 ). The fact that treatment alone was significant (F-value = 7.01, P = 0.0191), means that the overall survivorship of An. arabiensis was lower in the treated village compared to the control village (Figure 4 ). Furthermore, only 75% (24/32) of An. arabiensis blood meals were from humans (Figure 5 ), which reduces the probability that mosquitoes held for survivorship analysis may have ingested an IVM-containing blood meal. Fritz et al [ 26 ] reported that colonized An. gambiae s.s. and colonized An. arabiensis have almost identical susceptibility to IVM. When these data are considered together, it is reasonable to assume that upon further replication wild An. arabiensis will be shown to be as susceptible to IVM MDA as wild An. gambiae s.s.
Based on clinical records, 84.2% (203/241) of the people in Damboucoye and 82.1% (311/379) of the people in Ndebou were treated with IVM during these two MDAs. Pregnant women and children under 90 cm did not receive the drug, following the manufacturer's guidelines. Mosquitoes that were held for survivorship analysis for five days had completely digested their blood meals, which made it impossible to detect IVM in individual mosquitoes. It is therefore notable that mosquito survival was still significantly reduced despite it being unknown whether any one mosquito had fed on a treated person. Incomplete coverage may actually be beneficial to the overall concept of repeated IVM MDAs for malaria control, in that it may provide a refugium of untreated human hosts for mosquitoes to feed on, which could reduce the likelihood of IVM resistance developing in the mosquito population.
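If host choice is assumed to be random, incomplete coverage blends the treated and untreated daily survival rates in proportion to coverage. A minimal sketch using the coverage and daily rates reported here; this is a simplification, since the actual model tracks treated status and time since feeding:

```python
def effective_daily_survival(coverage, s_treated=0.86, s_untreated=0.96):
    # Expected daily survival of a human-feeding mosquito when only a
    # fraction `coverage` of hosts carries a mosquitocidal IVM concentration.
    return coverage * s_treated + (1.0 - coverage) * s_untreated

# At ~84% coverage (as in Damboucoye): 0.84*0.86 + 0.16*0.96 = 0.876.
print(round(effective_daily_survival(0.84), 3))  # 0.876
```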
Of the people surveyed across the four villages in 2009, 78.2% (903/1,155) utilized ITNs. Even with high ITN coverage, human blood fed An. gambiae s.s. and An. arabiensis were frequently collected from the inside of huts, demonstrating that ITNs have limitations in preventing Anopheles from feeding on people in huts. Exophagic and exophilic malaria vectors also comprise an important part of the malaria transmission cycle in this study area; Anopheles funestus group mosquitoes are almost twice as likely to blood feed outdoors than indoors [ 20 ], and Anopheles nili tend to be exophilic or to immediately exit huts after biting [ 20 , 27 ]. ITNs may reduce malaria transmission by exophagic vectors [ 28 - 30 ], but their primary efficacy is against endophagic vectors. It has also been shown that ITNs may shift vector host-seeking times to earlier in the evening, when people will not be sleeping under an ITN [ 31 ]. IRS will only affect the survivorship of endophilic vectors that contact the sprayed surfaces, and it is believed that the exophilic portion of the An. gambiae s.l. population led to the failure of IRS to eliminate malaria transmission during the Garki project [ 32 ]. Furthermore, a number of malaria vectors naturally feed at dusk and dawn, when humans are less likely to be indoors and protected by an ITN or by IRS. Ivermectin MDA may be one of the few methods that can directly target these exophagic, exophilic, and crepuscular-feeding malaria vectors, and it should integrate well with existing in-home control methods like ITNs and IRS.
Ivermectin has a different mode of action from the insecticide classes currently used for ITNs and IRS (i.e., carbamates, pyrethroids, and organochlorines) [ 3 ], in that it agonizes the glutamate-gated chloride anion channels found in invertebrate postsynaptic neurons and neuromuscular junctions [ 33 , 34 ]. This action hyperpolarizes the neurons and muscle fibers, leading to flaccid paralysis and insect death [ 35 - 37 ]. Once or twice yearly IVM MDA has been occurring in this region for over fifteen years [ 17 ], so the fact that a reduction in survivorship of An. gambiae s.s. was still detectable is a promising sign that resistance by Anopheles spp. may be slow to develop against this drug. Furthermore, the novel mode of action of ivermectin compared to the currently used insecticides for malaria control should minimize issues of cross-resistance where IVM MDA may be used in combination with IRS and ITNs.
Anopheles gambiae s.s. often requires two blood meals to complete its initial gonotrophic cycle [ 38 , 39 ], thereafter will often take multiple blood meals per gonotrophic cycle [ 40 - 42 ], and feeds almost exclusively on humans (Figure 5 ) [ 20 ]. These blood feeding characteristics, coupled with the fact that the extrinsic incubation period for Plasmodium spp. is 9-14 days, mean that most malaria transmission by An. gambiae s.s. will occur only after Plasmodium parasite-harbouring mosquitoes have taken multiple non-sporozoite-transmitting blood meals from humans [ 21 , 43 ]. If human population clusters were simultaneously treated with IVM MDA, then most adult An. gambiae s.s. in the MDA area would imbibe a concentration of IVM that would reduce their survivorship. The predicted effect has the potential to temporarily shift the An. gambiae s.s. population age structure, which would reduce the reservoir of adult sporozoite-transmitting An. gambiae s.s. in the MDA area. The low sporozoite rate in the resulting mosquito population would temporarily reduce the basic reproductive number (R 0 ) of malaria below its baseline value for approximately eleven days post IVM MDA. Current IVM MDAs for onchocerciasis control in Africa are performed only once or twice per year, and do not always coincide with local malaria transmission seasons. Such current applications would not be expected to lower malaria transmission long enough to see any noticeable reductions in parasite prevalence, intensity or disease in people. Indeed, malaria is hyperendemic in APOC-controlled areas of south-eastern Senegal despite IVM MDAs for more than 15 years. However, if IVM MDA is administered repeatedly, R 0 can be reduced for an extended period of time.
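The importance of adult survivorship follows from the classic observation that, with a constant daily survival probability p and an extrinsic incubation period of n days, only a fraction p to the power n of newly infected mosquitoes lives long enough to transmit. A quick sketch with the daily rates used in this paper's model and a 14-day EIP:

```python
def survive_eip(daily_survival, eip_days=14):
    # Probability that a newly blood-fed mosquito survives the extrinsic
    # incubation period (EIP) and can go on to transmit sporozoites.
    return daily_survival ** eip_days

untreated = survive_eip(0.96)  # ~0.56
treated = survive_eip(0.86)    # ~0.12
print(round(treated / untreated, 2))  # 0.21: roughly a five-fold reduction
```

This exponential dependence on daily survival is why even a modest drop from 0.96 to 0.86 translates into a large drop in the sporozoite-transmitting reservoir.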
This model, like all models, makes assumptions that may not be realistic in nature, such as homogeneous mosquito biting, no spatial structure, and the lack of density-dependent effects. However, the model results are conservative, since the model only incorporates the direct IVM-related mortality effect observed in the field. Kobylinski et al [ 13 ] demonstrated in the laboratory that multiple sub-lethal IVM-containing blood meals compound mosquito mortality, but the model assumes complete recovery of surviving mosquitoes three days after ingesting an IVM-containing blood meal. Kobylinski et al [ 13 ] also demonstrated that the re-blood feeding frequency and blood digestion of An. gambiae s.s. are delayed after imbibing relevant IVM concentrations. Finally, Fritz et al [ 26 ], as well as personal observations, have noted mosquito knockdown effects immediately after drug ingestion. All of these negative sub-lethal effects would probably lead to a further reduction in An. gambiae s.s. survivorship in the field, due to desiccation, predation, or insufficient nutrition, beyond the outright mortality induced by IVM. Thus, the age-structured model may underestimate the true effect of IVM MDA on malaria transmission.
The age-structured model predicts that strategically administered, repeated IVM MDAs would achieve sustained reductions in malaria transmission. Such repeated MDAs may only be logistically feasible for areas with seasonal malaria transmission, such as the Sahel, or in areas experiencing malaria epidemics. An expanded IVM MDA regimen fits well with the idea of combating polyparasitism in some of these same communities [ 44 , 45 ]. Malaria and soil-transmitted helminths (STHs - Ascaris lumbricoides, Trichuris trichiura, Strongyloides stercoralis , and hookworms) are co-endemic across much of sub-Saharan Africa [ 46 - 50 ]. The annual/biannual IVM MDAs for onchocerciasis control are not generally sufficient for controlling STHs because of relatively rapid re-infections due to their transmission dynamics [ 51 - 55 ]. Repeated IVM MDAs are likely to result in reductions of the prevalence and intensities of STHs in individuals receiving the drug [ 52 - 59 ], and would increase the personal incentive to participate in such MDAs. The combination of malaria and soil-transmitted helminth infections can exacerbate anaemia, resulting in worsened child development and more adverse pregnancy outcomes than these diseases cause on their own [ 60 - 64 ]. Therefore, the combined effects of IVM MDA on multiple parasites could potentially reduce anaemia in the human population which would lead to an overall improvement in human health beyond what would be expected from malaria control alone.
Future field work will need to be conducted to determine whether repeated IVM MDAs can quantifiably reduce malaria transmission and whether the model created here over- or underestimates what would occur in the field. Important to this future work is determining whether a logistically feasible IVM MDA interval can be devised that would reduce malaria transmission below a critical threshold without fostering IVM resistance in either Anopheles vectors or human nematode parasites.

Conclusions
This study demonstrates that IVM MDA reduces the survivorship of wild-caught An. gambiae s.s. for up to six days post MDA, and the modelled field data show that repeated IVM MDAs should result in a sustained reduction of malaria transmission. Ivermectin MDA would be both a novel insecticide class and a novel delivery method for reducing vector survivorship. It should also affect exophagic, exophilic, and crepuscular-feeding malaria vectors not normally targeted by malaria control measures. Repeated IVM MDAs should concomitantly reduce STH prevalence and intensities, which would further benefit human health beyond that of malaria control alone.

Background
In south-eastern Senegal, malaria and onchocerciasis are co-endemic. Onchocerciasis in this region has been controlled by once or twice yearly mass drug administration (MDA) with ivermectin (IVM) for over fifteen years. Since laboratory-raised Anopheles gambiae s.s. are susceptible to ivermectin at concentrations found in human blood post-ingestion of IVM, it is plausible that a similar effect could be quantified in the field, and that IVM might have benefits as a malaria control tool.
Methods
In 2008 and 2009, wild-caught blood-fed An. gambiae s.l. mosquitoes were collected from huts of three pairs of Senegalese villages before and after IVM MDAs. Mosquitoes were held in an insectary to assess survival, were subsequently identified to species, and their blood meals were identified. Differences in mosquito survival were statistically analysed using a generalized linear mixed (GLIMMIX) model. Lastly, changes in the daily probability of mosquito survivorship surrounding IVM MDAs were calculated, and these data were entered into a previously developed, mosquito age-structured model of malaria transmission.
Results
Anopheles gambiae s.s. (P < 0.0001) and Anopheles arabiensis (P = 0.0191) from the treated villages had significantly reduced survival compared to those from control villages. Furthermore, An. gambiae s.s. caught 1-6 days after MDA in treated villages had significantly reduced survival compared to control village collections (P = 0.0003), as well as to those caught pre-MDA (P < 0.0001) and >7 days post-MDA (P < 0.0001). The daily probability of mosquito survival dropped >10% for the six days following MDA. The mosquito age-structured model of malaria transmission demonstrated that a single IVM MDA would reduce malaria transmission (R0) below baseline for at least eleven days, and that repeated IVM MDAs would result in a sustained reduction in malaria R0.
Conclusions
Ivermectin MDA significantly reduced the survivorship of An. gambiae s.s. for six days past the date of the MDA, which is sufficient to temporarily reduce malaria transmission. Repeated IVM MDAs could be a novel and integrative malaria control tool in areas with seasonal transmission, and would have simultaneous impacts on neglected tropical diseases in the same villages.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MS, KCK, and BDF designed research; MS, KCK, MG and BDF performed research; BDF contributed reagents; MS, KCK, MDS and BDF coordinated field studies, MS, KCK, PLC, JLR and BDF analysed data/developed models; MS, KCK, JLR and BDF wrote the paper; MS, KCK, MG, PLC, JLR and BDF edited the paper. All authors read and approved the final manuscript.
Supplementary Material

Acknowledgements
We would like to thank Dr. Doudou Sene, head of the medical district of the Kedougou region, Mactar Mansaly, Infirmier d'Etat a Bandafassi, Rigobert Keita and Filly Keita for their collaboration. We also thank the people of Ndebou, Ibel, Boundacoundi, Nathia, and Damboucoye for their hospitality and collaboration. This work was supported by grant AI079528, and by contract N01-AI-25489 from the U.S. National Institutes of Allergy and Infectious Diseases, by the Bill and Melinda Gates Foundation's Grand Challenges Explorations program and by Colorado State University CVMBS Research Council #1686174.

Malar J. 2010 Dec 20; 9:365 (CC BY)
PMC3016375 (PMID: 21176207)

Background
Traditionally, microscopic examination of stained blood films remains the method of first choice for malaria diagnosis, both in endemic and non-endemic settings but also more recently developed molecular techniques have gained their place in malaria diagnosis, especially in reference centers [ 1 - 4 ]. Real-time PCR assays are particularly attractive because of the short turn-over-time and the avoidance of post-PCR contamination [ 5 , 6 ].
Although PCR is typically performed on whole blood samples [ 2 , 3 , 7 ], malaria diagnosis would benefit from the use of thick blood films (TBF) as an alternative source of DNA in case whole blood samples are not available. Indeed, stained blood films are frequently the only specimen presented for a second opinion in reference laboratories, as whole blood samples require demanding storage and transport conditions. Moreover, it is known that species identification by microscopic examination can be difficult and depends on the quality of the blood film [ 8 , 9 ]. In addition, archived blood film collections can be used for retrospective PCR analysis, as demonstrated before [ 10 - 12 ].
The application of PCR to stored blood films has been demonstrated previously. Several reports indicated poor performance at low parasite densities [ 7 , 10 , 13 ] or interference from the staining [ 7 , 8 , 13 - 15 ]. Recently, a real-time PCR was developed and evaluated on whole blood samples; it proved excellent in the detection of single and mixed infections [ 16 ] and showed a low detection limit, which prompted us to apply this PCR to the analysis of TBFs. The present study describes the successful use of Giemsa-stained TBFs for PCR detection and illustrates its use for malaria diagnosis in reference settings when whole blood samples are not available.
Laboratory diagnosis of malaria at ITM
Clinical samples were derived from patients suspected of malaria presenting at the outpatient clinic of the Institute of Tropical Medicine (ITM) Antwerp, Belgium, or were submitted by Belgian laboratories to ITM for confirmation within the scope of the national reference function. Malaria diagnosis at ITM is accredited according to ISO 15189:2007 and is performed by a combination of standard microscopy, antigen detection and real-time PCR.
TBFs were made with approximately 20 μl venous blood, stained with Giemsa (pH 8.0) and examined by light microscopy at ×500 magnification. Parasite density was expressed as the number of asexual parasites/μl. Species identification was done by microscopy on May-Grünwald Giemsa-stained thin blood films. After microscopic analysis, immersion oil was removed from the TBFs with xylene. After drying, they were stored in a closed box at room temperature.
Antigen detection was done by the SD-FK60 Malaria Ag Pf/pan test (Standard Diagnostics, Hagal-Dong, Korea) for qualitative detection of P. falciparum histidine-rich protein-2 (HRP-2) and pan-species parasite lactate dehydrogenase (pLDH).
From January 2007, all samples that were positive by microscopy or antigen testing were prospectively analysed by real-time PCR on 200 μl of fresh EDTA-anticoagulated whole blood for confirmation or correction of the species identification. Whole blood samples of patients diagnosed between January 1997 and December 2006 were retrospectively confirmed or corrected by PCR on frozen samples as part of the validation of the recently developed real-time PCR [ 16 ]. The laboratory malaria diagnosis status was considered negative when microscopy, antigen detection and PCR were all negative.
Test samples
From the ITM collection of stored Giemsa-stained TBFs, a panel of malaria positive (n = 115) and malaria negative (n = 20) TBFs was selected. The selection was made so that the panel represented malaria positive TBFs with single infections of one of the four Plasmodium species ( Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale, Plasmodium malariae ) with varying parasite densities (1 to 222,241/μl or only gametocytes) and different storage times (1 month to 12 years) and with mixed infections as determined by PCR on whole blood during laboratory malaria diagnosis (see above) (Table 1 ).
In addition, two thin blood films of P. falciparum and P. vivax that had been archived as teaching slides at the ITM for 32 and 42 years, respectively, were tested to give an indication of DNA recovery from very long-term stored blood films.
Sampling and DNA extraction from TBFs
The TBF was split in two equal parts by tracing a straight and transversal line with a sterile scalpel (number 15, Farla Medicals, Antwerp, Belgium). Next, 10 μl of phosphate buffered saline (PBS; 0.02 M, pH 7.4) was dropped onto half of the surface of the TBF and this part was scraped off from the glass slide by making circular movements with the scalpel. The collected material was transferred in a sterile 1.5 ml tube that contained 90 μl PBS. DNA was extracted with the QIAamp DNA mini kit (Qiagen Benelux, Venlo, The Netherlands) according to the manufacturer's instructions but with reduced buffer volumes: only 100 μl of AL lysis buffer, 50 μl of ethanol and 50 μl of AE elution buffer was used. For each TBF a separate scalpel was used.
In addition, to evaluate alternative extraction methods and the influence of the Giemsa staining, unstained and Giemsa-stained TBFs and thin blood films of a P. falciparum sample with 60/μl were extracted according to 1) the Qiagen method described above, 2) the boiling method in Chelex-100 described by Kawamoto et al [ 17 ] and 3) the heating method in water described by Volpini et al [ 12 ].
Real-time PCR on TBF
The 'four-primer' real-time PCR with a non-competitive design was used as described before [ 16 ]. Briefly, four Plasmodium species-specific forward primers and four Plasmodium species-specific probes together with one Plasmodium genus-specific reverse primer were used to target the 18S small subunit rRNA gene of the four Plasmodium species. Two duplex reactions, one to detect P. falciparum and P. vivax and another to detect P. ovale and P. malariae , were run in parallel for 2 min at 95°C followed by 50 cycles of 15 sec at 95°C and 60 sec at 60°C on the SmartCycler II (Cepheid Benelux, Bouwel, Belgium). Five microlitres of DNA was used in each reaction mixture.
Slides that were negative with the Plasmodium real-time PCR were analysed with a human beta-globin (HBB) real-time PCR to control for efficient DNA extraction and to rule out PCR inhibition. The HBB primers (200 μM) as described by Steinau et al [ 18 ] were used together with a Texas-Red labeled probe (400 μM) designed for real-time application (5'-TGCCCTCCCTGCTCCTGGGA-3'). The PCR was run for 15 min at 95°C followed by 50 cycles of 5 sec at 95°C, 20 sec at 60°C and 30 sec at 72°C on the SmartCycler II (Cepheid Benelux).
Analytical sensitivity
To determine the analytical sensitivity, 10-fold serial dilutions were made from a single EDTA-blood sample infected with P. falciparum at a parasite density of 206,100/μl. A TBF was made of each dilution. PCR was performed on all TBF dilutions and on all dilutions of the whole blood. The highest dilution with a positive PCR signal indicated the detection limit.
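The dilution arithmetic behind the reported detection limits can be checked directly. The short sketch below is illustrative rather than part of the study protocol; which dilution step was the last positive one is inferred from the reported limits of 0.2/μl (TBF) and 0.02/μl (whole blood).

```python
start_density = 206_100  # asexual parasites/ul of the undiluted sample

# Densities of successive 10-fold dilutions
densities = [start_density / 10 ** i for i in range(9)]

# Mapping each reported detection limit to a dilution step:
tbf_limit = densities[6]          # ~0.2/ul  -> last TBF-positive dilution
whole_blood_limit = densities[7]  # ~0.02/ul -> last whole-blood-positive dilution
print(tbf_limit, whole_blood_limit)
```

That is, the whole-blood PCR remained positive one further 10-fold dilution step down the series than the PCR on TBF.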
Reproducibility
To determine the reproducibility of the DNA extraction, eight TBFs were made from a single P. vivax sample and DNA was extracted from each. Comparison with the reproducibility on whole blood was done on eight fractions of another P. vivax sample with a comparable Ct-value. The subsequent PCRs on TBF and whole blood were performed in the same run, and the coefficient of variation (%CV) of the Ct-values for TBF and whole blood was calculated.
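The %CV computation can be sketched as follows; the replicate Ct-values below are hypothetical, for illustration only (the study's raw replicate values are not reported here).

```python
import statistics

# Hypothetical Ct replicates from eight TBFs of one sample (not study data)
ct_replicates = [33.1, 33.9, 32.6, 34.2, 33.5, 32.9, 34.6, 33.0]

def percent_cv(values):
    """Coefficient of variation as a percentage: 100 * sample SD / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

print(round(percent_cv(ct_replicates), 2))
```

A %CV near zero indicates tightly clustered replicate Ct-values; the larger %CV reported for TBF than for whole blood reflects greater run-to-run scatter of the TBF extractions.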
Data analysis
The difference in DNA detection was determined by subtracting the Cycle threshold (Ct)-value measured by PCR on whole blood from that measured by PCR on TBF, and is indicated by ΔCt. Statistical differences between the logarithmic Ct-values of both specimen types were determined by paired t-test analysis. The mean of all ΔCt-values was calculated together with the ± 95% confidence interval (± CI 95%) of the mean ΔCt.
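A minimal sketch of this analysis is shown below, using hypothetical paired Ct-values and a normal approximation for the confidence interval (the study's exact interval would use the t distribution with n - 1 degrees of freedom).

```python
import math
import statistics

# Hypothetical paired Ct-values for five samples (not the study's data)
ct_tbf   = [36.2, 31.8, 34.5, 29.9, 38.1]
ct_blood = [30.1, 26.0, 28.9, 24.2, 31.8]

# Delta Ct = Ct(TBF) - Ct(whole blood); positive values mean
# less DNA was recovered from the TBF
delta_ct = [t - b for t, b in zip(ct_tbf, ct_blood)]

mean_d = statistics.mean(delta_ct)
se_d = statistics.stdev(delta_ct) / math.sqrt(len(delta_ct))

# Paired t statistic for H0: mean difference == 0
t_stat = mean_d / se_d

# Approximate 95% CI half-width (1.96 * SE, normal approximation)
half_width = 1.96 * se_d
print(f"mean dCt = {mean_d:.2f} +/- {half_width:.2f}, t = {t_stat:.1f}")
```

Because the pairing removes between-sample variation, even a modest mean ΔCt with small scatter yields a large t statistic, as in the study's highly significant TBF-versus-whole-blood comparison.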
Sampling and testing of DNA extraction method
From the preliminary test assays (Table 2 ), it is clear that the Qiagen extraction method is superior to the Chelex-boiling and water-heating methods for the recovery of DNA from blood films. Comparison of P. falciparum and HBB Ct-values obtained with the Qiagen method indicated no large differences between unstained and stained TBFs and thin blood films. In contrast, no parasite or human DNA was detected in the samples extracted with the Chelex method. With the water-heating method, only human DNA was detected, with high Ct-values, in unstained blood films. Consistent with the lower blood volume applied onto thin blood films, slightly higher Ct-values were seen compared to TBFs. Based on these results, stained TBFs were used as the specimen type in all further assays and were processed by the Qiagen method.
Panel of clinical samples
Table 1 lists the panel of 135 TBFs representing single infections of the four Plasmodium species (n = 110), mixed infections (n = 5), and negative samples (n = 20) and indicates the parasite density levels by categories ranging from 1-100 (n = 30), 101-500 (n = 16), 501-1,000 (n = 16) and more than 1,000 (n = 48) asexual parasites/μl. Five of the samples with single infections contained only gametocytes of P. falciparum (n = 4) or P. vivax (n = 1) (Table 1 ). The malaria positive TBFs (n = 115) were stored for one month to two years (n = 66), 3 to 5 years (n = 15), 6 to 8 years (n = 21) or 9 to 12 years (n = 13).
Diagnostic sensitivity of PCR on TBF
Compared to PCR on whole blood as the reference method, PCR on TBFs identified 109 out of the 110 (99.1%) single Plasmodium infections: all samples containing P. falciparum (n = 50), P. vivax (n = 25) and P. ovale (n = 25) were correctly detected as well as all but one containing P. malariae (n = 10) (Table 3 ). In the latter sample, microscopic examination of only the thin blood film, and not of the TBF, revealed a single schizont and no other asexual parasites. The HBB PCR to control the extraction did not give a signal. Additional analysis on the other half of the TBF gave again a negative result for P. malariae but revealed a HBB Ct-value of 44.43. The TBF was stored for 1 year and 11 months.
Compared to microscopy, PCR on TBF generated three additional species identifications (one P. falciparum , one P. vivax and one P. ovale ) in which microscopy detected Plasmodium parasites without an unambiguous species identification (Table 4 ).
Compared to PCR on whole blood, four of the five mixed infections were correctly identified by PCR on TBF (one P. vivax/P. malariae co-infection and three P. falciparum/P. malariae co-infections) (Table 3 ). One P. ovale , detected with a Ct-value of 37.48 by PCR on whole blood, was missed in a co-infection with P. falciparum and was not diagnosed by microscopy either (Table 4 ).
Diagnostic specificity of PCR on TBF
No Plasmodium DNA amplification was seen in all twenty TBFs from samples that were negative with standard microscopy, antigen detection and PCR on whole blood, indicating a diagnostic specificity of 100%. The HBB PCR revealed in all those samples a positive signal.
Analytical sensitivity and reproducibility of PCR results
The detection limit was 0.2 asexual parasites/μl for PCR on TBF compared to 0.02 for PCR on whole blood. Reproducibility testing demonstrated a coefficient of variation of 1.90 for PCR on TBF while for PCR on whole blood a variation of only 0.54 was observed.
Comparison of Ct-values of the PCR on TBF and on whole blood
The Ct-values of each sample measured by PCR on TBF and by PCR on whole blood were significantly different. The mean ΔCt-value was 5.93 (± 0.43), with a maximum difference of 12.57 Ct-values in a P. ovale sample with 480 parasites/μl that had been stored for 8 years and 11 months, and a minimum difference of 1.79 Ct-values in a P. falciparum sample with 32 parasites/μl that had been stored for eight years and eight months. The difference in Ct-value between PCR on TBF and PCR on whole blood was consistently seen for all samples regardless of parasite density or storage time.
Additional PCR analysis demonstrated the ability to detect P. falciparum DNA with a Ct-value of 35.32 and P. vivax DNA with a Ct-value of 39.80 in the two thin blood films that were archived for respectively 32 and 42 years.
Contamination
Of note, contamination was observed once during the entire study, in the TBF extraction procedure. One single infection with P. ovale was identified as a mixed infection with P. falciparum by PCR on TBF, probably because of cross-contamination from the preceding TBF containing P. falciparum that was extracted in the same batch. A second extraction on the other part of the TBF confirmed the single infection with P. ovale .
In this study, the ability of real-time PCR to amplify DNA extracted from Giemsa-stained TBFs was evaluated. Challenged with a panel of archived clinical TBF samples, the PCR proved accurate in the detection of single and mixed species infections on samples of the four Plasmodium species with varying parasite densities and stored for up to 12 years.
This study is the first to report real-time PCR analysis of TBFs, allowing a fast turn-around time and a high diagnostic sensitivity. Previous studies used either conventional or nested PCR assays and focused on a single species, mostly P. falciparum [ 10 , 14 , 19 - 21 ]. Compared to PCR on whole blood, 99.1% of single infections of the four Plasmodium species and four of the five mixed infections were detected, which is excellent in comparison to previous studies that demonstrated a diagnostic sensitivity of 71% [ 20 ] or 85.6% [ 22 ]. Despite the low blood volume, Giemsa staining and months to years of storage, Plasmodium species identification was possible by PCR on TBFs.
Notably, samples with low parasite densities were also correctly identified. Earlier PCR studies on TBFs did not include [ 11 , 15 ] or were not able to detect low parasite density samples [ 7 , 20 ]. Detection of low parasite densities is of particular importance since rapid diagnostic tests, frequently used as an adjunct for the diagnosis of malaria, have low sensitivities at parasite densities below 100/μl and 500/μl for P. falciparum and the non- falciparum species, respectively [ 23 - 27 ]. Likewise, the detection by PCR on TBF of samples with only gametocytes can be considered an advantage in non-endemic settings, as patients may still be diagnosed after empiric (self-)treatment.
With PCR on TBFs, only one single infection with P. malariae was missed. This may be explained by sample error: PCR on the corresponding whole blood sample demonstrated the presence of P. malariae DNA and microscopic analysis of the thin blood film, but not of the TBF, revealed only one schizont of P. malariae .
One of the advantages of the Plasmodium real-time PCR on whole blood is its high analytical sensitivity [ 16 ]. Applied to TBF, however, the PCR showed an analytical sensitivity that was 10 times lower. This might be explained by differences in sample volume (equivalent to ~10 μl for TBF versus 200 μl for whole blood). Despite this, the actual detection limit of 0.2 asexual parasites/μl is much lower than those previously described for other PCR assays on TBF, which reported values of 20, 500 and 3,500/μl respectively [ 7 , 10 , 13 ]. The low sample volume of the TBF probably also explains the higher intra-run variation as compared to whole blood. The lower concentration of DNA extracted from TBFs is also reflected in the higher Ct-values observed by PCR on TBF. The negative result for P. ovale in the mixed infection could be explained by this, as the amount of DNA extracted from the TBF for this minor species was probably below the limit of detection.
For sampling, half of the TBF surface was used, preserving the other half for additional microscopic or molecular examinations. DNA extraction was done by minor adaptation of the convenient Qiagen mini-spin column format frequently used for whole blood samples. No influence of the staining was seen with this method. Indeed, column-based extraction methods include washing steps to remove unwanted inhibitory PCR factors. Some studies described the use of a simple boiling method in Chelex or water for DNA extraction [ 12 , 17 , 20 , 22 ], but preliminary results with those methods proved unsuccessful in this study. This indicates that highly purified DNA is required for real-time PCR assays, and that efficient extraction in negative TBFs should be controlled by human beta-globin PCR. Care should be taken during the scraping and collection of TBF material, which carries an inherent risk of contamination, as experienced once in this study. Likewise, storage and staining conditions, immersion oil and xylene have been described as sources of Plasmodium DNA contamination [ 15 , 20 ].
Despite the excellent performance of the PCR on TBF samples, whole blood samples remain the first choice for malaria diagnosis by PCR, and TBF samples should only be considered when whole blood is not available. This preference is justified by the higher accuracy of the PCR when applied to whole blood, and by concerns about contamination, despite rigorous procedures.
A limitation of the present study is its retrospective design, which did not allow a side-by-side comparison for all samples of the Ct-values obtained from stored TBF and from fresh whole blood. However, it is of note that more than 40% of the TBF samples had been stored for more than three years and that PCR was successfully applied to TBF samples stored for up to 12 years. Li et al [ 11 ] demonstrated malaria RNA in blood smears stored up to 20 years and Volpini et al [ 12 ] detected Leishmania DNA in a slide that had been stored for up to 36 years. Noteworthy, P. falciparum and P. vivax DNA was successfully amplified from two stained teaching slides that had been stored for 32 and 42 years, respectively.
Another limitation of the present study is that the real-time PCR assay does not detect Plasmodium knowlesi , a simian malaria parasite that can cause malaria in humans and is mainly distributed in Southeast Asia [ 28 , 29 ]. Some studies reported the detection of P. knowlesi by PCR in returned travellers [ 30 - 32 ]. The well-defined panel of clinical samples used in this study probably does not contain this rare species. However, awareness is needed for travellers returning from P. knowlesi -endemic regions who have positive microscopy results with parasites resembling P. malariae or P. falciparum morphology but negative PCR results. A P. knowlesi -specific probe and forward primer have recently been designed at the laboratory of ITM, but the evaluation of the PCR multiplex design to detect this additional species is still under investigation.
What may be the applications of the current PCR on TBF? From the present results, it is clear that PCR on TBF is a reliable alternative in case whole blood is not available, and thus of added value for reference malaria diagnosis. Its detection limit is still below that of microscopy in reference settings, i.e. close to 10-50/μl [ 1 , 2 , 33 ], and that of malaria rapid diagnostic tests, which can only be performed on whole blood samples. This means that PCR applied to TBF can confirm or rule out the diagnosis of malaria and can confirm or adjust species identifications made by microscopy alone. For instance, PCR can generate species identification for parasites that are notoriously difficult to distinguish from each other (like P. ovale and P. vivax ) [ 34 , 35 ], for samples with ambiguous species identification by microscopy because of low parasite densities, poor staining quality or altered parasite morphology due to treatment, or for the detection of mixed species infections.
TBF samples have logistic advantages over whole blood. They are frequently stored for reasons of traceability, and they are less demanding for storage and shipment as compared to whole blood [ 36 ]. In that way, applications of PCR may also be foreseen in research settings: TBFs are part of standard laboratory work-up of patients suspected of malaria both in endemic and non-endemic settings and may be used for further work-up, thereby avoiding the need for extra sampling of blood. This may be important especially in vulnerable patients groups (such as children) and in communities who are reluctant to blood sampling [ 18 , 37 ]. In addition, TBFs can easily be sent to reference laboratories for quality control of field study results. | Conclusions
In conclusion, the present study demonstrated that Giemsa-stained TBFs can be a reliable alternative source of DNA through a simple sampling and extraction protocol. Other real-time PCR assays or amplification methods might adapt and use this platform, offering opportunities of studying clinical samples and archived collection samples for epidemiological relationships and resistance markers. | Background
This study describes the use of thick blood films (TBF) as specimens for DNA amplification with the Plasmodium species-specific real-time PCR that was recently validated on whole blood samples.
Methods
The panel of 135 Giemsa-stained clinical TBFs represented single infections of the four Plasmodium species with varying parasite densities or only gametocytes, mixed infections, and negative samples and was stored for up to 12 years. Half of the Giemsa-stained TBF was scraped off by a sterile scalpel and collected into phosphate buffered saline. DNA was extracted with the Qiagen DNA mini kit with minor modifications. DNA was amplified with the 18S rRNA real-time PCR targeting the four Plasmodium species with four species-specific primers and probes in combination with one genus-specific reverse primer. Results of the PCR on TBF were compared to those of the PCR on whole blood and to microscopy.
Results
Correct identification for single species infections was obtained for all TBF samples with Plasmodium falciparum (n = 50), Plasmodium vivax (n = 25), Plasmodium ovale (n = 25) and in all but one of the samples with Plasmodium malariae (n = 10). Compared to whole blood samples, higher Ct-values were observed by PCR on TBF, with a mean difference of 5.93. Four out of five mixed infections were correctly identified with PCR on TBF. None of the negative samples (n = 20) gave a PCR signal. PCR on TBF showed a detection limit of 0.2 asexual parasites/μl compared to 0.02/μl for whole blood. Intra-run variation was higher for PCR on TBF (%CV 1.90) compared to PCR on whole blood (%CV 0.54). Compared to microscopy, PCR on TBF generated three more species identifications in samples containing a single species and detected the same four mixed infections.
Conclusions
Giemsa-stained TBFs are a reliable source of DNA for Plasmodium real-time PCR analysis, allowing applications in reference and research settings in case whole blood samples are not available. | List of abbreviations
CI: Confidence interval; Ct: Cycle threshold; CV: Coefficient of variation; DNA: Deoxyribonucleic acid; EDTA: Ethylene diamine tetra-acetic acid; HBB: Human beta-globin; HRP-2: Histidine-rich protein-2; ITM: Institute of Tropical Medicine; PBS: Phosphate-buffered saline; PCR: Polymerase chain reaction; pLDH: Plasmodium -specific parasite lactate dehydrogenase; TBF: Thick blood film.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LC and JJ designed the study protocol. MvE and EB organized and coordinated the sample collection. LC performed PCR analysis, analysed and interpreted the results. LC, MvE, and JJ drafted the manuscript. LC performed the statistical analysis. All authors read and approved the final manuscript. | Acknowledgements
We would like to thank the staff of the Central Laboratory of Clinical Biology (ITM, Antwerp) for technical support, with special thanks to Vicky de Boer and Kathy Demeulemeester for PCR analysis and to Denise Boon, Agnes D'hondt and Pieter Guetens for microscopy and antigen detection.

Malar J. 2010 Dec 22; 9:370 (CC BY)
PMC3016376 (PMID: 21110860)

Background
The number of patients with chronic pain in Japan is estimated to be approximately 17 million [ 1 ]. Issues concerning the treatment of chronic pain, such as rising medical expenses, inappropriate treatment measures and deteriorating quality of life for both patients and their families, have resulted in immeasurable social loss, compounded by personal and social factors associated with the aging society [ 2 , 3 ].
Pain of any kind can be a strong impetus for patients to seek medical care, as they assume that pain indicates a serious medical matter. Pain is defined by the International Association for the Study of Pain (IASP) as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" and is regarded as chronic when non-malignant pain persists for more than six months [ 4 ]. Emotional distress related to pain is perhaps its most negative aspect, in addition to the physical pain itself, and pain must be considered a multi-dimensional experience with physiological, affective, cognitive, behavioral and social components relevant to treatment [ 5 ]. In accordance with the multitude of factors associated with pain, a multidisciplinary approach was developed [ 6 ] and demonstrated to be the most efficacious and cost-effective treatment for chronic pain [ 7 - 9 ]. Few practitioners in Japan, however, have adopted this approach, as several issues complicate its implementation, such as conflicts in establishing a multidisciplinary team, under-developed treatment programs, and insufficient medical funding. Under such circumstances, a conventional biomedical approach is likely to be adopted or even reinforced [ 10 , 11 ].
Most of our chronic pain patients appeared to be exhausted both physically and psychologically when referred to our Department of Psychosomatic Medicine. Patients often complained of dissatisfaction with previous treatments and distrust of medical professionals. Consequently, our physicians often experienced difficulty in establishing therapeutic relationships with these patients at the time of admission. Kenny suggests that a fundamentally complicating factor between a patient and his/her physician may originate from "struggles for legitimating the cause of pain" as either biogenic or psychogenic [ 12 ]. This is particularly important in conventional biomedical settings, where the issue of whether pain is "real or not" is crucial for treatment. Furthermore, patients and doctors challenge each other's credibility in such cases, which undermines the quality of their interactions and causes each other distress [ 12 ]. Given that 85% of lower back pain cases have no identifiable physical basis [ 13 ], patients with medically unexplained disorders may have negative experiences during medical encounters. This type of doctor-patient relationship is not therapeutic and becomes an additional source of stress [ 12 , 14 ]. It would be of great help for medical professionals to consider the psychological aspects of pain treatment for patients with chronic pain, in order to ensure that pain management is both beneficial and therapeutic.
The psychological aspects of chronic pain have been the subject of many studies, which have revealed important psychological constructs that can prolong or exacerbate pain, such as pain catastrophizing [ 15 ], fear avoidance [ 16 ], low self-efficacy [ 17 ] and coping strategies [ 18 ]. Not all of these pain-related assessment measures are available in Japan, although several studies have examined the personality and mood status of chronic pain patients using measures such as the Cornell Medical Index (CMI), the Self-rating Depression Scale (SDS), the Manifest Anxiety Scale (MAS), the Tokyo University Egogram (TEG), and the Minnesota Multiphasic Personality Inventory (MMPI). These studies indicated that the psychological features experienced by chronic pain patients are neurotic, depressive and hypochondriac in nature, and that patients also exhibit defensive tendencies. These features result from chronic pain and indicate that psychological aspects must be addressed in treatment regardless of the cause of pain [ 19 , 20 ].
The psychological measures in these studies were obtained with self-report questionnaires that focused mainly upon the assessment of psychopathology. However, the findings often appeared to reflect the presence of chronic pain and related symptoms rather than psychopathology, and did not reveal the original psychological resources of the patients concerned. In consideration of the complexity of pain experiences, the Rorschach Comprehensive System (CS) can provide a multi-dimensional view of the psychological characteristics of chronic pain patients. This system investigates both original psychological resources and the processes that generate symptoms and/or behaviors, because those processes are thought to parallel the ones involved in producing Rorschach responses. Furthermore, respondents' efforts to minimize their problems are unlikely to bias the results, as respondents are rarely aware of the type of interpretation that will be made of their responses [ 21 ]. Using the Rorschach CS, Acklin and Bernat examined chronic low back pain (LBP) patients to address the association between LBP, depression, and alexithymia. These authors found the Depression Index (DEPI) to be reduced in LBP patients along with Rorschach features consistent with alexithymia, and a number of similarities to a personality disorders group [ 22 ]. Alexithymia, conceptualized by Sifneos, describes a trait syndrome associated with difficulty identifying and communicating emotions, confusion between emotional and somatic sensations, and impoverishment of fantasy and of the capacity for symbolic thought [ 23 ].
The Rorschach CS has never been fully explored in Japanese patients suffering from chronic pain; this study therefore aimed to investigate the comprehensive psychological characteristics of Japanese patients with chronic pain. | Methods
Participants
One hundred and three (103) in-patients reporting non-malignant pain for more than six months were admitted to the Department of Psychosomatic Medicine at Kansai Medical University Hospital between January 2006 and June 2008. Twenty-three patients (22%) who were under 20 or over 70 years of age were excluded in order to allow comparison of the Rorschach data against non-patient Japanese adult controls within the same age range. Nineteen patients (18%) could not be contacted upon their admission for logistical reasons, and ten (10%) were excluded due to their physical condition or as a result of decisions made by their attending physicians. Two patients (2%) refused to participate. After written informed consent had been obtained from the remaining forty-nine (49) patients, the Rorschach CS was administered by the first author, a clinical psychologist. One patient was subsequently eliminated due to an insufficient number of responses, as protocols with fewer than 14 responses are considered invalid in the CS [ 21 ], leaving forty-eight (48) patients for data analysis.
Table 1 lists the breakdown of demographic variables and pain status. Mean patient age was 43.42 years (±14.52, range 21-68), with 14 males (41.07 ± 14.25) and 34 females (44.38 ± 14.72). Forty-four patients (91.7%) were referred from other departments or hospitals, and 4 patients (8.3%) visited the Department of their own accord. The average number of hospitals visited prior to admission to our department was 5.13 (±2.75), with a range of 2 to 15. Mean pain duration before taking the Rorschach test was 4.19 years (±3.72), with a range of 0.6 to 16.8 years. Fifteen patients (31%) exhibited mild depressive states, but none of our patients suffered from neurological problems or psychotic symptoms, and none met the DSM-IV criteria for a major depressive episode [ 24 ].
Psychological measures
The Rorschach Comprehensive System (CS) has standard administration rules and is the most commonly used Rorschach scoring system in the world. It has good inter-rater and test-retest reliability along with good construct validity [ 21 , 25 ]. In this system, seven major groups of variables, each referred to as a 'cluster', are evaluated: 1) control and stress tolerance, 2) information processing, 3) mediation, 4) ideation, 5) affect, 6) self-perception, and 7) interpersonal perception and behavior. The system also provides reference data for non-patient controls, including the expected normative range for each variable, which facilitates interpretation: a score's distance from the normative mean, expressed in standard deviations, indicates whether it falls within or outside the normative limits that serve as the basis for predictions.
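The normative-range check described above amounts to standardizing a score against the non-patient reference mean and standard deviation. The sketch below illustrates the idea only; the variable names are CS variables, but the normative means and SDs shown are hypothetical placeholders, not the actual published reference values.

```python
# Sketch of a normative-range check: a score is flagged when it falls more
# than a chosen number of standard deviations from the non-patient mean.
# The normative means/SDs below are hypothetical placeholders, NOT the
# actual Comprehensive System reference values.

NORMS = {
    "EA": {"mean": 8.5, "sd": 2.0},   # hypothetical normative values
    "es": {"mean": 8.0, "sd": 2.5},   # hypothetical normative values
}

def z_score(variable, score, norms=NORMS):
    """Standardize a score against the normative mean and SD."""
    n = norms[variable]
    return (score - n["mean"]) / n["sd"]

def within_normative_limits(variable, score, cutoff=2.0, norms=NORMS):
    """True when the score lies within +/- cutoff SDs of the norm."""
    return abs(z_score(variable, score, norms)) <= cutoff

print(round(z_score("EA", 10.5), 2))        # (10.5 - 8.5) / 2.0 = 1.0
print(within_normative_limits("es", 14.0))  # |(14.0 - 8.0) / 2.5| = 2.4 -> False
```

In practice the published CS reference tables would replace the placeholder norms, and the cutoff would follow the interpretive guidelines for each variable rather than a blanket ±2 SD rule.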
Scoring procedures
The responses of our chronic pain patients were scored by the first author, a clinical psychologist. Scorings were reviewed by a second clinical psychologist affiliated with a different department and finalized after discrepancies had been discussed. Further consultations were made with a Rorschach expert to finalize data when agreement was not reached between the two local psychologists. Both clinical psychologists were trained in the administration and scoring of the Rorschach CS and have extensive experience with this system.
Statistical analysis
Means and frequency data for all major variables of the Rorschach CS were compared between chronic pain patients (hereafter 'CP patients') and non-patient adults (hereafter the 'NA group'). The latter group consisted of 200 males (35.26 ± 12.28) and 200 females (35.92 ± 12.02), with a mean age of 35.59 (±12.84) [ 26 ]. No significant differences in age or education were detected between the CP and NA groups in the demographic data. Group means were compared using Student's t-test, or Welch's t-test when homogeneity of variance could not be assumed. Frequency data for major variables or indices were examined using the chi-squared test. All analyses were performed with SPSS v.11.5. For all analyses, a probability value of p < .05 was considered significant, and variables were noted when they deviated from the expected value or range.
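The two tests used here can be reproduced with standard statistical routines. The sketch below uses SciPy in place of SPSS, and all of the numbers are fabricated for illustration; they are not the study's actual scores or cell counts.

```python
# Illustration of the two tests used: Welch's t-test for comparing group
# means when equal variances are not assumed, and a chi-squared test for
# frequency (index-positive) data.  SciPy stands in for SPSS here; the
# data below are fabricated examples, not the study's actual values.
from scipy import stats

# Hypothetical per-subject scores on some Rorschach variable
cp_scores = [10, 12, 14, 16, 18]   # fabricated CP-group values
na_scores = [8, 9, 10, 11, 12]     # fabricated NA-group values

# Welch's t-test: equal_var=False drops the homogeneity-of-variance assumption
t_stat, p_mean = stats.ttest_ind(cp_scores, na_scores, equal_var=False)

# Chi-squared test on a 2x2 frequency table; the cell counts are fabricated
# to mirror a 35% vs 20% index-positive rate in groups of 48 and 400.
#                 positive  negative
contingency = [[17, 31],    # CP patients (n = 48)
               [80, 320]]   # NA group    (n = 400)
chi2, p_freq, dof, expected = stats.chi2_contingency(contingency)

alpha = 0.05
print(f"Welch t = {t_stat:.2f} (p = {p_mean:.3f})")
print(f"chi2 = {chi2:.2f}, dof = {dof}, significant at .05: {p_freq < alpha}")
```

Note that `scipy.stats.chi2_contingency` applies Yates' continuity correction by default for 2x2 tables, which is the conservative choice for frequency comparisons of this kind.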
Ethics
All procedures were submitted to and approved by the Ethical Committee of Kansai Medical University Hospital. | Results
Tables 2 and 3 list the major Rorschach variables for CP patients and the NA group in terms of: 1) control and stress tolerance, 2) affect, 3) information processing, 4) mediation, 5) ideation, 6) self-perception, and 7) interpersonal perception and behavior. Some variables appear in more than one cluster because they relate to more than one characteristic. Further descriptions of the technical terms associated with the Rorschach test, their abbreviations, and their interpretation are provided by Exner [ 21 ] and Takahashi [ 26 ].
Control and stress tolerance (Tables 2 & 3 )
Here, we examined the capacity to make decisions and implement specific behaviors that meet situational demands, considering the use of resources, stimulus demands, and stress tolerance. The Adjusted D score (AdjD, one's capacity for control), the D score (D, stress tolerance and elements of control), and the Experience Actual score (EA, available resources that make it possible to adjust one's own needs and emotions to match external reality) in CP patients were within the expected range, and no significant inter-group differences were apparent. The Experienced Stimulation score (es, stimulus demands) of CP patients was within the expected range, but was significantly higher than that of the NA group (p < .05, Table 2 ). Three types of coping style have been identified: (i) introversive--introspection-based; (ii) ambitent--inconsistent or flexible in using emotions or introspection according to the situation; and (iii) extratensive--emotion-based. The frequency of the introversive style in CP patients was significantly lower (19%) than in the NA group (33%; p < .05, Table 3 ). No inter-group difference was found in the frequency of the ambitent style between CP patients (56%) and the NA group (53%). The frequency of the extratensive style in CP patients was relatively higher (25%) than in the NA group (14%; p < .10, Table 3 ).
The control and stress tolerance data suggest that CP patients originally possess adequate capacities for control and stress tolerance and utilize appropriate psychological resources, as do most adults (EA, AdjD and D). Fewer CP patients exhibited the introversive coping style, implying that fewer CP patients reason things through, keeping emotions aside, before making decisions or solving problems. About half of the CP patients and half of the NA group were classified as exhibiting the ambitent style, an inconsistent or flexible way of using thoughts and emotions according to the situation. More CP patients exhibited the extratensive style, suggesting that they tend to invest more of their feelings into decision-making and/or problem-solving processes and are more likely to use interactions with their environment as a source of information and/or gratification.
Affect (Tables 2 & 3 )
Variables relating to affect examine the role of emotions in the psychological functioning and organization of the person. The frequency of a positive Suicide Constellation (S-CON, self-destructive preoccupation) in CP patients was significantly higher (6%) than in the NA group (1%; p < .01, Table 3 ). Lambda (L, economic use of resources) was significantly lower in CP patients than in the NA group (p < .01, Table 2 ). The frequency of a positive Depression Index (DEPI > 4, an implicit depressive mood) in CP patients was significantly higher (35%) than in the NA group (20%; p < .05, Table 3 ). The SumShading score (sum of C' + V + T + Y, unusual distress experiences) in CP patients deviated above the expected range and was significantly higher than in the NA group (p < .01, Table 2 ). The frequency of FM + m < SumShading (more distress or emotional discomfort) in CP patients was significantly higher (35%) than in the NA group (20%; p < .05, Table 3 ). The Achromatic Color variable (SumC', excessive internalization of feelings) in CP patients deviated above the expected range and was significantly higher than in the NA group (p < .05, Table 2 ). The Diffuse Shading variable (SumY, situational stress-related psychological helplessness) in CP patients deviated above the expected value and was significantly higher than in the NA group (p < .01, Table 2 ). Multiple determinants (Blends, psychological complexity) in CP patients deviated above the expected value and were significantly higher than in the NA group (p < .05, Table 2 ). The Color-Shading blends (Col-Shading, confusion or ambivalence of feelings) in CP patients deviated slightly above the expected range and were significantly higher than in the NA group (p < .01, Table 2 ).
With regard to the modulation of emotional discharge, the Form Color response (FC, well-controlled emotional experiences with situation-appropriate expression) was significantly lower in CP patients than in the NA group (p < .01, Table 2 ), whereas the Color Form responses (CF, less restrained forms of affective discharge/expression) were significantly greater in CP patients than in the NA group (p < .05, Table 2 ). The frequency of (CF + C) > FC + 2 in CP patients was significantly higher (35%) than in the NA group (12%; p < .01, Table 3 ).
In summary, the data relating to affect suggest that CP patients experienced unusually high levels of distress and/or emotional discomfort (FM + m < SumShading), such as self-destructive thoughts (S-CON), depressive mood (DEPI > 4) and a sense of helplessness due to situational stress (SumY). These patients did not use the tactic of psychologically ignoring the complexity and/or ambiguity of a field (L), and their psychological function and processing seemed more complicated and confused by ambivalent feelings (Blends, Col-Shading). Their modulation of emotional discharge was likely to be unstable; in other words, CP patients exhibited a tendency to either excessively internalize their feelings (SumC'), or discharge them expulsively in a more uncontrolled manner (FC, CF, and CF + C > FC + 2).
Cognitive functions (Tables 2 & 3 )
Here, we examined three aspects of cognitive function, collectively termed the 'cognitive triad': (i) information processing; (ii) cognitive mediation; and (iii) ideation, the thinking process leading to some form of mental conceptualization of the translated information.
Variables of information processing assess the mental procedures entailed in the input of information. There were no significant differences in the total number of responses (R) to the 10 inkblots between CP patients (22.02 ± 8.45) and the NA group (23.51 ± 6.9), nor in the Whole response (W, commendable processing effort) or the Hypervigilance Index (HVI, hyperalertness). The Common Detail response (D, easy or economical scanning) was significantly lower in CP patients than in the NA group (p < .01, Table 2 ). The Unusual Detail response (Dd, greater focus on minute or unusual features of a new field of information, with more processing effort) in CP patients deviated significantly from the expected range and was significantly higher than in the NA group (p < .05, Table 2 ). The Zd value (Zd, efficiency of scanning activity during information processing) was significantly higher in CP patients than in the NA group (p < .01, Table 2 ). Perseverations (PSV, difficulty in shifting attention) were slightly higher than expected and significantly higher in CP patients than in the NA group (p < .05, Table 2 ). No significant inter-group differences were found in Developmental Quality (DQ, quality of processing activity).
In summary, the information processing data suggest that CP patients are less likely to use economical scanning to gain new information (D); instead, they focus more on minute or unusual features within a new field of information (Dd). The scanning efficiency and quality of processing activity of CP patients appear more than adequate compared with the NA group (Zd), but more patients in the CP group exhibited some difficulty in shifting their attention (PSV).
Variables concerning cognitive mediation assess the mental operations that translate or identify inputted information. The frequency of a positive Perceptual-Thinking Index (PTI > 3, mediational and ideational difficulties) was significantly greater in CP patients than in the NA group (p < .01, Table 3 ). XA% (appropriate form use) and WDA% (appropriate form use in common areas) in CP patients were within the expected range, but were significantly lower than in the NA group (p < .01 and p < .05 respectively, Table 2 ). Form Quality minus responses (X-% & FQ-, distorted form use) deviated significantly from the expected range in CP patients and were significantly greater than in the NA group (p < .01 and p < .05 respectively, Table 2 ). The Distorted Space response (S-, mediational dysfunction due to negativism or anger) in CP patients was slightly higher than expected and significantly higher than in the NA group (p < .05, Table 2 ). Appropriate/common good form (X + %) in CP patients was within the expected range, but was significantly lower than in the NA group (p < .01, Table 2 ). Less conventional, more idiographic form (Xu%) and the Popular response (P, expected or acceptable responses) in CP patients were within the expected range, with no significant inter-group differences.
In summary, the cognitive mediation data suggest that mediation in CP patients was usually appropriate to the situation, and that they possessed the basic skills necessary to interact successfully with their surroundings, although these were less appropriate than in the NA group (XA%, WDA%). The probability of CP patients giving fewer conventional responses in simple and/or precisely-defined situations was low (Xu%, P), even though some problems were observed (X + %). However, CP patients exhibited a moderate elevation in the incidence of mediational dysfunction (PTI > 3, X-%, FQ-) when directly associated with unstable affective conditions, particularly possible feelings of negativism or anger (S-).
Variables concerning ideation assess the conceptualization of translated inputs. Morbid content (MOR, pessimistic conceptual thinking) in CP patients was slightly higher than the expected value and significantly greater than in the NA group (p < .05, Table 2 ). The frequency of Mp > Ma (passive human movement > active human movement, a tendency to defensively retreat into fantasy) in CP patients was significantly lower (17%) than in the NA group (35%; p < .05, Table 3 ). The Weighted sum of six special scores (WSum6: DV, DR, INCOM, FABCOM, ALOG, and CONTAM; difficulties in conceptual thinking and issues of ideational clarity) of CP patients deviated widely from the expected range and was significantly greater than in the NA group (p < .01, Table 2 ). Among the Sum6 special scores, Deviant Verbalization (DV, distorted language use or idiosyncratic modes of expression) in CP patients was slightly higher than the expected value and significantly higher than in the NA group (p < .01, Table 2 ). Deviant Response (DR, indecisiveness or a defensive attempt to detach from the task at hand) in CP patients deviated greatly from the expected value and was significantly higher than in the NA group (p < .01, Table 2 ). Incongruous Combination (INCOM, conceptual failure to discriminate and/or concrete reasoning) was significantly lower in CP patients than in the NA group (p < .01, Table 2 ). Inappropriate logic (ALOG, strained or unconventional reasoning to justify the answer) in CP patients was slightly higher than the expected value and significantly greater than in the NA group (p < .05, Table 2 ).
In summary, ideation data suggest that CP patients' conceptual thinking was often distinguished by a moderately pessimistic mindset (MOR), but that they did not defensively substitute fantasy for reason in stressful situations (Mp < Ma). Furthermore, CP patients did not exhibit a conceptual failure with discrimination and/or the inability to use concrete reasoning (INCOM), but they did show cognitive mishaps (Wsum6) with regard to: 1) use of distorted language and/or idiosyncratic modes of expression (DV); 2) indecisiveness and/or a defensive attempt to detach themselves from the task at hand (DR); and/or 3) strained effort or use of unconventional reasoning to justify an answer (ALOG).
Self-perception & interpersonal perception and behavior (Tables 2 & 3 )
Variables on self-perception assess self-image and self-involvement. The Vista response (V, less positive introspective behavior) was slightly higher than the expected value and significantly greater in CP patients than in the NA group (p < .05, Table 2 ). Morbid content (MOR, negative self-image in self-perception) in CP patients was also slightly higher than the expected value and significantly greater than in the NA group (p < .05, Table 2 ). The frequency of Pure H = 0 (less reality-based perception of self and others) was significantly greater in CP patients than in the NA group (p < .01, Table 3 ).
In summary, the self-perception data suggest that CP patients' introspection tends to focus less on their positive sides (V) and that their self-image is likely to be negative (MOR). Moreover, their perception of themselves and others might not always be based on reality (Pure H = 0).
Interpersonal perception and behavior data assess how a person perceives others and how they behave in various interpersonal situations. The frequency of a positive Coping Deficit Index (CDI > 3, social immaturity or ineptness) showed no inter-group difference between CP patients and the NA group. The Texture response (T, needs and openness to close emotional relations) was significantly lower in CP patients than in the NA group (p < .01, Table 2 ). The Good Human Representational response (GHR, good interpersonal behaviors and their effectiveness) was within the expected range, but was significantly lower in CP patients than in the NA group (p < .05, Table 2 ). The Poor Human Representational response (PHR, ineffective or maladaptive interpersonal behavior) was also within the expected range, but was significantly greater in CP patients than in the NA group (p < .05, Table 2 ). The Aggressive response (AG, aggressiveness or competitiveness in interpersonal exchanges) was slightly greater than the expected value and significantly higher in CP patients than in the NA group (p < .01, Table 2 ). The Personal response (PER, a way of reassuring oneself or warding off challenges from the examiner) was higher than the expected value and significantly greater in CP patients than in the NA group (p < .01, Table 2 ).
In summary, the interpersonal perception and behavior data indicate that CP patients are socially mature and originally possess adequate interpersonal skills (CDI). However, they are generally not open to close emotional interactions and can be apprehensive about them (T). They tended to perceive interpersonal exchanges as aggressive or competitive (AG) with little reality-based evidence (Pure H = 0), and tended to ward off challenges (PER), which led to less effective and/or maladaptive interpersonal behavior (GHR, PHR).
When pain becomes persistent, it is often accompanied by various aversive emotional and cognitive effects, which are likely to lead to a deterioration of private and social life. Rudy et al suggest that characterizing pain solely as sensory discomfort in the pain-specific part of the body is insufficient: emotional states and accompanying thoughts must be included, as they play a central role in clinically important pain control strategies [ 27 ]. The purpose of the present study was to assess the comprehensive psychological functions and states of Japanese CP patients using the Rorschach CS. Our findings showed that patients originally had adequate psychological resources, with capacities for control and stress tolerance (EA, AdjD) and adequate interpersonal skills (CDI). However, the results also suggested that our CP patients exhibited several psychological issues, including: 1) emotional issues, 2) cognitive dysfunction, and 3) defensive interpersonal relationships.
Emotional issues
The affective states of CP patients revealed that they experienced unusually high distress or discomfort (SumShading, FM + m < SumShading), including situational stress-related psychological helplessness (Y), self-destructive preoccupation (S-CON) and an implicit depressive mood (DEPI). Considered against the IASP definition of pain, our findings demonstrate that the pain experience is accompanied by various types of emotional distress in addition to pain-specific discomfort [ 4 ]. Whether negative emotions are processed as causes or consequences of pain remains controversial, but this can be clarified by conceptualizing both pain and emotion as multi-dimensional and sometimes overlapping processes with reciprocal influences upon each other [ 28 ]. All CP patients had suffered from pain for more than six months. Examining the functional relations between catastrophic thinking and disability in persistent pain, Sullivan et al found that the chronicity of pain is an important moderator of psychological vulnerability to pain-related disability: the longer pain persists, the more apparent the sense of helplessness becomes, and patients come to perceive themselves as unable to manage pain effectively [ 29 ]. As shown by the significant value of Y in CP patients, persistent pain often disrupts customary private and social life, and despondency and a sense of hopelessness become likely outcomes.
Perception of control has an important relationship with affective distress [ 30 ]. Patients who perceive their pain as an 'unexplainable mystery' may devalue their coping abilities and are less likely to rate their coping strategies as effective in controlling and reducing pain [ 31 ]. CP patients showed a significantly reduced proportion of the introversive coping style, a relatively high proportion of the extratensive style, and an almost identical proportion of the ambitent style. These results suggest that the larger number of CP patients with an extratensive style tend to rely predominantly upon feelings or external feedback for reassurance rather than considering issues carefully before making decisions. Consequently, the impact of emotion is likely to make their ideation more complex and illogical than in the introversive style, which leads people to trust their internal evaluations more than external feedback and to avoid being overly influenced by emotions [ 21 ]. The approximately half of CP patients exhibiting an ambitent style can use both thoughts and feelings flexibly according to the situation [ 32 ]; however, because this style is less consistent and less efficient, feelings may become overly influential in thinking, especially in unfamiliar situations [ 21 ]. While any illness can make people feel vulnerable and less in control than usual, most CP patients, exhibiting extratensive or ambitent coping styles, appear susceptible to abnormal pain-related situations and are thus rendered emotionally vulnerable. This would make their coping styles even less effective, resulting in a vicious circle.
Causal relationships between chronic pain and depression have been the subject of much debate. None of the CP patients in the present study met the criteria for a major depressive episode, although the frequency of a positive DEPI was significantly higher than in the NA group. Rudy et al found that when patients perceived persistent pain as far beyond their control, they were more likely to experience increased affective distress, and proposed that psychological mediators might be involved in the development of depression secondary to chronic pain [ 27 ]. A further study by Ohayon et al found that a painful physical condition persisting for longer than 6 months could contribute to the prolongation of a depressive episode, and recommended evaluating depression in patients with chronic pain [ 33 ].
As for emotional discharge, our CP patients showed excessive internalization or inhibition of emotions (C') and expressed emotions in an uncontrolled manner (FC, CF, CF + C > FC + 2). Their emotions were not well controlled and were expressed in extreme ways. This could be explained by the emotional states of CP patients, who showed significantly low L and high Blends and Col-Shading, indicating that they suffered from psychological complexity, such as confusion and ambivalent feelings, that they were unable to articulate properly. In addition, emotions involving diffuse bodily states are often chaotic and change rapidly, so the language available to describe emotional states tends to be vague or amorphous [ 28 ]. The high value of C' is likely to arise from several factors. First, our CP patients may have been so confused by emotional distress that they preferred to avoid dealing directly with their feelings, or did not trust their ability to control them. A second factor might be the patients' sense of insecurity about sharing their feelings with others or displaying them openly. However, a high incidence of C' is thought to relate to irritating feelings caused by the inhibition or excessive internalization of emotion [ 21 ], and this affect is likely to bring about adverse health outcomes [ 34 ]. Furthermore, even the suppression of unwanted or catastrophizing thoughts has been reported to heighten pain experiences [ 35 , 36 ]. These results suggest that consideration should be given to providing CP patients with the opportunity to be listened to and to express their feelings about their pain experiences within a safe and accepting relationship.
With regard to alexithymia, Acklin and Bernat proposed this trait in patients with low back pain on the basis of an ambitent coping style and seven Rorschach constriction variables: low R, low M, low WSumC, low FC, low Blends, high Lambda, and low EA [ 22 ]. The results of the present study fulfilled these conditions only with respect to the high frequency of the ambitent style and low FC, suggesting that our CP patients should not be regarded as alexithymic, although further investigation is needed to confirm this.
Cognitive issues
Three cognitive functions, collectively termed the 'cognitive triad', have been defined: 1) information processing, 2) cognitive mediation, and 3) ideation. These functions operate collectively, in a continuous or circular process, to form the basis of all deliberate and/or meaningful behaviors [ 21 ].
The information processing results demonstrated that our CP patients actively and effectively processed information (D), and that their scanning efficiency (Zd) and quality of processing activity (DQ+) were generally more than adequate compared with the NA group. However, CP patients were likely to focus more upon minute or unusual features of a new field of information (Dd) and tended to be inflexible in shifting their attention (PSV). This is reminiscent of chronic pain patients who are closely attentive to their pain and whose complaints can seem obstinate; such a focus on pain might amplify pain stimuli. Tracy et al reported that increased attention to pain increased pain intensity, whereas significant pain reduction was observed during distraction [ 37 ].
Cognitive mediation evaluates how an inputted image is identified or translated, focusing on the extent to which the person acknowledges external reality when making decisions, as opposed to being influenced by psychological status. The cognitive mediation of our CP patients was less appropriate (XA%, WDA%) than that of the NA group, but the probability of less conventional responses (P, Xu%) occurring in simple and/or precisely defined situations was minimal. However, their mediation showed moderate dysfunction (X-%, FQ-, S-), with deviations from expected values. The significant incidence of these minus responses in CP patients indicates that certain personal preoccupations caused the stimulus field to be disregarded and replaced by internal material projected into the response. Minus responses can be provoked by many factors, such as ideational sets, preoccupations and emotional elements, and all kinds of emotions can interfere with mediation [ 21 ]. As indicated by the high frequency of S-, negative feelings such as anger were present and likely promoted mediational dysfunction in CP patients.
Ideation focuses upon the characteristics of thinking: the quality and clarity of ideation, the frequency with which its various aspects manifest, and the manner in which it is used. The results from our CP patients suggested difficulties in conceptual thinking and issues of ideational clarity, as shown by the highly significant values of MOR and WSum6. MOR, a moderately pessimistic set, suggests that CP patients anticipate gloomy outcomes for their efforts, and this negative perception is likely to be reinforced by, and interact with, the sense of helplessness indicated by their emotional issues. In addition, when the highly significant value of ALOG, which represents a tendency toward unconventional or strained reasoning to justify an answer, is applied to pain and pain-related situations, it is likely to promote catastrophic pain thinking and thereby contribute to a more intense pain experience and increased emotional distress [ 15 ]. This may also relate to negative expectations concerning the patients' own abilities, which are likely to result in pain-related dysfunctional behaviors such as fear of movement [ 38 ]. CP patients also showed significantly elevated values of DV and DR, reflecting distorted language use and indecisiveness or poor judgment, which often indicate a type of cognitive carelessness. These ideational dysfunctions suggest that our CP patients formulated and expressed aspects of their thinking in a less clear and less sophisticated manner, which is likely to impede their ability to communicate clearly.
In a medical setting, the National Institute for Japanese Language found that most Japanese patients had experienced being puzzled by unfamiliar words in conversations with physicians [ 39 ]. In the doctor-patient relationship, the ability to communicate effectively is crucial for patients to obtain the true benefits of medical advances and for physicians to achieve professional satisfaction [ 40 ]. In consideration of such cognitive dysfunctions in CP patients, and as Pilowsky recommended, it would be helpful to give patients clear explanations of the mechanisms of and reactions to pain in readily understandable terminology and to provide ample opportunities for discussion and questions, which may need to be offered repeatedly [ 41 ].
Interpersonal relationships
First, it is useful to point out that our negative CDI results (Table 3 ) suggest that CP patients were not socially immature and that they had adequate interpersonal skills. However, our self-perception findings suggest that CP patients focused on their less positive aspects (MOR, V). A significantly high value of Y, reflecting a sense of helplessness, could contribute to lower self-efficacy, which is closely related to the sense of control over aversive stimulation. Arnstein et al. identified self-efficacy as a significant predictor of the extent to which patients with chronic pain become disabled and/or depressed [ 17 ].
Our self-perception data contribute to our understanding of interpersonal perceptions and behaviors. Significantly low T and high PER values suggest that CP patients tended to be overly concerned with personal space and that they were much more cautious or defensive about creating or maintaining close emotional ties with others. Significantly high AG values showed that CP patients tended to perceive interpersonal exchanges as aggressive or competitive without reality-based evidence. Thus, their interpersonal behavior might not always be effective and adaptive. These findings indicate that CP patients appeared to lack emotionally close relationships and to be isolated. This situation is likely to make chronic pain patients focus more upon pain and their misery, as Turk and Flor found in many chronic pain patients whose preoccupation with their own bodies would lead to increased awareness and overestimation of sensory information [ 6 ].
Rorschach CS data compiled from our CP patients were examined in terms of emotional, cognitive and interpersonal aspects, as well as their original psychological resources. Findings suggest that pain and pain-related situations were mostly viewed as major stressors in patients' lives that triggered a certain degree of emotional distress, cognitive dysfunction and maladaptive interpersonal relationships. Our results did not indicate psychopathology, a finding consistent with previous studies by Gatchel & Okifuji and Maruta [ 7 , 42 ]. Physicians have been reported to show a tendency to prioritize the assessment of physical damage or disease over emotional reactions or the psychological wellbeing of patients with chronic pain [ 28 ]. However, persistent pain is highly disconcerting for patients with little knowledge about physical functions or psychological approaches. Consequently, consideration of psychological assessment and some intervention as part of the stress management of pain is likely to be helpful for patients with chronic pain in order to maintain their daily lives as much as possible, regardless of pain [ 43 - 45 ].
Our present study has several limitations. The chronic pain patients investigated in this study represented one extreme of the total chronic pain patient population and should not be viewed as representative of the Japanese chronic pain population in general. Our patients had a long history of pain; consequently, Rorschach findings may be related to other dimensions of psychological adaptation to disease and health-related quality of life. Our study involved a female to male ratio of 2:1, so the psychological characteristics observed might be more reflective of female patients; further studies should be designed to take this into account. In the present study, comparisons were made to normative data from non-patient Japanese adults, and further investigations using control data are already underway. Other types of psychological assessment measures specific to pain could be added as a test battery in order to further our understanding of the more specific features of patients with chronic pain. Interactions between patients and medical professionals were not examined in this study, and future research will be needed to help the patient-doctor relationship become more productive and therapeutic.
A comprehensive series of psychological characteristics determined from Japanese patients with non-malignant chronic pain were examined and compared to non-patient Japanese adults using the Rorschach CS. Results showed that these patients demonstrated high emotional distress, moderate cognitive dysfunction, and ineffective interpersonal interactions. This was despite the fact that they originally exhibited adequate psychological resources, with capacities for control, stress tolerance and adequate interpersonal skills. In consideration of the multi-dimensional experience of pain, we recommend that medical professionals incorporate some degree of psychological intervention into conventional pain treatment regimens.
The increasing number of patients with chronic pain in Japan has become a major issue in terms of patients' quality of life, medical costs, and related social problems. Pain is a multi-dimensional experience with physiological, affective, cognitive, behavioral and social components, and is recommended to be managed via a combined bio-psycho-social approach. However, a biomedical approach is still the dominant method of pain treatment in Japan. The current study aimed to evaluate comprehensive psychological functions and processes in Japanese chronic pain patients.
Methods
The Rorschach Comprehensive System was administered to 49 in-patients with non-malignant chronic pain. Major variables and frequencies from the test were then compared to normative data from non-patient Japanese adults by way of the t-test and chi-square test.
Results
Patients exhibited high levels of emotional distress, with a sense of helplessness with regard to situational stress, confusion, and ambivalent feelings, and they managed these emotions in an inappropriate manner. Cognitive functioning showed moderate dysfunction at all stages. Information processing tended to focus upon minute features in an inflexible manner. Mediational dysfunction was likely to occur under unstable affective conditions. Ideation was marked by pessimistic and less effective thinking. Since patients exhibited negative self-perception, their interpersonal relationship skills tended to be ineffective. Originally, our patients displayed average psychological resources for control, stress tolerance, and social skills for interpersonal relationships. However, patient coping styles were either situation- or emotion-dependent, and patients were more likely to exhibit emotional instability influenced by external stimuli, resulting in increased vulnerability to pain.
Conclusions
Data gathered from the Rorschach test suggest that psychological approaches to supporting chronic pain patients are likely to be highly beneficial, and we thus recommend their incorporation into current pain treatments.
The authors declare that they have no competing interests.
Authors' contributions
KY conceptualized and designed the study, collected and analyzed the data, interpreted the results, and drafted the manuscript. KK and MF helped in drafting the manuscript. MY helped in the Rorschach analysis while AT and IB helped with the statistical analysis. HM, YM, TA and YN provided overall advice. All authors read and approved the final manuscript. | Acknowledgements
This study was funded by Grants-in-Aid for Scientific Research Nos. 14A-9 and 17A-3 from the Ministry of Health, Welfare, and Labor of Japan. We would like to express our gratitude to the physicians in charge for their cooperation in this study. Many thanks are also extended to Chiaki Shikata for help in data input, Hiroko Ochi for overall assistance, and all chronic pain patients who participated in this study in spite of their painful conditions. | CC BY | no | 2022-01-12 15:21:45 | Biopsychosoc Med. 2010 Nov 26; 4:20 | oa_package/d5/79/PMC3016376.tar.gz |
PMC3016377 | 21134279 | Background
Tuberculosis (TB) is the leading cause of mortality among people living with HIV worldwide [ 1 , 2 ]. Drug-resistant TB has emerged as an important global threat to public health. Although previously considered uncommon in high HIV prevalence settings, there has been a 3-4 fold increase in multidrug-resistant (MDR)-TB prevalence in southern Africa over the past decade [ 3 , 4 ]. In addition, extensively drug-resistant (XDR)-TB has been reported from all countries in southern Africa. MDR and XDR-TB are associated with a much higher mortality than drug-susceptible TB [ 5 ], especially among HIV co-infected persons [ 6 ].
Prompt diagnosis and treatment are essential to improve drug-resistant TB outcomes, but TB diagnosis in patients with HIV co-infection is challenging, particularly in resource-limited settings [ 6 ]. HIV-infected TB patients have higher rates of extrapulmonary disease, atypical clinical presentations, and normal chest radiographs [ 7 - 10 ]. With the emergence of MDR and XDR-TB in HIV-infected populations worldwide [ 3 ], it is therefore likely that there will be a consequent rise in extrapulmonary MDR and XDR-TB disease [ 11 ]. The diagnosis of drug-resistant TB requires isolation of an organism, or DNA in the case of molecular tests, thus vigorous efforts to obtain a specimen that may yield Mycobacterium tuberculosis are needed in settings with a high prevalence of drug resistance.
Mycobacteremia with drug-susceptible TB was described in reports from HIV-infected patients early in the HIV epidemic in the United States [ 12 , 13 ]. Yet the yield of blood cultures in detecting M. tuberculosis can vary from 2% to 64% depending on the population studied and the suspicion for extrapulmonary TB [ 14 - 19 ]. In addition, HIV-infected patients may be predisposed to other bacterial and fungal bloodstream infections that clinically mimic TB, leading to delays in diagnosis or over-treatment of TB. In sub-Saharan Africa, M. tuberculosis bacteremia has been documented in patients with blood stream infections from a referral hospital in Tanzania, in patients with cough from Botswana, and as a critical etiology of sepsis among HIV-infected patients from Uganda, but drug-susceptibility testing (DST) was not performed [ 20 - 22 ]. To date, no study has reported the yield of blood cultures for the detection of MDR and XDR-TB.
Over 650 patients with MDR and XDR-TB have been identified from the rural Tugela Ferry area of KwaZulu-Natal, South Africa, where HIV co-infection rates exceed 90% [ 5 , 23 ]. Mycobacterial blood cultures have been used routinely in Tugela Ferry since 2006 in an attempt to improve case detection of drug-resistant TB. We sought to quantify the yield of blood cultures for MDR and XDR-TB, compare the yield to that of sputum culture, and identify risk factors for M. tuberculosis bacteremia in this population in order to guide clinical practice and public health policy. | Methods
Setting
The Church of Scotland Hospital (COSH) in Tugela Ferry is a 355-bed facility serving a population of 200,000 Zulu people. The local incidence of TB is estimated at 1,100 per 100,000 population, with approximately 80% of TB cases being HIV co-infected [ 24 ]. Onsite diagnostics include smear microscopy for acid-fast bacilli of sputum and cerebrospinal fluid specimens. Sputum and other non-sputum fluid specimens requiring culture and DST are sent to the provincial TB referral laboratory in Durban (approximately 180 kilometers away) on a daily basis. Since June 2005, all TB suspects presenting to COSH are requested to give two 'spot' sputum specimens, one for onsite smear microscopy, and one for mycobacterial culture and DST.
Study design
We performed a retrospective, cross-sectional study of all drug-resistant TB suspects in whom at least one mycobacterial blood culture was sent from September 1 st , 2006 to December 31 st , 2008. Clinicians defined a drug-resistant TB suspect based on the presence of TB symptoms (e.g., cough, night sweats, weight loss) with one or more of the following additional criteria: advanced HIV/AIDS, a prior history of TB treatment, or persistent symptoms despite one month or more of drug-susceptible TB treatment. If a sputum culture was available, it was included for analysis if collected within two weeks before or after the date of collection of the blood culture.
Medical chart review was performed to obtain demographic, clinical and microbiological information. Specific data extracted included age, gender, HIV status, receipt and duration of antiretroviral treatment, CD4 cell count (cells/mm 3 ) prior to blood culture collection, history of TB treatment, TB treatment status at time of blood culture collection, and physician's comments of signs of extrapulmonary TB.
Definitions and outcome measures
Patients were categorized as being extrapulmonary TB suspects if a physician had documented specific extrapulmonary organ involvement that was suggestive of TB (e.g., pericardial effusion or abdominal lymphadenopathy on ultrasound) or if the physician documented a suspicion for extrapulmonary TB in the chart. MDR-TB was defined as resistance to at least isoniazid and rifampicin, while XDR-TB was defined as resistance to at least isoniazid, rifampicin, kanamycin and ofloxacin [ 25 ]. Susceptibility testing to other second-line TB drugs was not routinely done.
The primary outcome was the yield of blood cultures for drug-resistant TB, defined as the proportion of blood cultures that were positive for MDR or XDR-TB. Secondary outcomes included: 1) risk factors of M. tuberculosis bacteremia, and 2) comparison of blood culture to sputum culture for additive yield of blood in detection of M. tuberculosis and drug resistance.
Laboratory methods
All patients had 5 ml of blood collected and inoculated into mycobacterial blood culture bottles (BACTEC MycoF-lytic) and placed in a darkened storage container at room temperature prior to transport to the provincial referral laboratory. Blood culture bottles were cultured using the automated BACTEC 9240 system in which specimens are continuously monitored for growth up to 42 days [ 26 ]. All positive cultures were confirmed by niacin accumulation and nitrate reductase methods. DST was performed on all specimens positive for M. tuberculosis using the 1% proportional method [ 27 ] on Middlebrook 7H11 agar to: isoniazid (critical concentration, 0.2 μg/ml), rifampicin (1 μg/ml), ethambutol (7.5 μg/ml), ofloxacin (2 μg/ml), kanamycin (6 μg/ml) and streptomycin (2 μg/ml). Non-tuberculous mycobacteria (NTM) were not further speciated.
Sputum samples were refrigerated before and during transport to the provincial TB referral laboratory. Upon receipt, the specimen was digested and decontaminated with the N-acetyl-L-cysteine-sodium hydroxide method and smears were prepared for auramine staining. The remainder of the deposit was transferred for liquid culture in the automated BACTEC MGIT 960 system. DST was completed on all positive specimens after secondary inoculation on to Middlebrook 7H11 agar and using the 1% proportional method for the drugs as described with blood specimens.
Statistical analysis
Yields of M. tuberculosis detected in blood and sputum were calculated using simple frequencies and proportions. Demographic and clinical characteristics were compared with the chi-square statistic or the Mann-Whitney U test for non-parametric data. Bivariate and multivariate logistic regression were employed to determine risk factors for M. tuberculosis bacteremia. All tests for significance were 2-sided, with a p-value < 0.05 considered significant. For variables with >10% missing data, tests of interaction were performed when appropriate. The multivariate model included any variable with p-value < 0.1 in bivariate analysis and any pertinent clinical and demographic characteristics. All analyses were performed using SPSS Statistics 17.0 software.
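As an illustration of the group comparisons described here, the chi-square and Mann-Whitney U tests might be run as follows with scipy in Python (the authors used SPSS 17.0); all counts and CD4 values below are invented for the sketch and are not the study's data:

```python
from scipy import stats

# Hypothetical 2x2 contingency table (counts are illustrative only):
# rows = M. tuberculosis bacteremia yes/no, columns = characteristic yes/no
table = [[10, 31],
         [43, 46]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Mann-Whitney U test for non-parametric data (e.g. CD4 cell counts),
# again with invented values for the two groups
cd4_bacteremia = [28, 45, 100, 150, 190]
cd4_no_bacteremia = [35, 90, 110, 160, 200]
u_stat, p_u = stats.mannwhitneyu(cd4_bacteremia, cd4_no_bacteremia,
                                 alternative="two-sided")

print(f"chi-square p = {p_chi:.3f}, Mann-Whitney p = {p_u:.3f}")
```

A two-sided p-value below 0.05 from either test would be declared significant under the criterion stated above.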
Ethical considerations
The study was approved by the biomedical research ethics committees of the University of KwaZulu-Natal, Albert Einstein College of Medicine, and Yale University. | Results
One hundred thirty patients suspected of drug-resistant TB had mycobacterial blood cultures performed during the study period and were included for analysis. All patients had only one blood culture specimen. There were 73 males (56%); the median age was 31.5 years (interquartile range [IQR] 27-38) and 8 (6%) patients were less than 12 years of age (Table 1 ). HIV-infection was confirmed in 114 (88%) patients. The CD4 cell count was available in 63 (55%) HIV-infected patients, with a median cell count of 100 cells/mm 3 (IQR 28-190). Fifty-three (46%) HIV-infected patients were on antiretroviral therapy at the time of blood culture collection. The median duration of antiretroviral therapy for patients on treatment was 15.4 weeks (IQR 4.7-31.0).
Of the 130 patients, 88 (68%) had no prior history of TB, although 89 (69%) patients were failing drug-susceptible TB treatment at the time of blood culture collection. The median duration of TB treatment for these patients was 8.0 weeks (IQR 4.0-20.0). Forty-five (35%) patients were suspected to have extrapulmonary TB.
Overall yield of mycobacterial blood cultures
Of 130 blood culture specimens, M. tuberculosis was isolated in 41 (32%) (Table 2 ). Of the 41 blood cultures that yielded M. tuberculosis , 28 (68%) were drug-resistant TB. Specifically, 20 (49%) specimens were XDR-TB and 8 (20%) were MDR-TB (Table 2 ). Among the blood cultures positive for XDR-TB, 18 (90%) of 20 were resistant to all six drugs tested.
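A yield proportion such as 41/130 can also be reported with an exact (Clopper-Pearson) 95% confidence interval; the interval below is our own illustrative addition, not a figure from the paper, computed with scipy's `binomtest` (scipy ≥ 1.7):

```python
from scipy.stats import binomtest

positives, total = 41, 130  # blood cultures positive for M. tuberculosis
result = binomtest(positives, total)
ci = result.proportion_ci(confidence_level=0.95)  # Clopper-Pearson by default
print(f"yield = {positives / total:.1%}, 95% CI {ci.low:.1%}-{ci.high:.1%}")
```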
Of the remaining positive blood cultures, NTM were found in 3 (6%) specimens and Cryptococcus species in 3 (6%) specimens. Of the six study patients with Cryptococcus species or NTM detected in the blood, five were receiving first-line TB therapy at the time of blood culture collection, yet none of these patients had culture documentation of M. tuberculosis .
Risk factors for M. tuberculosis bacteremia
Patients suspected of extrapulmonary TB were more likely to have M. tuberculosis bacteremia, OR 2.1 [0.97-4.5; p = 0.06]; adjusted OR 2.3 [1.0-5.4; p = 0.05]. Among HIV-infected patients, those on antiretroviral therapy for any duration at the time of blood culture collection were significantly less likely to have M. tuberculosis bacteremia, OR 0.29 [95% CI, 0.12-0.71; p = 0.005]; adjusted OR 0.22 [0.08-0.58; p = 0.002] (Table 1 ). Though age was not a significant risk factor, M. tuberculosis was cultured from the blood of patients as young as 8 years (MDR-TB) and as old as 62 years (XDR-TB).
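Unadjusted odds ratios of the kind quoted here derive from a 2x2 table; a minimal sketch of the calculation with a Woolf (log-scale) 95% confidence interval follows. The counts are hypothetical, chosen only so the result lands in the neighborhood of the reported OR of 0.29, and are not the study's raw data:

```python
import math

# Hypothetical 2x2 table (not the study's raw counts):
#                  ART yes   ART no
# bacteremia yes     a=10      b=31
# bacteremia no      c=43      d=30
a, b, c, d = 10, 31, 43, 30

odds_ratio = (a * d) / (b * c)

# Woolf's method: standard error of the log odds ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}-{ci_high:.2f}]")
```

An interval lying entirely below 1.0, as here, corresponds to a protective association of the same direction as that reported for antiretroviral therapy.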
Comparison of blood to sputum cultures for MDR-TB and XDR-TB yield
Of the 41 patients with M. tuberculosis bacteremia, 23 also had a sputum sample collected for comparison (Table 3 ). In two patients, the sputum sample was negative but the blood culture revealed XDR-TB in one and MDR-TB in the other. In one further patient, DST was not completed on the sputum sample but the blood culture revealed XDR-TB. Among all patients in whom DST was completed on both the blood and sputum samples, the DST results were identical. Considering the 21 patients in whom the sputum culture was negative, DST was incomplete, or sputum was not collected, blood culture diagnosed drug-susceptible TB in 9 (43%) and MDR or XDR-TB in 12 (57%) (Table 3 ). Despite extrapulmonary TB suspects being at higher risk of M. tuberculosis bacteremia, among patients with both a blood and a sputum culture positive for M. tuberculosis , only 24% were suspected of extrapulmonary TB.
We found that, among a predominantly HIV-infected population of patients suspected of drug-resistant TB, MDR-TB and XDR-TB were isolated in nearly 70% of all positive M. tuberculosis blood cultures. Importantly, among patients in whom a sputum culture result was unavailable, the blood culture was the only means of diagnosis, and in over half of these it revealed MDR or XDR-TB. Bacteremia with XDR-TB was more common than with MDR-TB, reflecting community trends from sputum diagnosis in Tugela Ferry [ 28 ].
Current guidelines suggest the use of mycobacterial blood cultures may be beneficial in suspected extrapulmonary TB, but do not address the use in all HIV-infected persons or those suspected of drug-resistant TB [ 29 ]. A recent comprehensive screening study of HIV-infected ambulatory persons from Southeast Asia found only a 5% incremental yield of blood cultures for TB diagnosis among those with two negative sputum smears; DST results were not provided [ 19 ]. In contrast, the results from our study population are likely reflective of advanced immunosuppression, prolonged TB illness prior to blood culture collection, and the high pretest suspicion of drug-resistant TB. The additive yield of blood cultures is likely to vary in other regions with differing disease epidemiology. Nonetheless, these results suggest that M. tuberculosis bacteremia is likely to be present in drug-resistant TB suspects at higher rates than clinically suspected. Thus, we feel that these results are generalizable to other populations in sub-Saharan Africa where TB/HIV co-infection rates are high and the incidence of drug-resistant TB may be increasing.
The bulk of the additive yield for MDR and XDR-TB in blood compared to sputum cultures was found in patients who did not have a sputum sample collected. The most common reason for not collecting a sputum sample in this hospital is the patient's inability to expectorate due to an absence of cough or marked physical disability; however, due to the retrospective nature of this study, we cannot confirm the reasoning for an individual patient. The standard of practice in other settings is to collect two or more sputum samples for microscopy and/or TB culture as a means of increasing yield. Further prospective study is warranted to determine how multiple sputum samples would affect the comparative yield of blood culture in similar populations with advanced HIV. Blood is an easily accessible fluid and carries the additional advantage of not requiring cold storage for transport. Additionally, the cost of analyzing a mycobacterial blood culture with the National Health Services Laboratory in South Africa is no greater than that of MGIT analysis of a sputum specimen.
In this study, the majority of patients with M. tuberculosis bacteremia were not suspected to have extrapulmonary TB. Mycobacterial culture of lymph node aspirates and pleural fluid was available to clinicians during the study period [ 11 ], yet an aspirate was performed in only three patients, and all results were concordant with the blood culture results. Indeed, only a minority of patients with both positive blood and sputum cultures for M. tuberculosis were suspected of extrapulmonary TB. Thus, our findings suggest that many patients with pulmonary TB in this setting may also harbor unrecognized M. tuberculosis bacteremia. Detection of otherwise occult M. tuberculosis bacteremia, regardless of DST, in a patient without suspected extrapulmonary TB may prompt a more exhaustive search for an extrapulmonary focus, which could alter treatment and carry important implications for monitoring and clinical outcome.
Patients on antiretroviral medication at the time of blood culture collection were significantly less likely to have M. tuberculosis bacteremia. Earlier studies of M. tuberculosis bacteremia in similar populations in Africa were carried out prior to the widespread availability of antiretrovirals, and therefore this association could not have been documented until now [ 20 , 21 ]. Our findings lend further support to the growing body of evidence for early initiation of antiretrovirals in the treatment of TB and HIV co-infected patients [ 30 ]. Notably, the median duration of antiretroviral use in our study population was 15 weeks, a reasonable timeframe to present with immune reconstitution inflammatory syndrome (IRIS). We suspect that some patients on antiretroviral therapy who were culture-negative for TB may have actually presented with IRIS, a condition which may share signs and symptoms with TB and drug-resistant TB; however, complete follow-up data were not available for confirmation. Interestingly, there was no difference in CD4 count between patients with and without M. tuberculosis bacteremia. One explanation is that, in accordance with national guidelines, the CD4 count is checked only twice annually; thus the recorded CD4 count may be falsely low for patients who initiated antiretroviral therapy within the prior six months. Alternatively, in some patients early restoration of lymphocyte function may precede restoration of total lymphocyte count.
One of the primary limitations of the study, given its retrospective design, was that the decision to collect a blood culture was dependent upon the attending clinician; therefore, additional patients suspected of drug-resistant TB may not have had blood cultures sent and were not included in the study. Additional factors that influenced the decision to pursue the investigation of M. tuberculosis bacteremia may not have been captured. It is also possible that blood cultures were preferentially pursued in patients in whom a diagnosis was not as readily made by sputum analysis. Therefore, only a prospective study with simultaneous blood collection and rigorous collection of multiple sputum samples in all drug-resistant TB suspects would allow determination of the true incremental yield in this setting.
In summary, mycobacterial blood cultures diagnosed MDR and XDR-TB in a substantial number of predominantly HIV-infected patients suspected of drug-resistant TB in rural South Africa. Bacteremia with drug-susceptible and drug-resistant TB was not restricted to patients suspected of extrapulmonary TB, as many patients with sputum culture-confirmed pulmonary TB also had M. tuberculosis bacteremia. The adjunctive use of mycobacterial blood cultures should be considered in all patients suspected of drug-resistant TB, particularly those unable to expectorate. In many regions of Africa and the developing world, culture and DST are not routinely performed for the diagnosis of TB, despite the inferior sensitivity of routine sputum microscopy and the inability of microscopy to detect drug-resistant TB. Expanded access to culture and DST of sputum in South Africa has been projected to save 47,955 lives and avert 7,721 new MDR-TB cases over the next 10 years [ 31 ]. Our finding that a significant proportion of drug-resistant TB suspects had MDR-TB or XDR-TB bacteremia underscores the need for more widespread use of culture and DST for both sputum and blood specimens.
The yield of mycobacterial blood cultures for multidrug-resistant (MDR) and extensively drug-resistant tuberculosis (XDR-TB) among drug-resistant TB suspects has not been described.
Methods
We performed a retrospective, cross-sectional analysis to determine the yield of mycobacterial blood cultures for MDR-TB and XDR-TB among patients suspected of drug-resistant TB from rural South Africa. Secondary outcomes included risk factors of Mycobacterium tuberculosis bacteremia and the additive yield of mycobacterial blood cultures compared to sputum culture.
Results
From 9/1/2006 to 12/31/2008, 130 patients suspected of drug-resistant TB were evaluated with mycobacterial blood culture. Each patient had a single mycobacterial blood culture with 41 (32%) positive for M. tuberculosis , of which 20 (49%) were XDR-TB and 8 (20%) were MDR-TB. One hundred fourteen (88%) patients were known to be HIV-infected. Patients on antiretroviral therapy were significantly less likely to have a positive blood culture for M. tuberculosis (p = 0.002). The diagnosis of MDR or XDR-TB was made by blood culture alone in 12 patients.
Conclusions
Mycobacterial blood cultures provided an additive yield for diagnosis of drug-resistant TB in patients with HIV from rural South Africa. The use of mycobacterial blood cultures should be considered in all patients suspected of drug-resistant TB in similar settings. | Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SKH conceived of the study design, performed data analysis and drafted the manuscript. TAT, NRG and SS participated in study design, data analysis and critical revision of the manuscript. FJE, APM and PB participated in study design, acquired patient related data, and provided critical revision of the manuscript. YC and LR performed specimen processing, culture and drug-susceptibility testing. UL and GF participated in study design and critical revision of the manuscript. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2334/10/344/prepub | Acknowledgements
SKH was supported by the Burroughs Wellcome Fund and the American Society of Tropical Medicine and Hygiene during the study period. NRG, NSS and GF are supported by the Doris Duke Charitable Foundation, Howard Hughes Medical Institute and the US President's Emergency Plan for AIDS Relief; GF is also supported by The Gilead Foundation and the Irene Diamond Fund. Funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | CC BY | no | 2022-01-12 15:21:45 | BMC Infect Dis. 2010 Dec 6; 10:344 | oa_package/d9/e9/PMC3016377.tar.gz |
PMC3016378 | 21143955 | Background
Tuberculosis (TB) is still a major cause of morbidity and mortality throughout the world, with an estimated global incidence of 8.8 million new cases and a total of 1.6 million deaths [ 1 ]. Individuals infected with human immunodeficiency virus (HIV) have an increased risk of progression to active TB following Mycobacterium tuberculosis infection of 5-10% per year [ 2 ]. The detection and treatment of active TB is crucial to control the global TB epidemic. Therefore, targeting and treating infected persons at high risk of disease reactivation is a key strategy for effective control of the spread of TB.
Nevertheless, the diagnosis of latent tuberculosis infection (LTBI) is complicated by the lack of a gold standard test. The tuberculin skin test (TST) has been used since the last century for diagnosing LTBI. TST measures cell-mediated immunity in the form of a delayed-type hypersensitivity response to the purified protein derivative (PPD) [ 3 ]. The main drawback of the TST is its lack of specificity: PPD contains more than 200 antigens that are shared with other mycobacteria, such as the Bacille Calmette-Guérin (BCG) vaccine strain and many non-tuberculous mycobacteria (NTM); consequently, false positive TST results can occur due to antigenic cross-reactivity [ 4 ]. In addition, the sensitivity of TST is reduced in HIV-positive patients because of false negative results due to cutaneous anergy [ 5 ].
New in vitro T-cell based assays for the diagnosis of LTBI are now available. These assays measure the IFN-gamma (IFN-γ) released by sensitized T cells after specific M. tuberculosis antigen stimulation. These specific antigens are early secreted antigenic target 6 (ESAT-6) and culture filtrate protein 10 (CFP-10), which are encoded in the region of difference 1 (RD1) segment of M. tuberculosis genome [ 6 ], and TB7.7, encoded in RD11 segment [ 7 ].
At present, two commercially available IFN-γ T-cell based assays exist: QuantiFERON-TB GOLD In Tube (QFN-G-IT; Cellestis Limited, Carnegie, Australia) and T-SPOT.TB (Oxford Immunotec Limited, Abingdon, UK). Both assays are approved by the U.S. Food and Drug Administration (FDA) as aids for diagnosing LTBI. The QFN-G-IT test stimulates whole blood with ESAT-6, CFP-10 and TB7.7 in the same tube, and measures the concentration of IFN-γ in supernatants with an enzyme-linked immunosorbent assay (ELISA). The T-SPOT.TB assay, in contrast, stimulates isolated peripheral blood mononuclear cells (PBMCs) with ESAT-6 and CFP-10 separately, and detects the number of IFN-γ-producing T cells by means of an enzyme-linked immunospot assay (ELISPOT).
Promising results have been published with IFN-γ assays in the diagnosis of LTBI [ 8 - 13 ] and active TB [ 14 - 18 ], and in recent years several studies have evaluated IFN-γ tests in HIV-infected populations [ 19 - 23 ]. However, only a few studies have compared the performance of the T-SPOT.TB and QFN assays in the same HIV population [ 19 , 24 - 27 ]. More studies comparing T-SPOT.TB and QFN-G-IT with the TST are therefore required to better understand the role of IFN-γ assays in the diagnosis of LTBI in this population, analysing the impact of the degree of immunosuppression on antigen-specific T-cell responses.
In the present study, we therefore compared the utility of T-SPOT.TB, QFN-G-IT and TST for the diagnosis of LTBI in the same HIV population, and evaluated the influence of CD4 cell count on the performance of the different tests.

Methods
Study setting and patient recruitment
From January 2006 through November 2009, HIV-positive adults attending the Hospital Universitari Germans Trias i Pujol and the Hospital Universitari Mútua Terrassa were enrolled for ongoing studies of LTBI. The estimated community incidence of TB is 23.2/100,000 inhabitants, and the prevalence of HIV among active TB patients is 8.5% [ 28 ].
Patients were recruited consecutively and enrolled during the course of routine examinations. Each participant gave written informed consent before blood sampling. The Ethics Committees of Hospital Universitari Germans Trias i Pujol and Hospital Universitari Mútua Terrassa approved the study. Information on the following variables was collected with a detailed questionnaire: age, gender, BCG vaccination, prior TST (date and result), TB contact, history of prior active TB, chest radiography and other medical conditions. Only participants with BCG scars were considered BCG vaccinated. In LTBI patients, active TB was excluded by clinical and radiological examination; none of the patients included in this study had active TB. Patients were tested with the TST during the routine examination, and blood sampling for the IFN-γ assays was performed before TST application. Patients with a previously documented positive TST were excluded.
HIV testing and lymphocyte count
HIV testing was performed in all subjects: blood samples were taken for HIV serology (ELISA and Western blot). CD4 and CD8 cell counts were determined on blood samples from all HIV-positive patients.
Tuberculin skin test
Two tuberculin units of PPD RT23 Tween 80 (Statens Serum Institut, Copenhagen, Denmark) were administered intradermally using the Mantoux method. Induration was measured 48-72 h after application, and the size of the induration was interpreted by trained personnel. According to our national guidelines, an induration equal to or greater than 5 mm was considered positive [ 29 ].
QuantiFERON-TB GOLD In Tube
Three one-millilitre tubes (nil control, positive control [phytohaemagglutinin, PHA] and TB-specific antigens) were filled by venepuncture from each patient. The tubes were incubated overnight at 37°C, and after incubation plasma was separated by centrifugation. The production of IFN-γ in the whole-blood supernatant was determined by ELISA.
Raw optical densities were interpreted using the software provided by the manufacturer. The value of the nil control was subtracted from the positive control and the antigen-stimulated samples. The cut-off value for a positive test was at least 0.35 IU/mL of IFN-γ in the sample after stimulation with the specific antigens, regardless of the result of the positive control. The result of the test was considered indeterminate if the antigen-stimulated sample was negative and the value of the positive control was less than 0.5 IU/mL after subtraction of the value of the nil control, and/or if the nil (negative) control was higher than 8.0 IU/mL.
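These interpretation rules amount to a small decision procedure. The sketch below restates them in code; the function and variable names are illustrative, and the rules encode only the cut-offs quoted in this section, not the manufacturer's full interpretation algorithm.

```python
def interpret_qfn_g_it(nil, tb_antigen, mitogen):
    """Classify a QFN-G-IT result from IFN-gamma concentrations (IU/mL).

    Simplified to the cut-offs described in the text; the commercial
    software applies the manufacturer's complete algorithm.
    """
    if nil > 8.0:
        return "indeterminate"    # excessive background in the nil control
    if tb_antigen - nil >= 0.35:
        return "positive"         # regardless of the positive control result
    if mitogen - nil < 0.5:
        return "indeterminate"    # failed PHA positive control
    return "negative"
```

For example, under these rules a nil of 0.1, an antigen tube of 1.2 and a mitogen tube of 6.0 IU/mL would be read as positive.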
T-SPOT.TB
Eight millilitres of blood were drawn for the isolation of PBMCs in Vacutainer CPT tubes (Becton Dickinson Diagnostics, Franklin Lakes, NJ). The isolated PBMCs were washed twice by centrifugation with RPMI medium (Invitrogen, Auckland, N.Z.), and then resuspended in AIM-V medium (Invitrogen, Auckland, N.Z.). Finally, viable cells were counted with an inverted microscope using the trypan blue method.
IFN-γ-producing T cells were detected by an enzyme-linked immunospot assay (ELISPOT), performed according to the manufacturer's instructions. For each subject, four wells precoated with a monoclonal antibody to IFN-γ were used: in the first well, cells were incubated with medium alone (negative control), in the second with PHA (positive control), in the third with ESAT-6 (Panel A), and in the last with CFP-10 (Panel B). The assay requires a total of 250,000 cells per well.
Spots were scored using an automated AID ELISPOT plate reader (Lector AID Elispots, Autoimmun Diagnostika GmbH, Germany), and all readings were also verified manually. Each spot represents the footprint of a cytokine-secreting cell, and the number of spots obtained provides a measure of the frequency of M. tuberculosis-sensitized cells. Subjects were considered positive if there was a positive response to one or both of the antigen panels. Test wells were scored as positive if they contained at least six spot-forming cells more than the nil control well and this number was at least twice the number of the nil control well. The result was considered indeterminate if the response to both antigen panels was negative and the number of spots in the positive control well was less than 20. In addition, the result was also considered indeterminate if the number of spots in the negative control was greater than 10.
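As with QFN-G-IT, the scoring rules form a decision procedure. The sketch below encodes only the rules quoted in this section (the names are illustrative and the kit insert remains the authoritative source):

```python
def interpret_tspot(nil, esat6, cfp10, pha):
    """Classify a T-SPOT.TB result from spot counts per well,
    following the simplified scoring rules described in the text."""
    if nil > 10:
        return "indeterminate"    # too many spots in the negative control

    def panel_positive(panel):
        # >= 6 spots above the nil well AND at least twice the nil count
        return (panel - nil) >= 6 and panel >= 2 * nil

    if panel_positive(esat6) or panel_positive(cfp10):
        return "positive"         # a response to one panel suffices
    if pha < 20:
        return "indeterminate"    # failed PHA positive control
    return "negative"
```

For instance, a nil well with 2 spots and an ESAT-6 well with 10 spots (8 above nil, and more than twice the nil count) would score the subject positive.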
Statistical methods
Concordance between the tests was assessed using Cohen's kappa (κ) coefficient: κ values below 0.40 indicate weak agreement, values of 0.41-0.60 good agreement, and values above 0.60 strong agreement. Comparisons of the number of spots and the amount of IFN-γ released were performed with the Mann-Whitney U test. Differences were considered significant when P values were less than 0.05. All analyses were made with SPSS statistical software for Windows (SPSS version 15.0; SPSS Inc., Chicago, IL, USA). Graphical representation is based on GraphPad Prism version 4 (GraphPad Software, Inc., San Diego, CA).

Results
Patient characteristics
We studied 75 HIV-positive patients who were screened for LTBI. Mean CD4 and CD8 cell counts ± standard deviation were 461.29 ± 307.49 cells/μl and 899.33 ± 649.94 cells/μl respectively. The main demographic characteristics of patients included in the study are summarized in Table 1 .
Diagnostic tests performance
The overall numbers of positive results among the 75 HIV-positive individuals screened for LTBI were 7/75 (9.3%), 5/75 (6.7%) and 9/75 (12%) with T-SPOT.TB, QFN-G-IT and TST, respectively. There were no significant differences in the percentage of positive results between the three tests. We obtained two indeterminate results, both by T-SPOT.TB and QFN-G-IT, due to an insufficient response to PHA and to the M. tuberculosis specific antigens; in these two cases, the TST was negative. Global agreement between T-SPOT.TB and QFN-G-IT was 89% (κ = 0.275; standard error [SE] = 0.184). The overall agreement of T-SPOT.TB and QFN-G-IT with TST was 80.8% (κ = 0.019; SE = 0.123) and 89% (κ = 0.373; SE = 0.173), respectively.
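As an aside, the reported agreement between the IFN-γ tests can be reproduced from the counts in this paper. Assuming the two indeterminate subjects are excluded (leaving 73 paired results) and, as stated in the Discussion, that only 2 subjects were positive by both IFN-γ tests, a plain re-implementation of Cohen's κ (illustrative code, not the SPSS routine used for the analysis) recovers κ ≈ 0.275:

```python
def cohens_kappa(table):
    """Cohen's kappa from a 2x2 table: table[i][j] = number of subjects
    with result i on test A and j on test B (0 = negative, 1 = positive)."""
    n = sum(sum(row) for row in table)
    observed = (table[0][0] + table[1][1]) / n
    a_pos = (table[1][0] + table[1][1]) / n   # marginal: test A positive
    b_pos = (table[0][1] + table[1][1]) / n   # marginal: test B positive
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (observed - expected) / (1 - expected)

# T-SPOT.TB (rows) vs QFN-G-IT (columns): 63 both negative, 3 QFN-G-IT only,
# 5 T-SPOT.TB only, 2 both positive -> kappa close to the reported 0.275.
kappa = cohens_kappa([[63, 3], [5, 2]])
```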
In non-BCG-vaccinated patients, QFN-G-IT and TST were positive in 5/67 (7.5%) of cases, and T-SPOT.TB in 7/67 (10.4%). In BCG-vaccinated patients both IFN-γ assays were negative; in contrast, the TST was positive in 4/8 (50%) of cases. The difference between the TST results in non-BCG-vaccinated and BCG-vaccinated patients was statistically significant (p = 0.006). Furthermore, the differences in the number of positive results between the TST and both IFN-γ assays in BCG-vaccinated patients were also significant (95% confidence interval = 3-97%, p = 0.046); however, the confidence interval is very wide given the small number of patients. The number of positive results and the agreement between the assays according to BCG vaccination status are shown in Tables 2 and 3.
Influence of CD4 cell count
We analysed the possible impact of CD4 cell count on T-cell responses by stratifying patients into two groups: 20 patients with < 200 CD4 cells/μl and 55 patients with > 200 CD4 cells/μl. The number of T cells responding to specific M. tuberculosis antigens detected by T-SPOT.TB and the amount of IFN-γ released in QFN-G-IT were lower in HIV-positive patients with CD4 cell counts < 200 cells/μl than in those with > 200 cells/μl, although the differences were not statistically significant, as shown in Figure 1. In addition, we studied the PHA responses in QFN-G-IT according to CD4 cell count, and the differences between the < 200 and > 200 cells/μl groups were nearly significant (Figure 2). It was not possible to assess the number of responder T cells after PHA stimulation in T-SPOT.TB because of saturation in the positive control well.
The proportions of positive results obtained by T-SPOT.TB and QFN-G-IT were lower in HIV patients with a CD4 cell count < 200 cells/μl than in those with > 200 cells/μl. In patients with a CD4 cell count below 200 cells/μl, we obtained only one (5%) positive result, with T-SPOT.TB, corresponding to a patient with 39 CD4 cells/μl; QFN-G-IT and TST were negative in all cases. In contrast, the percentages of positive results in patients with a CD4 cell count above 200 cells/μl were 10.9% (6/55), 9.1% (5/55) and 16.4% (9/55) with T-SPOT.TB, QFN-G-IT and TST, respectively. Differences in positive results by CD4 cell count were not significant for any test (T-SPOT.TB, QFN-G-IT and TST: p = 0.313, p = 0.123 and p = 0.055, respectively).
The concentration of IFN-γ released in QFN-G-IT and the numbers of responder ESAT-6 and CFP-10 specific T cells detected by T-SPOT.TB were not correlated with the number of circulating CD4 T cells (Spearman's rho [SR] = 0.221, p = 0.056; SR = 0.028, p = 0.813; and SR = 0.013, p = 0.910, respectively), as shown in Figure 3.

Discussion
Several authors have studied the responses in T-cell based assays in HIV-infected individuals for the detection of LTBI and active TB. The majority of these studies have demonstrated that IFN-γ assays yield a higher number of positive results than the TST and show poor agreement with it [ 19 - 21 , 23 , 30 - 34 ]. Nevertheless, only a few studies have performed a direct comparison of T-SPOT.TB, QFN-G-IT and TST to ascertain LTBI in HIV-positive individuals [ 24 - 27 ].
Talati et al [ 25 ] compared T-SPOT.TB, QFN-G-IT and TST in 336 HIV-infected persons. They found a low prevalence of LTBI with the three diagnostic tests: 7 subjects (2.1%) had a positive TST, 9 (2.7%) a positive QFN-G-IT and 14 (4.2%) a positive T-SPOT.TB; overall, agreement between the three tests was poor. Richeldi et al [ 24 ] performed T-SPOT.TB, QFN-G-IT and TST simultaneously in 116 chronically HIV-infected individuals; they identified a low percentage of individuals as having LTBI and found only slight agreement between T-SPOT.TB and TST or QFN-G-IT. Rivas et al [ 27 ] compared the TST and both IFN-γ tests in 139 drug and alcohol abusers, 31% of whom were HIV-infected. The authors did not find statistically significant associations between HIV serostatus and the in vitro tests or the TST. However, the percentages of positive results obtained by T-SPOT.TB and QFN-G-IT in HIV-positive patients were higher for patients with a CD4 count > 350 cells/μl than < 350 cells/μl (28.6% and 39.3% versus 20% and 10%, respectively).
The evaluation of both T-cell based assays and the TST for the immunodiagnosis of LTBI in patients with HIV infection has also recently been described in a high TB-incidence country. Leidl et al [ 26 ] enrolled 109 individuals in Uganda with a new diagnosis of HIV-1 infection, and observed global frequencies of positive results for TST, T-SPOT.TB and QFN-G-IT of 47.2%, 54% and 67.9%, respectively. Although there are some differences between the numbers of positive results in these previous studies, our results are consistent with those reported, in that the commercial IFN-γ tests yielded similar numbers of positive results. Furthermore, in line with these studies, we observed that the concordances between the three diagnostic tests were poor.
Intriguingly, we observed poor agreement between the two IFN-γ tests: of the 7 cases with a positive T-SPOT.TB result and the 5 cases with a positive QFN-G-IT result, both tests were positive simultaneously in only 2 cases. Diagnostic agreement between tests was moderate (κ = 0.40-0.65). Similar results were described by Richeldi et al [ 24 ], who observed that agreement decreased in the HIV group when T-SPOT.TB was compared with either TST (κ = 0.16) or QFT-IT (κ = 0.19). In addition, they reported highly discordant results (clearly negative with one IFN-γ assay and clearly positive with the other) in all groups of immunosuppressed patients. These discordant results need to be researched further.
The impact of HIV infection on the immune response in LTBI, and vice versa, is poorly understood. Like other authors [ 24 ], we observed discordant results between the TST and the IFN-γ tests, and also between the two IFN-γ tests (T-SPOT.TB and QFN-G-IT). The reasons for such results are not entirely clear. In some cases they may be explained by BCG vaccination or by previous NTM infection (discordances between TST and IFN-γ tests); in others, they may be due to the different methodologies (TST vs QFN-G-IT vs T-SPOT.TB). Indeed, the discordant results may reflect different underlying immune responses [ 35 ], but this hypothesis has not yet been fully explored. In any case, given that a gold standard for LTBI does not exist, it is not possible to know, in the case of a discordant result, which test gives the true result. Should the TST and IFN-γ tests therefore be recommended for all HIV patients? We believe that greater diagnostic effort is warranted in patients with a high risk of developing active TB if infected: all available tests should probably be used in severely immunosuppressed patients, whereas this may be less necessary in HIV patients with a preserved CD4 count. Therefore, the use of IFN-γ assays in combination with the TST could be beneficial for diagnosing LTBI in severely immunosuppressed HIV patients, although our results do not provide definitive data to support this recommendation.
We detected only two indeterminate results (2.7%) with either IFN-γ assay. This finding is consistent with the majority of studies, which show low rates of indeterminate results in HIV-infected patients without active TB [ 22 ]. One of the indeterminate cases in our study corresponded to a patient with a low circulating CD4 cell count (103 cells/μl). Low CD4 cell counts have been associated with indeterminate IFN-γ assay results in the diagnosis of LTBI and active TB [ 23 , 25 , 30 , 31 , 33 , 36 , 37 ]. However, in our study, 19 of 20 patients with < 200 CD4 cells/μl obtained a valid result. This differs from other studies, in which high percentages of indeterminate results with commercial IFN-γ tests or in-house ELISPOT assays were found in HIV-infected patients with low CD4 cell counts in LTBI screening [ 23 , 25 ]. Studies of HIV-infected patients with active TB generally report higher proportions of indeterminate results [ 36 - 38 ].
Raby et al [ 37 ] reported that low CD4 cell counts were associated with both indeterminate and negative QFN-G-IT results, whereas low CD8 cell counts (median 369 cells/μl) were associated only with indeterminate results. They proposed that both CD4+ and CD8+ cells can respond to PHA, whereas the MHC class II-restricted overlapping peptides used in QFN-G-IT stimulate only CD4+ cells. Consequently, patients with high/normal CD8 cell counts (median 999 cells/μl) and low CD4+ counts produced a positive response to PHA but a negative response to the M. tuberculosis specific antigens. In our study, only one of the two indeterminate results corresponded to a patient with a CD4 cell count < 200 cells/μl and a CD8 cell count < 600 cells/μl (103 and 568 cells/μl, respectively).
Regarding the differences in IFN-γ assay results between HIV-positive individuals with CD4 cell counts < 200 cells/μl and those with counts > 200 cells/μl, all tests performed poorly in patients with CD4 cell counts < 200 cells/μl. The fact that T-SPOT.TB gave a positive result in one individual with a CD4 cell count < 200 cells/μl (39 cells/μl) while QFN-G-IT gave none is not sufficient to draw conclusions. Converse et al [ 39 ] assessed the effect of HIV immunosuppression on QFN-G and found that as immunosuppression increased, QFN-G sensitivity decreased: only 30% (10/23) of HIV-seropositive persons with < 200 CD4 cells/μl were positive.
Some studies have evaluated IFN-γ assays in HIV patients with active TB and the influence of CD4 cell count. Aabye et al [ 36 ] reported that QFN-G-IT sensitivity in HIV-positive patients with active TB increased with higher CD4 cell counts. In addition, some studies have sought to improve the diagnosis of active TB by assessing the ratio of the quantitative ESAT-6 and CFP-10 response to the CD4 T cell count [ 40 , 41 ]. No HIV-infected patient diagnosed with active TB was included in our study.
Furthermore, we observed that the numbers of ESAT-6 and CFP-10 specific T cells in T-SPOT.TB and the concentration of IFN-γ in QFN-G-IT remained constant among patients with different levels of immunosuppression. Our results differ from those obtained in a recent study by Leidl et al [ 26 ], in which the number of CD4 T cells correlated positively with the IFN-γ released in QFN-G-IT (Spearman's rho = 0.38; p = 0.0001) but not with the numbers of ESAT-6 and CFP-10 specific T cells in T-SPOT.TB (Spearman's rho = 0.03; p = 0.77 and Spearman's rho = 0.13; p = 0.21, respectively).
Regarding BCG vaccination status, our results show that T-cell based assays are less influenced by BCG vaccination than the TST. In addition, we found negative IFN-γ assay results in 2 non-BCG-vaccinated HIV-infected individuals with a positive TST. A possible explanation for these discordant results is previous NTM sensitization; in our experience, the use of IFN-γ tests could reduce the false diagnosis of LTBI in patients with NTM sensitization [ 35 , 42 ]. In our study it was not possible to test NTM sensitins in vitro because no further PBMCs from these patients were stored. However, the safety of withholding treatment from BCG-vaccinated and non-BCG-vaccinated patients with a positive TST and negative T-cell based assays is not sufficiently clear in this population, especially in patients with more severe immunosuppression.
The main limitation of our study must be acknowledged: although we compared T-SPOT.TB, QFN-G-IT and TST in the same population of HIV-infected individuals, the number of patients, especially those with CD4 cell counts < 200 cells/μl, is limited. Furthermore, we did not detect any significant differences in the overall percentages of positive results between the three tests. Nevertheless, the results reported here add valuable data on the utility of IFN-γ tests in the diagnosis of LTBI in HIV-infected patients and on the influence of CD4 cell count on the results. More studies comparing T-SPOT.TB and QFN-G-IT with the TST are required to determine the role of IFN-γ assays in the diagnosis of LTBI in HIV-positive patients.

Conclusions
In conclusion, IFN-γ tests have the advantage over the TST of being less influenced by BCG vaccination and are consequently more specific. The use of IFN-γ assays in combination with the TST could be a helpful method for diagnosing LTBI in the HIV population. Our study suggests that IFN-γ assay results are influenced by the level of immunosuppression. Further studies are required to understand the meaning of the discrepancies between the two IFN-γ tests.

Abstract

Background
Individuals infected with human immunodeficiency virus (HIV) have an increased risk of progression to active tuberculosis following Mycobacterium tuberculosis infection. The objective of this study was to determine IFN-γ responses for the detection of latent tuberculosis infection (LTBI) with QuantiFERON-TB GOLD In Tube (QFN-G-IT) and T-SPOT.TB in HIV patients, and to evaluate the influence of CD4 cell count on test performance.
Methods
We studied 75 HIV patients enrolled for ongoing studies of LTBI with T-SPOT.TB, QFN-G-IT and TST. Mean CD4 cell count ± standard deviation was 461.29 ± 307.49 cells/μl. Eight patients had a BCG scar.
Results
T-SPOT.TB, QFN-G-IT and TST were positive in 7 (9.3%), 5 (6.7%) and 9 (12%) cases, respectively. Global agreement between QFN-G-IT and T-SPOT.TB was 89% (κ = 0.275). The overall agreement of T-SPOT.TB and QFN-G-IT with TST was 80.8% (κ = 0.019) and 89% (κ = 0.373), respectively. We found negative IFN-γ assay results in 2 non-BCG-vaccinated HIV-infected individuals with a positive TST. In non-BCG-vaccinated patients, QFN-G-IT and TST were positive in 5 cases (7.5%) and T-SPOT.TB in 7 (10.4%). In contrast, in BCG-vaccinated patients, only the TST was positive, in 4/8 (50%) of cases. The differences in the number of positive results between the TST and both IFN-γ assays in BCG-vaccinated patients were significant (95% CI 3-97%, p = 0.046), although the confidence interval is very wide given the small number of patients. In patients with CD4 counts < 200 cells/μl, we obtained only one (5%) positive result, with T-SPOT.TB, whereas QFN-G-IT and TST were negative in all cases. In contrast, the percentages of positive results in patients with CD4 counts > 200 cells/μl were 10.9% (6/55), 9.1% (5/55) and 16.4% (9/55) with T-SPOT.TB, QFN-G-IT and TST, respectively.
Conclusions
IFN-γ tests have the advantage over the TST of being less influenced by BCG vaccination and are consequently more specific. Although the number of patients with advanced immunosuppression was limited, our study suggests that IFN-γ assay results are influenced by the level of immunosuppression. The use of IFN-γ assays could be a helpful method for diagnosing LTBI in the HIV population.

Competing interests
None of the investigators have any financial interest in or a financial conflict with the subject matter or materials discussed in this manuscript. Neither the Scientific Societies, nor Inverness Medical Ibérica SAU (Barcelona, Spain), Cellestis (Carnegie, Australia) or Oxford Immunotec (Abingdon, UK), had a role in the study design, conduct, collection, management, analysis, or interpretation of the data, or preparation, review, or approval of the manuscript.
Authors' contributions
Conceived and designed the experiments: IL, XM-L, VA and JD. Performed the experiments: IL, XM-L, RF, AL, JP, CT, JL, CP and EC. Analyzed the data: IL, XM-L, RF, AL, JP, CT, JL, CP, EC, VA and JD. Contributed reagents/materials/analysis tools: XM-L, CT, JL, CP and JD. Wrote the paper: IL, XM-L and JD. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2334/10/348/prepub

Acknowledgements
We thank the nursing staff of the Hospital Universitari Mútua Terrassa; and Carmen Ramil, Lucia Haba, Miguel Ángel Cuesta, and Miguel Pérez from Hospital Universitari Germans Trias i Pujol for technical assistance. JD is a researcher funded by the Miguel Servet programme of the Instituto de Salud Carlos III (Spain). We thank Oriol Martos for his kind technical assistance.

BMC Infect Dis. 2010 Dec 10; 10:348 (CC BY)
PMC3016379 / PMID 21171982

Background
Hepatitis B viral infections continue to be a serious global health problem and a cause of concern for public health authorities [ 1 ]. The virus is estimated to have infected two billion people around the world, of whom approximately 360 million are chronically infected. These chronically infected individuals are at increased risk of developing serious illness, which may progress to liver cirrhosis and hepatocellular carcinoma (HCC); together, these conditions account for an estimated 500,000-700,000 deaths worldwide annually [ 1 ]. A number of studies conducted in different countries have confirmed that universal immunisation of infants and/or adolescents is the most efficient method of reducing the disease burden of hepatitis B infection [ 2 - 5 ]. In view of the morbidity and mortality associated with viral hepatitis worldwide, the World Health Organization (WHO) recommended in 1992 that vaccination against hepatitis B be included in the national immunisation schedules of all countries by 1997 [ 1 ].
The three-dose schedule has been the standard hepatitis B immunisation schedule of choice. In countries with an adolescent hepatitis B immunisation programme, however, completion rates for the three-dose schedule appear to be lower than expected in certain target populations [ 6 , 7 ]. In addition, compared with a two-dose schedule, the three-dose schedule puts a heavier burden on the healthcare system in terms of the implementation and organisation of vaccination programmes. Hence, there has been growing interest among public healthcare authorities and vaccine manufacturers in identifying a suitable two-dose immunisation schedule that is more convenient for use in adolescents, to ensure higher completion rates [ 7 - 9 ].
A two-dose schedule (0, 6 months) of a hepatitis B vaccine (Engerix-B™ Adult formulation, GlaxoSmithKline [GSK] Biologicals, Belgium) has been approved for use in European adolescents and is also one of the recommended schedules for vaccination of adolescents aged 11-15 years in Australia [ 10 ], the United States and Canada. In addition, a three-dose schedule of the Paediatric formulation of this vaccine is recommended for use in children and young adults aged < 20 years. A previous study in children and adolescents demonstrated equivalence between a two-dose primary vaccination schedule of the Adult formulation and a three-dose schedule of the Paediatric formulation of this vaccine in terms of seroprotection against hepatitis B infection [ 8 , 11 , 12 ]. Considering that the risk of acquiring hepatitis B infection is higher during early adulthood owing to various lifestyle-related exposures [ 13 ], it is critical to assess the long-term persistence of vaccine-induced immunity in young adults who were vaccinated against hepatitis B in childhood.
The present study is a long-term follow-up of a primary study that confirmed the non-inferiority of a two-dose schedule of the Adult formulation of this hepatitis B vaccine versus a three-dose schedule of the Paediatric formulation with respect to anti-HBs seroprotection rates and anti-HBs antibody geometric mean concentrations (GMCs) at Month 7 [ 4 ]. This follow-up study evaluated the five-year persistence of antibodies against hepatitis B surface antigen (anti-HBs) in adolescents who received the two-dose regimen compared with those who received the three-dose regimen, and the ability of these subjects to mount an anamnestic response to a challenge dose of hepatitis B vaccine given five years after completion of primary immunisation.

Methods
Study design and subjects
In 2001, healthy adolescents aged between 11 and 15 years were enrolled into a single-blind, randomised, multi-country study conducted in Belgium, Australia and Ukraine. The subjects (randomisation blocking scheme 2:1) received either two doses of Engerix-B™ Adult formulation (20 μg of recombinant hepatitis B surface antigen [HBsAg], thiomersal-free formulation) following a 0, 6 months schedule [Group HBV_2D] or three doses of Engerix-B™ Paediatric formulation (10 μg of recombinant HBsAg, preservative-free formulation) following a 0, 1, 6 months schedule [Group HBV_3D]. Group HBV_2D additionally received an injection of physiological saline as placebo at the second vaccination time point (Month 1) to maintain blinding. The vaccines were administered as deep intramuscular injections (needle length: 25 mm; gauge: 23) in the deltoid region of the arm [ 4 ].
These subjects were then followed up for the next five years (until the Year 5 time point) with pre-defined annual visits to evaluate the persistence of anti-HBs antibodies induced by the two schedules of the hepatitis B vaccine [NCT00343915]. Subjects who had completed the primary vaccination course and met the eligibility criteria for the challenge phase were administered a challenge dose of hepatitis B vaccine (10 μg of recombinant HBsAg, preservative-free formulation) 72-78 months after primary vaccination, and were evaluated one month later for immune memory to the HBs antigen [NCT00524576]. Subjects were excluded from the challenge phase if they had used any investigational product within 30 days preceding the hepatitis B challenge dose, had received or planned to receive any vaccines unforeseen by the protocol within 30 days preceding or following the challenge dose, or had received an additional dose of hepatitis B vaccine between the primary and challenge phases; subjects were also excluded if they had a confirmed or suspected immunosuppressive or immunodeficient condition. Pregnant or lactating female subjects were also excluded.
The study was conducted in accordance with the Good Clinical Practice (GCP) guidelines and the Declaration of Helsinki. Two of the four centres were eliminated from the evaluation of the anamnestic response to the challenge dose: the investigator at one study centre was not confident that their team would be able to recruit a sufficient number of subjects and therefore did not participate in this phase of the study, while the subjects at the other study centre were excluded from the primary analysis due to GCP non-compliance.
The study protocol was approved by the independent ethics committees of the Children's Hospital at Westmead, Antwerp University Hospital (UZA Wilrijkstraat), the Hospitalo-Facultaire de l'UCL and the State Entreprise Centre of Immunobiological Preparation; written informed consent was obtained from the parents/guardians of adolescent subjects, and from subjects themselves once they were above 18 years of age, before conducting any study-related procedure.
Serological assessment
Anti-HBs antibody concentrations were measured in the serum samples collected at the first two follow-up visits (at Years 2 and 3 time points) using a commercial enzyme immunoassay (AUSAB EIA/Abbott; cut-off: ≥3.3 mIU/mL). For the subsequent visits (at Years 4 and 5 time points) and the challenge visits, an in-house quantitative enzyme-linked immunosorbent assay (cut-off: ≥3.3 mIU/mL) was used; this assay was equivalent to the previously available commercial assay and was fully validated by calibrating against the first International Reference Preparation for anti-hepatitis B immunoglobulin [ 14 ]. All evaluations were done at the Central laboratory, GSK Biologicals, Belgium, except for the last visit (at Year 5 and challenge visits), for which the evaluations were conducted at the CEVAC laboratory, Ghent University and Hospital, Ghent, Belgium.
The percentages of subjects with anti-HBs antibody concentrations ≥10 mIU/mL, GMCs, and GMC evolution at all follow-up time points were tabulated with 95% CIs.
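The article does not state which interval method was used for the tabulated 95% CIs; exact (Clopper-Pearson) intervals are a common choice for seroprotection rates of this kind. A stdlib-only Python sketch, offered as an illustration of that assumed method rather than the study's actual procedure:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion,
    found by bisection on the binomial tail probabilities."""
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # interval halves each step, ~1e-18 precision
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: largest p with P(X >= k | p) <= alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
    # upper limit: the p at which P(X <= k | p) falls to alpha/2
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper
```

For example, `clopper_pearson(8, 10)` gives roughly (0.44, 0.97); the per-group denominators behind the published intervals are not restated in this section, so no attempt is made to reproduce them.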
Assessment of safety
Safety assessment of the challenge dose included prospective reporting of solicited local and general adverse events and serious adverse events (SAEs). The adverse events were graded on a three-point scale, with those adverse events that hampered normal daily activities being graded as Grade 3 symptoms.
Statistical analyses
No separate sample size calculations were conducted for the long-term follow-up phase of the study. All subjects who participated in the primary study were invited to participate in the subsequent long-term follow-up time points, subject to them meeting the inclusion and exclusion criteria.
The primary analyses of immunogenicity at the long-term follow-up time points (all four centres) were performed on the long-term ATP cohort for immunogenicity, while for the challenge phase, the analyses were performed on the ATP cohort for immunogenicity. Assessments of safety for the challenge phase were performed on the total vaccinated cohort (TVC).
The long-term ATP cohort for analyses of immunogenicity included those subjects who were part of the ATP cohort for immunogenicity analyses in the primary study, with available results from the long-term follow-up time points and without any protocol violations. The ATP immunogenicity cohort for the challenge phase included those subjects who were protocol-compliant and for whom post-challenge dose data were available.
An anamnestic response to the challenge dose was defined taking into consideration the pre-challenge dose serostatus of subjects; for seropositive subjects, an anamnestic response was defined as a four-fold increase in anti-HBs antibody concentration, and for seronegative subjects, as an anti-HBs antibody concentration ≥10 mIU/mL post-challenge.
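The definition reads directly as a classification rule. A minimal Python sketch (the function name and the "four-fold or greater" reading of the increase criterion are illustrative assumptions, not protocol wording):

```python
def anamnestic_response(pre, post, seropositive_cutoff=3.3):
    """Classify an anamnestic response from pre-/post-challenge anti-HBs
    concentrations (mIU/mL), according to pre-challenge serostatus.

    Seropositive (pre >= cutoff): response = four-fold (or greater) rise.
    Seronegative (pre < cutoff):  response = post-challenge >= 10 mIU/mL.
    """
    if pre >= seropositive_cutoff:
        return post >= 4 * pre
    return post >= 10.0
```

For example, a seropositive subject going from 50 to 250 mIU/mL (five-fold) qualifies, while 50 to 150 mIU/mL (three-fold) does not.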
GMCs were calculated by taking the anti-log of the mean of log-transformed anti-HBs antibody concentration values.
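As a sketch, the GMC calculation in code (base-10 logs are shown; any base gives the same result as long as the anti-log matches):

```python
import math

def gmc(concentrations):
    """Geometric mean concentration: anti-log of the mean of the
    log-transformed concentration values."""
    mean_log = sum(math.log10(c) for c in concentrations) / len(concentrations)
    return 10 ** mean_log

# e.g. gmc([10, 1000]) -> 100.0
```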
All statistical analyses were performed using Statistical Analysis System (SAS) version 9.1 and StatXact-7 on SAS. | Results
Study population
The five-year follow-up period concluded in January 2008, and a total of 234 subjects were available for this follow-up visit. The challenge phase, which concluded in May 2008, included a total of 144 subjects. The number of subjects who participated at each time point during the long-term follow-up and the challenge phase is presented, along with the reasons for non-participation of other subjects, in Figure 1 .
The demographic characteristics in both groups were similar at the time of recruitment for the primary vaccination phase, at the Year 5 time point as well as at the time of the challenge dose. In addition, there was no difference in demographic characteristics across the centres in the three countries, at study start. Five years after the primary immunisation, the overall mean age was 18.5 ± 1.44 years (range: 16-21 years) and 51.7% of subjects were females; all subjects except two (99.1%) were of Caucasian origin. At the time of the challenge dose, mean age was 19.5 ± 1.30 years (range: 17-22 years); 50.7% of subjects were females and all subjects except three (97.9%) were of Caucasian origin. The exclusion of study centres in the challenge phase did not have any impact on the demographic profile of the population, as compared to the primary phase where 50.3% of subjects were females and all except 11 subjects (97.1%) were of Caucasian origin.
Immunogenicity
Antibody persistence
For the long-term follow-up period, the primary analyses of immunogenicity (long-term ATP cohort for immunogenicity) were conducted on subjects from all centres, while for the challenge phase, the primary analyses of immunogenicity and safety were performed on subjects from two out of the four centres (ATP cohort for immunogenicity).
Five years after the primary immunisation, 79.5% (95% confidence interval [CI]: 71.7 - 86.1) of subjects in the HBV_2D group and 91.4% (95% CI: 82.3 - 96.8) of subjects in the HBV_3D group had anti-HBs antibody concentrations ≥10 mIU/mL (overlapping 95% confidence intervals) (Figure 2 ). The anti-HBs antibody GMCs (calculated on seropositive subjects) at all follow-up time points appeared to be higher in subjects in the HBV_3D group compared to those in the HBV_2D group (non-overlapping 95% confidence intervals). The rate of decrease of antibody concentrations was similar in both groups, as shown by the parallel slopes of the GMC kinetic curve (Figure 3A ).
Immune memory
One month after the challenge dose, all subjects (100%) in both groups had anti-HBs antibody concentrations ≥10 mIU/mL and a similar proportion of subjects in both groups had anti-HBs antibody concentration ≥100 mIU/mL (HBV_2D: 94.3% [95% CI: 84.3 - 98.8]; HBV_3D: 95.2% [95% CI: 76.2 - 99.9]).
All subjects (100%) in both groups mounted an anamnestic response to the challenge dose of hepatitis B vaccine irrespective of their pre-challenge dose serostatus. All subjects who had diminishing levels of anti-HBs antibodies before the challenge dose (<3.3 mIU/mL or 3.3-10 mIU/mL [pooled: seven and six subjects, respectively]) showed anti-HBs antibody concentrations ≥10 mIU/mL after the challenge dose. In comparison, subjects who had robust pre-challenge anti-HBs antibody concentrations (≥10 mIU/mL and ≥100 mIU/mL [pooled: 35 and 26 subjects, respectively]) continued to have similarly high anti-HBs antibody levels post-challenge dose (Table 1 ). In both groups, there was a large increase in the GMCs one month post-challenge dose (HBV_2D: 6214.1 mIU/mL [108-fold increase, approximately]; HBV_3D: 16564.3 mIU/mL [95-fold increase, approximately]) (Figure 3B ).
As a post-hoc analysis, to ensure that the subjects participating in the challenge phase of the study were representative of the initial population enrolled in the primary study, the post-primary immune response of subjects included in the ATP cohort for immunogenicity in the challenge phase was compared to that of excluded subjects. The responses were comparable irrespective of whether subjects received the two-dose or three-dose schedule (the overall percentage of subjects with anti-HBs antibody concentration ≥10 mIU/mL was 97.1% [95% CI: 90.1 - 99.7] in participants and 97.2% [95% CI: 94.5 - 98.8] in non-participants; anti-HBs antibody GMCs were 3037.2 mIU/mL and 3939.8 mIU/mL, respectively).
Safety and reactogenicity
The challenge dose was generally well-tolerated. Pain at the site of injection and fatigue were the most frequently reported solicited local and general symptoms, respectively (33.8% of subjects reported each). None of the subjects reported local symptoms of Grade 3 intensity, and one subject (1.3%) reported a general symptom, fatigue, of Grade 3 intensity. Five subjects reported unsolicited symptoms that were considered by the investigator to be vaccine-related. These symptoms were vision impairment (diagnosed to be transient and due to fatigue), injection site paraesthesia, myalgia, allergic dermatitis and rash (one subject each). One subject reported a Grade 3 unsolicited symptom (myalgia) that was considered by the investigator to be vaccine-related. All these adverse events had resolved by the end of the study. No SAEs were reported following the challenge dose. | Discussion
The primary study had established that the immunogenicity of a two-dose regimen of the Adult formulation of hepatitis B vaccine was non-inferior to that of a three-dose regimen of the Paediatric formulation. Overall, both regimens had comparable safety profiles [ 4 ]. In order to address concerns regarding the possibility of waning immune memory against the hepatitis B antigen over time, it was essential to evaluate the duration of persistent immunity following the two-dose hepatitis B primary immunisation.
Studies assessing the immunogenicity of two- and three-dose primary vaccination against hepatitis B in adolescents have reported seroprotection rates between 93.4% and 99.5% and anti-HBs antibody GMCs up to 4155 mIU/mL [ 12 , 13 , 15 ]. Data on the long-term persistence of anti-HBs antibodies in adolescents following a three-dose schedule as obtained from two published long-term follow-up studies had established that 94.1% of adolescents retained anti-HBs antibody concentrations ≥10 mIU/mL for at least five years, while 91.2% of adolescents retained anti-HBs antibody concentrations ≥10 mIU/mL up to 10 years after completion of the primary vaccination schedule [ 13 , 16 ]. The findings from the present study are in line with these previous long-term follow-up studies. In this study, the anti-HBs antibodies persisted for at least five years after primary vaccination, irrespective of whether the subjects received the two- or three-dose schedules. At the end of the long-term follow-up period, 79.5% and 91.4% subjects in the HBV_2D and HBV_3D groups, respectively showed anti-HBs antibody concentrations ≥10 mIU/mL.
The anti-HBs antibody GMC evolution observed in the present study is similar to observations from previous studies [ 2 , 11 ], where the GMCs declined rapidly in the first year after primary vaccination, followed by a more gradual decrease over the subsequent years. The anti-HBs antibody GMC observed in the HBV_3D group was higher than that observed in the HBV_2D group at all follow-up time points. However, the evolution of anti-HBs antibody GMC observed throughout the follow-up period was similar in both groups. Of note, the fold increase in anti-HBs antibody GMC following the challenge dose was slightly higher in the HBV_2D group than in the HBV_3D group (108-fold and 95-fold, respectively). Thus, it is evident that the subjects in both groups, irrespective of their anti-HBs antibody levels prior to the challenge dose, had sufficient immune memory to mount an effective anamnestic response to the challenge dose administered five years after completion of the primary vaccination course.
These results are consistent with previous long-term studies with two- and three-dose schedules of the hepatitis B study vaccine which have reported that five to ten years after primary vaccination, between 81.0% and 99% of children and adults had anti-HBs antibody concentrations ≥10 mIU/mL [ 17 - 19 ]. In addition, the fact that all subjects in the present study could mount an anamnestic response to the challenge dose indicates strong immune memory against the hepatitis B vaccine antigen.
A potential weakness of this study is that the sample size calculation was based on the objective of the primary study (to compare the immune response induced by the Adult and Paediatric formulations of the hepatitis B study vaccine following a two-dose or three-dose primary vaccination course) and hence did not account for the attrition of subjects over a period of five years. However, the population at the start of the study and the population followed up were comparable in terms of anti-HBs response to primary vaccination and demographic characteristics, as evident from the post-hoc analysis. Therefore there was no bias in the selection of the final study cohort.
The observations from this study are in line with previous reports that the decrease in anti-HBs antibody concentrations to even undetectable levels does not necessarily indicate loss of protection in the long-term and that immunological memory can outlast the loss of antibodies [ 1 , 20 ]. The data from two separate studies in infants and adolescents that evaluated persistence of anti-HBs antibodies five and ten years after primary vaccination have further established these observations [ 13 , 21 ]. The fact that vaccinees with undetectable or waning levels of anti-HBs antibodies mounted an anamnestic response to the challenge dose indicates that there is currently no evidence that a booster dose of hepatitis B vaccine is required after a successful primary vaccination [ 13 , 21 ].
A two-dose schedule of GSK Biologicals' Engerix-B™ Adult (HBsAg content: 20 μg), which has a good safety and immunogenicity profile and is generally well-tolerated, is therefore a suitable alternative to the standard three-dose schedule of the Paediatric formulation (for adolescents aged 11+) and may facilitate higher immunisation completion rates [ 21 - 24 ] with the reduction in the required number of injections and clinical visits. In addition, a catch-up regimen of a two-dose schedule in older adolescents susceptible to the disease and in whom compliance with a three-dose Paediatric dosing schedule is in doubt, may also be used in order to improve population-based immunity [ 21 ]. | Conclusions
The two-dose schedule of the Adult formulation of hepatitis B vaccine when administered to adolescents induced persistence of detectable anti-HBs antibodies for at least five years after completion of the primary vaccination schedule. The strong anamnestic response following the challenge dose regardless of the priming schedules provides the evidence of strong immunological memory for at least five years following vaccination. | Background
The standard three-dose schedule of hepatitis B vaccines is frequently not completed, especially in adolescents. A primary study has confirmed the equivalence of a two-dose schedule of an Adult formulation of hepatitis B vaccine [Group HBV_2D] to a three-dose schedule of a Paediatric formulation in adolescents (11-15 years) [Group HBV_3D]. This follow-up study evaluated the persistence of antibodies against the hepatitis B surface antigen (anti-HBs) and of immune memory for five years after completion of primary vaccination.
Methods
A total of 234 subjects returned at the Year 5 time point, of whom 144 subjects received a challenge dose of hepatitis B vaccine. Blood samples were collected yearly and pre- and post-challenge dose to assess anti-HBs antibody concentrations.
Results
At the end of five years, 79.5% (95% confidence interval [CI]: 71.7 - 86.1) and 91.4% (95% CI: 82.3 - 96.8) of subjects who received the two-dose and three-dose schedules, respectively, had anti-HBs antibody concentrations ≥10 mIU/mL. Post-challenge dose, all subjects had anti-HBs antibody concentrations ≥10 mIU/mL and >94% of subjects had anti-HBs antibody concentrations ≥100 mIU/mL. All subjects mounted a rapid anamnestic response to the challenge dose. Overall, the challenge dose was well-tolerated.
Conclusion
The two-dose schedule of hepatitis B vaccine confers long-term immunogenicity and shows evidence of immune memory for at least five years following vaccination.
Trial registration
Clinical Trials NCT00343915, NCT00524576 | List of Abbreviations
HCC: Hepatocellular carcinoma; WHO: World Health Organization; GSK: GlaxoSmithKline; anti-HBs: Antibodies against hepatitis B surface antigens; CEVAC: Centre for Evaluation of Vaccines; SAE: Serious adverse events; ATP: According to protocol; TVC: Total vaccinated cohort; SAS: Statistical Analysis System.
Competing interests
PVD acts as chief and principal investigator for clinical trials conducted on behalf of the University of Antwerp, for which the University obtains research grants from vaccine manufacturers; speaker's fees for presentations on vaccines are paid directly to an educational fund held by the University of Antwerp. AM, IM and AK declare to have no conflict of interest. RB declares that occasionally, organisations such as CSL, Roche, Sanofi Pasteur, GSK and Wyeth Lederle have provided funding to RB to attend and present at scientific meetings; RB also has received financial support from various organisations to conduct research - any funding received is directed to a research account at The Children's Hospital at Westmead and is not personally accepted by RB. SK, AB, SN, ML and JJ are currently employed at GSK Biologicals; AB, SN, ML and JJ also have stock ownership at GSK Biologicals.
Authors' contributions
PVD participated in the design of the study, acquisition of the data and interpretation of the data; AM, IM, AK, RB, SK, AB, SN, ML and JJ have been involved in the acquisition of data and interpretation of the data. SK performed the statistical analysis. All authors were involved in critical review and have commented on the draft manuscripts; and have read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2334/10/357/prepub | Acknowledgements
The authors would like to acknowledge all families for their valued participation in this study, Elizabeth Clarke and Laura Rost (both research nurses from The Children's Hospital, Australia), study nurses Annelies Van Winckel, Annemie Bruynseels, Aline Bontenackel and Serge Broodhaers, along with the study physicians: Dr Froukje Kafeja (from Vaccine & Infectious Disease Institute, Centre for the Evaluation of Vaccination, Antwerpen, Belgium) and Dr. Marie Van Der Wielen (previously employed by the University of Antwerp and currently employed at GSK Biologicals); laboratory personnel at CEVAC laboratories; Ghent University and Hospital, Ghent, Belgium for serological analyses. The authors also thank Priya D' Crasta for statistical support, Avishek Pal for medical writing and Manjula K for publication co-ordination during the preparation of this manuscript (all employed by GSK Biologicals).
Trademark Statement
Engerix-B is a trademark of the GlaxoSmithKline group of companies.
Role of Funding Source
GlaxoSmithKline Biologicals was the funding source and was involved in all stages of the study conduct and analysis. GlaxoSmithKline Biologicals was also responsible for costs associated with the development and the publishing of the present manuscript. | CC BY | no | 2022-01-12 15:21:45 | BMC Infect Dis. 2010 Dec 20; 10:357 | oa_package/0a/bb/PMC3016379.tar.gz |
PMC3016380 | 21172014 | Background
Obesity rates are high among employed adults, have shown consistent increase over the past few decades and vary by occupational group [ 1 ]. Most adults spend 8 to 12 hours a day at work, which likely affects other domains in their lives such as self-care behavior and physical health [ 2 , 3 ]. It is therefore important to determine the aspects of employment that influence body weight and weight-related behaviors. The exact mechanisms that underlie the relationship between work and obesity are unclear, but may include physical and social aspects of the work environment and structural aspects of work, such as the number of hours worked per week. Given the potentially significant impact of work on obesity, understanding the nature of this relationship could inform the development of effective prevention and treatment interventions.
Previous research has shown inconsistent associations between the number of hours worked per week and weight, dietary intake, food habits, and physical activity, with associations often differing by gender. Much of the literature suggests that longer work hours are associated with higher Body Mass Index (BMI) [ 4 - 8 ], poorer dietary habits [ 9 - 12 ] and reduced leisure time physical activity [ 13 - 16 ], with associations occurring often among men and not women [ 5 - 7 , 13 ]. However not all research supports these findings; several studies report no association between hours worked per week and weight, food choice, or physical activity [ 3 , 7 , 17 - 19 ] or report associations only among women [ 8 , 9 ]. More research is needed to clarify the relationship between work hours, weight, and weight-related behaviors [ 11 , 20 ] and the potential gender differences in these associations [ 21 ]. In addition, few studies have examined these relationships within single occupational groups, which may be important in understanding work-related factors that impact weight in occupations with high obesity prevalence. Further, understanding the importance of work-related factors, such as work hours, may assist in the design of worksite interventions and policies addressing obesity [ 1 , 22 ].
The purpose of the current study was to examine associations between work hours, food intake, physical activity and perception of ability to eat healthy at work in a sample of transit workers, composed primarily of bus operators. In a study of major U.S. occupational groups, individuals employed as motor vehicle operators (which includes bus drivers) had the highest prevalence of obesity, with estimates at 31.7% for men and 31.0% for women [ 1 ]. In addition to high rates of obesity, transit workers suffer from stressful conditions of work. Along with being a largely sedentary job [ 23 , 24 ], structural characteristics such as shift patterns and work hours pose challenges for consistent engagement in healthy weight management behaviors [ 25 ]. Long and irregular hours are common, creating difficulties for family meals [ 23 ], constraining time available for meal preparation and exercise [ 22 ], and potentially limiting access to healthy foods [ 26 ]. While previous research has examined associations between long work hours and outcomes such as back and neck pain [ 27 ], occupational fatigue [ 28 , 29 ], and psychological health [ 30 ] in transportation workers, little to no research has examined associations with weight and weight-related behaviors in this occupational group. The present study examined associations between work hours and behavioral variables among men and women, as prior research has identified gender differences between work hours, health behaviors and obesity [ 7 , 9 , 31 ]. It was hypothesized that work hours would be associated positively with BMI, poorer dietary habits, lower physical activity, and greater perceived difficulty of eating healthy at work.
Sample and procedures
Data for this study were collected as part of a worksite environmental obesity prevention intervention among Metro Transit workers (the Route H study). Details of the study design are published elsewhere [ 32 , 33 ]. The study was a group-randomized trial conducted with four transit garages. Intervention components targeted the garage food and physical activity environment and included increasing availability of healthy foods at the worksite, improvements in the fitness facilities and increasing the availability of group-based programs to promote healthful eating and physical activity behaviors. Measurements were collected at baseline and after the 18-month intervention ended. Baseline measurements only were analyzed for the present study.
All employees at the four garages were invited to complete survey instruments and have height and weight measured by research staff. The average individual participation rate across the garages was 78% (69% to 84%). The sample for the present study was comprised of 1086 workers (854 men; 232 women) who completed the baseline survey and answered the work hours question. Participants received a $20 incentive for completing the survey and height and weight measurements.
This study was approved by the University of Minnesota Institutional Research Board Human Subjects Protection Program.
Measures
Work Hours
Work hours were measured using a single item that queried the number of hours worked at Metro Transit per week on average. Response options were 0 - 10 hours; >10 - 19 hours; 20 - 29 hours; 30 - 39 hours; 40 - 49 hours; or ≥ 50 hours per week . For analysis purposes, categories were collapsed into < 40 hours , 40 - 49 hours , and ≥ 50 hours per week.
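The collapsing of the six survey response options into three analysis categories amounts to a simple recode; a sketch (the string labels are illustrative, not the survey's coding scheme):

```python
# Map each of the six survey response options onto the three
# analysis categories described above.
COLLAPSE = {
    "0-10": "<40", ">10-19": "<40", "20-29": "<40", "30-39": "<40",
    "40-49": "40-49",
    ">=50": ">=50",
}

def work_hours_category(response):
    """Recode a raw survey response into its analysis category."""
    return COLLAPSE[response]
```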
Body Mass Index (BMI)
Body weight was measured in street clothing using an electronic, calibrated scale by trained research staff. Height was measured using a portable stadiometer. Two separate measurements were taken and the average of the two values were used to calculate BMI for analysis purposes. BMI was calculated as weight (kg)/height (m 2 ).
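The BMI computation from the duplicate measurements can be sketched as follows (function name is illustrative):

```python
def bmi_from_duplicates(weights_kg, heights_m):
    """Average the two readings of weight and height, then compute
    BMI = weight (kg) / height (m) squared."""
    weight = sum(weights_kg) / len(weights_kg)
    height = sum(heights_m) / len(heights_m)
    return weight / height ** 2

# e.g. bmi_from_duplicates([90.0, 90.4], [1.70, 1.70]) -> ~31.2
```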
Food Intake
Food intake was measured using a self-report food frequency questionnaire. The food frequency instrument measured a subset of foods targeted by the intervention and was adapted from two existing instruments with established validity [ 34 ]. Participants reported weekly frequency and serving size of foods consumed in the past month. For the current study, intake of fruits and vegetables (4 items), sweets (6 items), salty snacks (4 items) and sugar sweetened beverages (SSB; 2 items) was examined. Fruits and vegetable items were 100% juice, fruit, lettuce salad, and other vegetables (all raw and cooked). Sweet foods consisted of food items such as ice cream, cookies, doughnuts, sweet muffins and chocolate candy. Salty snacks were food items such as chips, popcorn, and french fries; SSB were defined as fruit drinks and soft drinks (regular, non-diet). Summary scores for food categories (e.g., sweets) were calculated by multiplying the frequency of consumption of each food item (e.g., chocolate candy) per week by the usual portion size, dividing by seven to yield portion size per day and summing across items. One item queried frequency of consuming breakfast, lunch, or dinner at fast food places (e.g., McDonald's, Kentucky Fried Chicken) over the past month. Response options were never, 1-3 times last month, 1-2 times per week, 3-4 times per week, 5-6 times per week, and ≥7 times per week . For analysis purposes, responses were categorized as no fast food restaurant use , or ≥ 1-3 times per month .
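The summary-score construction described above maps directly to code; a sketch using hypothetical items (the item list and portion values are illustrative, not the instrument's):

```python
def daily_servings(items):
    """Food-category summary score: for each item, multiply weekly
    frequency of consumption by usual portion size, divide by 7 to
    yield servings per day, and sum across the category's items."""
    return sum(freq_per_week * portion / 7.0 for freq_per_week, portion in items)

# Hypothetical 'sweets' category: cookies 3x/week at 1 serving,
# ice cream 2x/week at 0.5 serving -> (3*1 + 2*0.5)/7, about 0.57/day
sweets = [(3, 1.0), (2, 0.5)]
```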
As vending machines were the only source of food that could be purchased onsite at the garages [ 35 ], we assessed frequency of access to the four types of vending machines available: snack, cold food, hot beverage, and cold beverage. The vending machines consisted primarily of food items that were considered less healthful. Across the study garages, between 21% and 37% of the vending items met healthful criteria of being low calorie (i.e., ≤ 400 kcals for entrees; ≤ 150 kcals for snacks and sweets; ≤ 50 kcals for cold beverages; ≤ 120 kcals for milk), low sugar (i.e., ≤ 35% by weight for entrees, snacks, sweets; nuts, seeds, mints, and gum were sugar-free), and lowfat (i.e., ≤ 30% total kcals for snacks, entrees, sweets, cold beverages) and most items were high calorie [ 36 ]. Frequency was assessed during the past month, with response options being never, 1-3 times last month, 1-2 times per week, 3-4 times per week, 5-6 times per week , and ≥7 times per week . For analysis purposes, responses were categorized as ≥ 3-4 times/week vs. 1-2 times/week or less for snack, hot and cold beverage machines, and ≥ 2-3 times/month vs. 1 time/month or less for cold food machines.
Physical Activity
A self-report measure was used to assess leisure time physical activity (LTPA) [ 37 , 38 ]. Participants reported the number of times per week they engaged in strenuous, moderate, and mild leisure time activity for more than 10 minutes. Additionally, participants reported their frequency of sweat-inducing exercise episodes during a 7-day period (never, sometimes, often). The numbers of moderate and strenuous episodes were combined and dichotomized into ≥ 3 times per week vs. <3 times per week . The number of mild episodes was also dichotomized into ≥ 3 times per week vs. <3 times per week . Finally, sweat-inducing exercise was dichotomized into sometimes/often vs. never .
Perceived worksite environment related to healthful food choices
Two single-item measures assessed perception of ease of eating healthy at work. Items were "At my workplace it is easy to eat a healthy diet" and "It's hard for me to get fruits and vegetables when I'm at work". The response scale was a five-point Likert scale ranging from strongly agree to strongly disagree . For analysis purposes, the items were recoded to a dichotomous response: agree or strongly agree vs. neutral, disagree, or strongly disagree (hard to get fruits and vegetables) and disagree or disagree strongly vs. neutral, agree, or strongly agree (easy to eat a healthy diet).
Demographics
Demographic variables were self-reported and included age, gender, race (coded white vs. other), education (coded high school or less, some college, college degree or higher), marital status (coded married or partnered vs. not), annual household income (coded as ≥$50001 vs. < $50001), years worked at Metro Transit (coded as up to 5 years, > 5 - 15 years, > 15 years), and job position (coded as bus operator vs. other). Smoking status was measured using three questions. Current smokers were those who reported having smoked at least 100 cigarettes in their lifetime and had smoked a cigarette in the past seven days.
Statistical analysis
Analyses were conducted using SPSS Version 16.0 (SPSS for Windows, Rel. 16.0.1. 2007. Chicago: SPSS, Inc). Frequencies and means were calculated for each variable in the full sample. With the exception of age and BMI, all variables were coded as categorical due to their non-normal distributions. Chi-square analyses were conducted to examine bivariate associations between hours worked per week and the categorical sociodemographic variables. Comparisons of age differences by work hour category were examined using one-way ANOVA. Multivariate logistic regression analyses were conducted in the full sample and gender-stratified to examine associations between work hours and the dichotomized behavioral variables. Multivariate linear regression analyses were conducted to examine associations between work hours and BMI, the only continuous dependent variable. Models were first examined with adjustment for basic demographic variables age, gender, and race (Model 1). Then a second model was examined with full adjustment for sociodemographic and other work-related variables (job position, education, income, race, marital status, and tenure at Metro Transit). Control variables were selected because they were associated with hours worked per week in bivariate analyses (years worked at Metro Transit, job position) or have shown associations with weight status or behavioral variables in the literature [ 39 - 41 ].
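The logistic models above estimate associations that, in the unadjusted 2x2 case, reduce to an odds ratio. A stdlib sketch of a crude odds ratio with a Woolf (log-scale) 95% CI, offered to illustrate the quantity being modeled rather than to reproduce the study's SPSS output (the counts shown in the test are hypothetical):

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """2x2 table: a/b = outcome yes/no among exposed,
    c/d = outcome yes/no among unexposed.
    Returns (OR, lower, upper) with a Woolf (log-scale) 95% CI."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper
```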
Models were examined to determine if the overall work hour variable was statistically significant and p-values are presented for both Model 1 (base model) and Model 2 (fully adjusted model). Crude prevalence of transit workers reporting endorsement of the behavioral outcome variables are presented, with asterisks denoting significant differences (p < 0.05) in work hour categories from the reference category (< 40 hours per week) in the fully adjusted models. Due to the small number of women reporting use of snack and cold food vending machines in the 50 or more hour per week category, this category was combined with the 40-49 hours category for those analyses. | Results
Demographics, Work, and Eating and Physical Activity Behavior Variables
Demographic variables, by hours worked per week, are presented in Table 1 . Most of the sample (72%) worked 40 - 49 hours per week. The sample was primarily male and nearly two-thirds were white. About half the sample had completed at least some college and just over half (58%) had an annual household income $50,001/year or greater. Thirty-eight percent of the sample had worked at Metro Transit between five and 15 years. Nearly three-quarters of the sample were bus operators. With the exception of age and smoking status, significant differences by work hour category were observed for all other sociodemographic and work-related variables. Of note, women, non-whites, and those who had worked at Metro Transit less than five years were disproportionately represented among those working less than 40 hours per week.
Obesity prevalence was high in the full sample: over half were classified as obese (BMI ≥ 30 kg/m 2 ) and average BMI was 32.3 kg/m 2 (Table 2 ). Over half of the sample reported eating a half serving or more of sweets per day, while less than half reported consuming that serving size of salty snacks (40%) and sugar sweetened beverages (SSB) (38%). Less than half (44%) of the sample consumed 2 or more servings of fruits and vegetables per day. Frequency of fast food consumption was modest; 77% of Metro Transit workers reported eating at a fast food restaurant 1 - 3 times or more in the past month. Approximately one third of Metro Transit workers reported using work vending machines 3 or more times per week. Self-reported leisure-time physical activity was fairly high: over half of the transit workers reported engaging in mild or moderate/strenuous exercise 3 or more times per week and 75% reported engaging in physical activity long enough to work up a sweat sometimes/often in a 7-day period. Nearly 65% of the sample found it difficult to eat healthy at work and just over half agreed it was difficult to get fruits and vegetables at work.
Work Hours, Food Intake and Perceived Work Environment
Table 2 presents average BMI and crude prevalence of transit workers reporting frequency of food intake, physical activity, and perceptions of the work environment by work hour categories in the full sample. Associations existed between the number of hours worked per week and BMI, frequency of use of cold beverage, snack vending, and cold food vending machines; p-values are presented for the fully adjusted models only. BMI was highest among those who worked 50 or more hours per week and was significantly higher than those working less than 40 hours per week (34.4 v. 32.2 kg/m 2 , p = 0.03). Compared to transit workers working less than 40 hours per week, those working 50 or more hours per week made more frequent purchases from vending machines: snack (44% v. 25%, p = 0.002), cold beverage (43% v. 26%, p = 0.004), and cold food (40% v. 24%, p = 0.001). Cold food (30% v. 24%, p = 0.03) and cold beverage (35% v. 26%, p = 0.03) vending machine use was more frequent among those working 40 - 49 hours per week compared to less than 40 hours per week.
Transit workers working 50 or more hours per week were more likely to have a higher intake of salty snacks (48% v. 34%, p = 0.03 in Model 1), but the overall work hour variable was not statistically significant after adjustment. None of the other food frequency variables, leisure-time physical activity, or perceived ease of eating healthy at work was associated with the number of hours worked per week.
Gender Differences in Work Hours, Food Intake and Perceived Work Environment
Generally, BMI, food intake, and perceptions of the work environment differed by gender. Average BMI was higher among females than males (33.4 kg/m 2 v. 32.1 kg/m 2 , p = 0.03). Compared with men, women reported higher intake of fruits and vegetables (53% v. 41%, p = 0.001) and less frequent use of cold food (21% v. 32%, p = 0.001) and hot beverage vending machines (24% v. 34%, p = 0.004).
Table 3 presents average BMI and crude prevalence of transit workers reporting frequency of food intake and perceptions of the work environment by work hour categories, stratified by gender. Results among males were analogous to those in the full sample. Average BMI was highest among those who worked 50 or more hours per week and remained significantly higher than those working less than 40 hours per week (34.3 kg/m 2 v. 31.4 kg/m 2 , p = 0.02) after full adjustment. Compared to working less than 40 hours per week, men working 50 or more hours per week more frequently used cold beverage (48% v. 26%, p = 0.001) and cold food (47% v. 28%, p < 0.001) vending machines. Further, males working 40 - 49 hours (33% v. 22%, p = 0.04) and 50 or more hours (52% v. 22%, p < 0.001) per week used snack vending machines more frequently. Similar to the full sample, salty snack intake was highest in the 50 or more hours per week category, but the work hour variable was not significant in the fully adjusted model. No other food frequency or physical activity variables were associated with number of hours worked per week. With the exception of fruit and vegetable intake and cold food vending machine use, associations were not found between work hours and BMI, food frequency, or physical activity variables among females. Fruit and vegetable intake was highest among females working 50 or more hours per week (86% v. 56%, p = 0.04) compared to those working less than 40 hours per week. When analyzing fruit and vegetable items individually (i.e., fruit juice, fruit, lettuce salad, other vegetables), consumption of 'fruit' and 'lettuce salad' showed significant associations with the number of hours worked per week (data not shown). Frequency of use of cold food vending machines was higher among women working 40 or more hours per week (23% v. 15%, p = 0.04) compared to those working less than 40 hours per week.
Discussion
The findings of this study showed that long work hours were associated with high BMI and less healthful food habits in male transit workers; to a lesser extent, both healthful and less healthful food habits were associated with long work hours in female transit workers. Males working 50 or more hours per week had higher BMI and reported greater use of cold beverage, snack, and cold food vending machines than those working less than 40 hours per week. These associations were present after adjustment for both sociodemographic and work-related factors that are likely to account for differences in food habits, such as age, race, education, income and job position. In contrast, the number of hours worked per week was not associated with BMI and showed very few associations with food habits among female transit workers. Women working at least 40 hours per week more frequently used cold food vending machines. Surprisingly, those working 50 or more hours per week were most likely to consume 2 or more daily servings of fruits and vegetables. These results suggest that the length of the working day may increase reliance on foods that are available at workplace facilities, particularly among male transit workers.
Aspects of the current study findings are consistent with prior research. As in other studies, obesity prevalence was high in this group of transit workers [ 1 , 42 , 43 ] and we identified associations between work hours and BMI among men only [ 5 - 7 ]. Further, work hours were associated with eating behavior differently in men and women [ 9 , 10 ]. In the present study, men working longer hours were more heavily dependent than women on convenience foods purchased from the garage vending machines. Previous research indicates that men are more likely than women to purchase lunch at work and are more likely to report convenience as influencing food choice for lunch [ 44 ]. The gender difference in the association between work hours and vending machine use may be due to the job positions of the transit workers. A higher percentage of females were bus operators (85% vs. 70%, p < 0.001), and thus may have had limited exposure to garage vending machines, as their time at the worksite may be brief. Longer work hours, particularly for roles that might require more time at the garage (i.e., bus maintenance, management), may create opportunities for frequent use of onsite food sources. Female transit workers generally used vending machines less frequently than males (i.e., cold food, hot beverage).
Contrary to our expectations, female transit workers who worked 50 or more hours per week were more likely to consume 2 or more servings of fruit and vegetables per day compared to those working less than 40 hours per week. The reasons for this association are not clear. Poorer dietary intake has been associated with longer work hours among women [ 9 ], and sociodemographic variables that might have helped explain the association (e.g., education, income, marital status) [ 45 - 48 ] did not vary significantly across work hour categories among females in our sample. Income was the only sociodemographic variable that was significantly associated with fruit and vegetable consumption; women having a household income of $50,001 or more were more likely to report consuming 2 or more servings of fruits and vegetables per day (data not shown). The association between work hours and fruit and vegetable consumption may be due to obesity prevalence in females working 50 or more hours per week. This group of transit workers had the highest average BMI (34.9 kg/m 2 ) in the sample, and thus may have been attempting to lose weight [ 49 ]. Such individuals (i.e., overweight, obese) have been found to report higher consumption of fruits and vegetables [ 50 ], perhaps accounting for the high intake in this work hour category. Finally, our finding may be due to the small number of women in this highest work hour category (≥ 50 hours per week). While 86% of women reported consuming 2 or more daily servings of fruits and vegetables, this represented 12 of 14 women. Hence, our findings need replication with a larger sample of female transit workers.
While the current study results indicate that transit workers who work long hours more frequently purchase foods from vending machines, other measures of poor eating behavior, such as consumption of fast foods, sweet foods, and sugar sweetened beverages, were not associated with hours worked. Further, work hours were not associated with reported leisure-time physical activity, which differs from the results of previous studies [ 14 - 16 , 51 ]. Although the relationship between hours worked per week and exercise has not been established consistently [ 18 ], bus operators rank more leisure-time exercise and shorter work days among the most important ways to improve their health and work environment [ 52 ]. Additional research is needed to clarify the specific weight-related behaviors that are most disrupted by long work hours in transit workers. Future studies should examine variables such as meal frequency and meal timing, which have been associated with overweight and obesity [ 53 ]. As BMI was significantly higher among women than men, and over 60% of female transit workers were obese, further work should also focus on weight management in this population.
The findings of this study have implications for worksite intervention design and policy. Our results underscore the importance of worksite food environments containing healthy food choices that are readily accessible. In the Route H study, none of the garages had cafeterias onsite and only one garage had restaurants in the immediate area [ 36 ]. Thus, vending machines were largely the only source of food available [ 35 ]. Transit workers who worked longer hours may have used vending machines regularly due to a lack of other alternatives and/or time, given that bus operators do not get scheduled meal breaks. Prior research suggests that workers frequently consume food from worksite facilities and that food offerings at the worksite are related to diet quality [ 54 ]. The limited food environment coupled with long working hours may be an important contributing factor in the development and maintenance of obesity. Finally, in order to inform policy interventions, more research is needed to explore strategies such as flexible work arrangements and expanding driver control over work scheduling [ 23 , 55 ], given that our data suggest that long work hours carry consequences for weight and weight-related behavior in this occupational group.
The present study had several strengths and limitations. Strengths include a large mixed-gender sample of transit workers and several measures of dietary intake, eating behavior, and leisure-time physical activity. This study is limited by its cross-sectional design; no inferences can be made regarding causality. Additionally, our primary measures of dietary intake and physical activity were self-reported, which may have introduced inaccuracies [ 56 , 57 ]. The small sample of women may have limited our power in testing the gender-stratified associations. Only 6% of female transit workers were in the highest work hour category, potentially providing insufficient power to detect associations. A similarly low number of women working overtime has been found in other studies [ 58 ]. The current study's sample consisted primarily of bus operators (72%), but included small numbers of management and bus maintenance staff. Results therefore reflect associations for transit workers across job positions and may not reflect the experience of bus operators alone. Our analysis included a single work-related factor, the number of hours worked per week. Future research should expand measurement of work-related factors by including variables such as work shift, job stress, and job flexibility, which may be associated with health related behavior [ 22 ]. Finally, obesity prevalence was high in this sample; 56% of the transit workers were considered obese (BMI ≥ 30 kg/m 2 ). Although high obesity prevalence is to be expected in a sample of transportation workers [ 1 ], this high prevalence may impact dietary and physical activity behaviors [ 49 , 50 ]. As the survey response rate was high in this study (78%), results may generalize well to transportation workers, but may have limited generalizability outside this occupational group.
Conclusion
Longer work hours were associated with BMI, fruit and vegetable intake, and frequent use of garage vending machines in a sample of Metro Transit workers. Males who worked the longest hours (50 or more hours per week) had the highest BMI and were the most frequent users of vending machines at garage worksites. Longer work hours were associated with increased fruit and vegetable intake among women as well as more frequent use of cold food vending machines. Long work hours may increase dependence upon food availability at the worksite among transit workers, underscoring the importance of providing healthy food choices at the transit garages.
Background
Associations between hours worked per week and Body Mass Index (BMI), food intake, physical activity, and perceptions of eating healthy at work were examined in a sample of transit workers.
Methods
Survey data were collected from 1086 transit workers. Participants reported hours worked per week, food choices, leisure-time physical activity and perceptions of the work environment with regard to healthy eating. Height and weight were measured for each participant. Multivariate linear and logistic regressions were conducted to examine associations between work hours and behavioral variables. Associations were examined in the full sample and stratified by gender.
Results
Transit workers working in the highest work hour categories had higher BMI and poorer dietary habits, with results differing by gender. Working 50 or more hours per week was associated with higher BMI among men but not women. Additionally, working 50 or more hours per week was significantly associated with higher frequency of accessing cold beverage, cold food, and snack vending machines among men. Working 40 or more hours per week was associated with higher frequency of accessing cold food vending machines among women. Reported frequency of fruit and vegetable intake was highest among women working 50 or more hours per week. Intake of sweets, sugar sweetened beverages, and fast food did not vary with work hours in men or women. Physical activity and perception of ease of eating healthy at work were not associated with work hours in men or women.
Conclusions
Long work hours were associated with more frequent use of garage vending machines and higher BMI in transit workers, with associations found primarily among men. Long work hours may increase dependence upon food availability at the worksite, which highlights the importance of availability of healthy food choices.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SAF, LJF, TT and PJH contributed to the design of the study. KHE formulated the research question, conducted data analysis, and wrote the manuscript. PJH provided guidance with the statistical analysis of the data. All authors assisted with revising the manuscript. All authors read and approved the final manuscript.
Acknowledgements
This research was funded with a grant from the National Institutes of Health, NIH R01 HL 079478. Additional support for the current study was provided by the Regional Postdoctoral Training Grant in Eating Disorders Research (NIH 5T32MH082761-02).
Int J Behav Nutr Phys Act. 2010 Dec 20; 7:91
PMC3016381 (PMID: 21162754)
Background
Formalin-fixed paraffin-embedded (FFPE) tissue samples are routinely used for diagnosis of disease. There is increasing interest in extracting RNA from these samples as the large numbers of archival FFPE samples constitute an invaluable resource for the investigation of diagnostic, prognostic or predictive disease associated alterations in gene expression (reviewed in [ 1 ]). Furthermore, these often represent the only diagnostic material available.
However, RNA extracted from FFPE specimens is extensively degraded and chemically modified, which compromises its use in PCR based applications (reviewed in [ 2 ]). Degradation of RNA is influenced by the time and storage conditions between sample collection and formalin fixation, the fixation process, and the conditions and length of the subsequent storage [ 3 ]. Chemical modifications of RNA are caused by formaldehyde and result in the addition of mono-methylol groups to RNA bases and subsequently in the formation of methylene bridges between RNA bases [ 4 ], and RNA-protein cross-links [ 5 ]. The addition of the mono-methylol group is in principal reversible, but a considerable amount is still present after RNA purification [ 4 ].
Whereas the extent of chemical modifications of the RNA initially limits the PCR amplification size, with time RNA degradation becomes more important in determining the size of amplifiable PCR fragments [ 3 ]. The RNA fragment sizes from FFPE tissue are usually less than 300 bp and may be less than 100 bp [ 3 , 6 - 8 ].
The quality of the extracted RNA is a critical factor for both microarray based and reverse transcription - quantitative real-time PCR (RT-qPCR) gene expression experiments. Microarray based approaches are more sensitive to RNA degradation and chemical modifications [ 2 ], which potentially influence the gene expression data [ 9 ].
RT-qPCR assays are less affected by RNA degradation and chemical modifications especially when the RT-qPCR amplicon size is less than 100 bp [ 10 ]. It is nevertheless an essential prerequisite to characterise the quality of the extracted RNA prior to its use in a gene expression study to evaluate its suitability for the planned application and to minimise data misinterpretation [ 11 , 12 ].
RNA quality is often defined in terms of RNA purity and RNA integrity. RNA purity is spectrophotometrically determined by the A 260 /A 280 and A 260 /A 230 absorbance ratios [ 13 - 15 ]. Both ratios are used to evaluate the level of contaminants such as proteins and residual organic compounds present in a RNA sample. These values provide no information about RNA degradation and amplifiable PCR amplicon sizes.
RNA integrity evaluates the level of RNA degradation and several methods have been developed for RNA integrity assessment. Two of the most commonly used approaches are suitable for assessing moderately degraded RNA but not highly degraded RNA.
The first approach investigates the ratio between the 28S and 18S ribosomal RNA bands and presumes that the integrity of ribosomal RNA reflects the integrity of mRNA [ 16 , 17 ]. The typical RNA fragment sizes of less than 300 bp make this methodology unsuitable for highly degraded RNA from FFPE samples [ 8 ]. Furthermore, this approach does not take chemical modifications of the RNA into account and provides no information on how the extracted RNA performs in RT-PCR.
The second approach has been adopted from a common practice in microarray experiments [ 18 ] and determines the mRNA integrity by utilising RT-qPCR assays to assess the 3':5' ratio of a gene target sequence, such as GAPDH [ 19 ] or ACTB [ 12 ]. These assays need to reach a considerable fragment size (up to 1.2 kb) to calculate the 3':5' ratio, which makes this methodology unsuitable for highly degraded RNA from FFPE samples. Moreover, the 3':5' assays utilise an oligo-dT primer for cDNA preparation, which precludes the cDNA obtained from being used in the majority of RT-PCR assays, an important factor when the amount of RNA is often limited.
The third approach assesses a range of fragment sizes generated by (multiplexed) endpoint RT-PCR assays for certain reference (housekeeping) genes, such as G6PD , TBP , HPRT and ACTB [ 3 , 20 - 22 ]. This approach takes both fragmentation and chemical modifications of the RNA into account and determines the PCR amplicon sizes, which might be obtained in a PCR based application. When the PCR amplicon sizes cover a range from less than 100 bp to several hundred base pairs, this methodology is suitable to assess the extent of the degradation of RNA extracted from FFPE tissues.
In this study, we sought to combine the best features of assays from the third approach to develop a readily performable endpoint RT-PCR assay to assess RNA extracted from FFPE samples for mRNA integrity and RNA performance in RT-qPCR assays.
The assay utilises the TBP (TATA box binding protein) reference gene mRNA as the target sequence. TBP has been shown to be relatively stably expressed in a range of tissues [ 23 ] and various tumour types (e.g. [ 24 - 29 ]). Four different amplicon sizes are amplified in parallel in a single tube from cDNA (multiplexed endpoint RT-PCR assay) to minimise the consumption of what are often limited amounts of RNA. The amplicon sizes chosen cover a range of up to 300 bp and are therefore tailored to the fragment size limitations typically observed for RNA extracted from FFPE samples and PCR amplicon sizes usually used for the vast majority of RT-qPCR assays.
Our improved multiplex endpoint RT-PCR assay is a robust and convenient method which overcomes the limitations of current approaches for the quality assessment of RNA extracted from FFPE specimens. The assay has been thoroughly validated by assessing the quality of 180 RNA samples extracted from FFPE tissues with an RT-qPCR assay. | Methods
Archival FFPE tissue samples
Thirty breast tumour FFPE blocks (ten each from the years 1998, 2003 and 2008) were retrieved from the archives of the Department of Pathology at the Peter MacCallum Cancer Centre. The institutional ethics committee approved the study (Approval number: 03/90).
Histology procedures
The bench surface, the manual rotary microtome Leica RM2235 (Leica Microsystems), the equipments and glass slides were cleaned with RNaseZap (Ambion, Life Technologies, Austin, TX) prior to use according to the manufacturer's directions. Diethyl pyrocarbonate (DEPC) (Sigma, St. Louis, MO) treated water (0.1%, v/v) was used throughout the histology procedures.
For haematoxylin and eosin (H&E) staining, a 3 μm section from each FFPE block was stained to identify the tumour enriched area for needle microdissection. From each FFPE block, one to five 7 μm sections were prepared and mounted on the glass slides. The number of sections used in microdissection was determined by the content of tumour cells in the tumour enriched marked areas therefore ensuring a adequate amount of tumour cells to be microdissected for each sample. The content of tumour cells in all cases was assessed by a pathologist and was in the range of 40 to 90%. After baking the sections for five minutes on a hotplate at 70°C, the sections were deparaffinised in three changes of xylene for two minutes each and were taken to water by three changes of 100% ethanol for two minutes each and DEPC water for two minutes. Subsequently, the sections were stained with 0.5% methyl green to assist with needle microdissection. FFPE blocks were sectioned freshly just prior to needle microdissection and subsequent total RNA extraction to minimise RNA degradation after sectioning.
RNA extraction and complementary DNA (cDNA) preparation
Two protocols were used to extract total RNA from FFPE samples. The first protocol utilised the High Pure FFPE RNA Micro Kit (Roche, Mannheim, Germany) and the second protocol utilised TRIzol reagent. The latter protocol was adapted as described from a recently published protocol [ 8 ]. The experimental procedures were performed in triplicates, each single experiment on a different day.
In the first protocol referred to as protocol 1 (P1), needle microdissected tissue from each sample was transferred immediately to a 1.5 mL Eppendorf tube containing 60 μL tissue lysis buffer provided by the High Pure FFPE RNA Micro Kit. Total RNA was extracted according to the manufacturer's protocol, eluted in 20 μL elution buffer and stored at -80°C.
In the second protocol referred to as protocol 2 (P2), needle microdissected tissue from each sample was immediately transferred to a 1.5 mL Eppendorf tube containing 260 μL of the lysis buffer from the Agencourt FormaPure Kit (Beckman Coulter, Beverly, MA) and incubated at 70°C for one hour. Subsequently the tube was cooled to room temperature, 20 μL proteinase K provided with the FormaPure Kit was added and incubated at 55°C for one hour. The reaction mixture was cooled to room temperature, 500 μL TRIzol (Gibco BRL, Life Technologies, Grand Island, NY) and 100 μL chloroform were added, vortexed thoroughly and incubated for three minutes. After centrifugation of the tube at 12,000 g for 14 minutes at 4°C the upper aqueous layer was immediately transferred into a new 1.5 mL Eppendorf tube. An equal volume of 2-propanol was added to the aqueous layer, vortexed thoroughly and incubated overnight at -20°C. The sample was centrifuged at 12,000 g for 10 minutes at 4°C. The supernatant was removed. The RNA pellet was washed with 1 mL 70% ethanol (4°C), vortexed thoroughly and centrifuged at 12,000 g for five minutes at 4°C. The supernatant was removed and the 70% ethanol washing step was repeated. The supernatant was removed again and the RNA pellet was dried for 15 minutes at room temperature and resuspended in 30 μL RNase free water. Residual genomic DNA co-extracted with total RNA was digested by DNase treatment employing the Turbo DNA-Free kit (Ambion) according to the manufacturer's instructions in a final volume of 50 μL. The extracted total RNA was stored at -80°C.
Total RNA concentrations were measured by the NanoDrop ND-2000 spectrophotometer (NanoDrop Technologies, Thermo Fisher Scientific, Wilmington, DE) after DNase treatment for each sample derived by each protocol in duplicate. Mean values were used to calculate the total RNA input for cDNA synthesis. RNA purity was estimated by the absorbance ratio A 260 /A 280 .
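As a cross-check of the spectrophotometric readings described above, total yield is simply concentration multiplied by elution volume (20 μL for protocol 1; 50 μL for protocol 2 after DNase treatment), and purity is the A 260 /A 280 ratio. A small sketch (not code from the study), assuming the commonly accepted 1.7 - 2.1 acceptance range:

```python
def rna_yield_ug(conc_ng_per_ul, elution_volume_ul):
    """Total RNA yield (ug) = concentration (ng/uL) x elution volume (uL) / 1000."""
    return conc_ng_per_ul * elution_volume_ul / 1000.0

def purity_acceptable(a260, a280, lo=1.7, hi=2.1):
    """A260/A280 absorbance ratio within a commonly accepted range for RNA."""
    return lo <= a260 / a280 <= hi

# protocol 1 mean concentration (111.5 ng/uL) eluted in 20 uL
print(rna_yield_ug(111.5, 20.0))    # 2.23 (ug)
print(purity_acceptable(1.8, 1.0))  # True (ratio 1.8)
```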
One hundred fifty and 75 ng respectively of total RNA was reverse transcribed using the Superscript III Reverse Transcriptase (Invitrogen, Life Technologies, Carlsbad, CA) with 250 ng random hexamers (Pharmacia, Uppsala, Sweden) according to the manufacturer's instructions in the presence of 20U RNase inhibitor (Roche) in a final volume of 20 μL. The mixture was incubated for one hour at 55°C. The resulting cDNA was stored at -20°C.
Multiplex endpoint RT-PCR
Presence or absence of various cDNA fragments synthesized from RNA extracted from the FFPE samples was determined by a multiplex endpoint RT-PCR assay using the TBP mRNA (NM_003194) as the target sequence (Figure 2 ). The assay conditions were optimised on a cDNA mixture prepared from RNA extracted from different cell lines.
PCR was performed on the Veriti 96-well Thermal Cycler (Applied Biosystems, Life Technologies, Foster City, CA) in a Thermo-Fast 96 PCR Detection Plate MkII (ABgene, Thermo Fisher Scientific, Epsom, United Kingdom) with a final reaction volume of 20 μL, containing 350 nmol/L each of the primers 92-F: 5'-GGATAAGAGAGCCACGAACC-3' and 92-R: 5'-TGCCAGTCTGGACTGTTCTT-3', 550 nmol/L each of the primers 161-F: 5'-GGGCACCACTCCACTGTAT-3' and 161-R: 5'-CACGAAGTGCAATGGTCTTT-3', 250 nmol/L each of the primers 252-F: 5'-GGGAGCTGTGATGTGAAGTTT-3' and 252-R: 5'-TGAGAGCCATTACGTCGTCT-3', 100 nmol/L each of the primers 300-F: 5'-GGCGGAAGTGACATTATCAA-3' and 300-R: 5'-CAGGCTGTTGTTCTGATCCA-3' (GeneWorks, Adelaide, Australia), 200 μmol/L of each dNTP, 0.5U of HotStarTaq DNA Polymerase (Qiagen, Hilden, Germany) in 1× of the supplied PCR buffer containing 2.5 mmol/L MgCl 2 and 1 μL (if 150 ng of total RNA was used for cDNA preparation) and 2 μL (if 75 ng of total RNA was used for cDNA preparation) of undiluted cDNA as template respectively. The initial denaturation (95°C, 15 minutes) was followed by 40 cycles of 30 seconds at 95°C, 30 seconds at 60°C and 30 seconds at 72°C, and a final extension step at 72°C for seven minutes. Genomic DNA extracted from peripheral blood from normal individuals (2 ng/μL) was used as negative control and a cDNA mixture prepared from total RNA of different cell lines served as a positive control. Each sample was analysed once.
The PCR products were evaluated for band abundance and size by agarose gel electrophoresis. The samples were run on a 2% (w/v) agarose gel in a 1× TBE Buffer system and stained with ethidium bromide. The wells were loaded with 20 μL of the PCR product mixture with 5 μL 5× loading dye. One μL pUC19/ HpaII DNA Molecular Weight Marker (GeneWorks) was run alongside the PCR products to determine their size.
Reverse transcription - quantitative real-time PCR (RT-qPCR)
PCR was performed on the LightCycler 480 Instrument (Roche). Resulting data were analysed and quantified with the LightCycler 480 software release 1.5.0 (Roche), utilising the second derivative maximum method [ 42 ]. The calculated C P (Crossing Point) value corresponds to the current recommended term C q (Quantification Cycle) value [ 33 ].
PCR was performed in LightCycler 480 Multiwell Plate 384 plates (Roche) in a final reaction volume of 10 μL using the TBP mRNA (NM_003194) as the target sequence (Figure 2 ). The TBP RT-qPCR assay was designed and optimised as described previously [ 32 ]. 300 nmol/L of the forward primer RT-qPCR-F: 5'-GAACATCATGGATCAGAACAACA-3' and 200 nmol/L of the reverse primer RT-qPCR-R: 5'-ATAGGGATTCCGGGAGTCAT-3' (GeneWorks) were mixed in 1× LightCycler 480 Probes Master (Roche) containing 100 nmol/L of the human Universal Probe Library probe #87 (Roche), and 1 μL (if 150 ng of total RNA was used for cDNA preparation) or 2 μL (if 75 ng of total RNA was used for cDNA preparation) of undiluted cDNA as template respectively. The initial denaturation (95°C, 10 minutes) was followed by 45 cycles of 10 seconds at 95°C, 30 seconds at 60°C, and a final cooling step at 40°C for 30 seconds. Each sample was analysed in duplicate.
Statistical analysis
Statistical analyses were performed using GraphPad Prism version 5.03 for Windows (GraphPad Software, San Diego, CA, http://www.graphpad.com ). Where appropriate, data are presented as the mean ± standard deviation (SD). Nonparametric correlations between a longer amplicon size in the multiplex endpoint RT-PCR assay and the appropriate C q value obtained in RT-qPCR were evaluated by calculating the Spearman correlation coefficient. A two-tailed P -value (calculated by Gaussian approximation) of <0.05 was considered to be statistically significant for each correlation.
Results
Assessment of total RNA yield and purity using optical density
The amount of total RNA extracted for each of the samples was measured by a NanoDrop ND-2000 spectrophotometer and is shown for each sample replicate in Figure 1 . The estimated mean total RNA extracted for protocol 1 was 2.23 μg (111.5 ng/μL), with values ranging from 0.24 to 14.24 μg (12.0 to 712.0 ng/μL). For protocol 2, the estimated mean total RNA extracted was 2.76 μg (55.1 ng/μL), with values ranging from 0.12 to 3.61 μg (2.4 to 72.1 ng/μL).
The corresponding absorbance ratio A 260 /A 280 for each sample replicate is shown in Figure 1 . The mean A 260 /A 280 ratio for protocol 1 was 1.8 ± 0.2 and for protocol 2 1.7 ± 0.1. The desired ratio A 260 /A 280 is in the range of 1.7 to 2.1 and is dependent on the extraction conditions [ 30 ].
The use of the multiplex endpoint RT-PCR for assessment of mRNA integrity
The integrity of the mRNA extracted from each sample was assessed by the multiplex endpoint RT-PCR assay using the TBP (TATA box binding protein) mRNA (NM_003194) as the target sequence (Figure 2 ). The assay was designed to amplify four amplicons of 92, 161, 252 and 300 bp in parallel in a single tube (Figure 3 ).
The amplicons were designed to be amplified from cDNA but not genomic DNA (Figure 3 ). This is of particular importance for assessing cDNA synthesised from total RNA that has not been treated with DNase. We used primer pairs where the primers were in exons spanning one or more introns (Figure 2 ). Furthermore, the primer locations were chosen to exclude known polymorphic sites from the primer binding sites.
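The intron-spanning constraint can be stated simply: for a primer pair to be cDNA-specific, at least one exon/exon junction must fall between the primer binding sites, so the genomic product would include an intron. A minimal sketch in cDNA coordinates (the junction positions below are hypothetical, not taken from TBP):

```python
def spans_intron(fwd_primer_end, rev_primer_start, exon_junctions):
    """True if any exon/exon junction (cDNA coordinate) lies between the 3' end
    of the forward primer and the 5' end of the reverse primer, i.e. the
    corresponding genomic amplicon would include at least one intron."""
    return any(fwd_primer_end < j <= rev_primer_start for j in exon_junctions)

# hypothetical junctions at cDNA positions 120 and 410
print(spans_intron(95, 180, [120, 410]))   # True  -> cDNA-specific design
print(spans_intron(130, 180, [120, 410]))  # False -> would also amplify gDNA
```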
The results for protocols 1 and 2 are summarised in Figures 4 and 5 for each sample replicate. In general, the least degraded RNA was extracted from the year 2003 samples for both protocols (Figures 4 and 5 ). In addition, the vast majority of sample replicates of both protocols amplified the 161 bp and 252 bp PCR fragments (Figures 4 and 5 , Table 1 ).
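The gel readout for each replicate can be reduced to the largest TBP amplicon detected, which serves as the integrity call. A hypothetical helper illustrating this (the size cut-offs are the assay's; the function is ours):

```python
AMPLICON_SIZES = (92, 161, 252, 300)  # bp, amplified in parallel in one tube

def largest_amplicon(observed_bands):
    """Largest expected TBP amplicon seen on the gel, or 0 if none amplified."""
    observed = set(observed_bands)
    hits = [size for size in AMPLICON_SIZES if size in observed]
    return max(hits) if hits else 0
```

A replicate showing the 92 and 161 bp bands would thus be called at 161 bp, indicating RNA suitable for assays with amplicons up to that length.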
Assessment of mRNA performance by RT-qPCR
The performance of the mRNA extracted from each sample was assessed by an RT-qPCR assay, also using the TBP mRNA as the target sequence (Figure 2 ). The RT-qPCR assay was designed and optimised to be monitored using hydrolysis probes from the Universal Probe Library (UPL) in combination with gene-specific primers [ 31 , 32 ]. The assay amplifies an amplicon of 87 bp from cDNA using an intron-spanning primer pair. The quantification cycle (Cq) [ 33 ] values obtained for each sample replicate in the RT-qPCR assay were used to estimate the quantity of amplifiable template.
Each sample replicate was classified into one of three different groups based on the measured Cq values (Cq ≤ 32.0; 32.1 ≤ Cq ≤ 35.0; 35.1 ≤ Cq ≤ 39.9) (Table 2 ). The Cq value is inversely related to the number of amplifiable templates. A Cq value of 35 is generally considered the limit for the detection of a single template copy [ 34 , 35 ]. A Cq value above 35 thus represents on average less than one template copy and can be considered background noise of the RT-qPCR assay [ 35 ]. A theoretical Cq value of 32 represents approximately 10 copies of the target template.
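The three-group classification can be expressed directly from the cut-offs above. The numeric bins are the study's; the group labels and copy-number comments are our shorthand for the interpretation given in the text:

```python
def cq_group(cq):
    """Bin a Cq value using the study's cut-offs (<=32.0; 32.1-35.0; 35.1-39.9)."""
    if cq <= 32.0:
        return "adequate"          # roughly >= 10 template copies
    if cq <= 35.0:
        return "low"               # roughly 1-10 copies
    if cq <= 39.9:
        return "background"        # on average < 1 copy; assay noise
    return "no amplification"
```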
Eighty-seven percent (78/90) of samples extracted by protocol 1 and 80% (72/90) of samples isolated by protocol 2 showed Cq values less than 35.1 in the TBP RT-qPCR assay (Figures 4 and 5 , Table 2 ). Therefore, cDNA prepared from RNA extracted with protocol 1 may have had slightly more amplifiable templates per sample volume than cDNA prepared from total RNA extracted with protocol 2.
Comparison of the multiplex endpoint RT-PCR and RT-qPCR assays
The results from the multiplex endpoint RT-PCR assay were validated by the results obtained from the RT-qPCR assay and allowed the identification of FFPE samples with an adequate RNA quality.
Almost all of the samples (135/139) that amplified PCR fragments longer than 92 bp in the multiplex endpoint RT-PCR assay also amplified the 87 bp product at Cq values less than 35.1 in the RT-qPCR assay, consistent with a greater amount of amplifiable template (Figures 4 and 5 , Table 1 ).
Six percent (10/180) of the samples did not amplify in the multiplex endpoint RT-PCR assay (Table 1 ). Consistent with this, the absence of the 92 bp amplicon was associated with late amplification in the RT-qPCR assay, with a Cq value later than 35.0 (Figures 4 and 5 , Table 1 ).
In some cases, stochastic effects due to the very small number of available templates can give inconsistent results; e.g. for one of the replicates of sample P2-08-08, the 92 bp amplicon did not amplify in the multiplex endpoint RT-PCR assay, but a band was seen for the 161 bp product (Figure 5 ). The other replicates showed a 92 bp band and band sizes up to the 161 bp amplicon, respectively. All three replicates showed Cq values above 35.0 in the RT-qPCR assay (Figure 5 ), indicating that on average less than one copy of template was present.
We saw no FFPE samples with robust amplification of the 300 bp amplicon. Forty-six percent (41/90) of the samples prepared by protocol 1 amplified all the amplicons up to 252 bp in the multiplex endpoint RT-PCR assay (Figure 4 ). These samples showed a mean Cq value in the RT-qPCR assay of 31.5 ± 1.0, with values ranging from 30.0 to 33.7. For protocol 2, 33% (30/90) of the samples amplified the amplicons up to 252 bp and showed a mean Cq value of 31.7 ± 0.8, with values ranging from 31.7 to 33.3 (Figure 5 ).
Correlations between a longer amplicon size in the multiplex endpoint RT-PCR assay and the corresponding Cq value obtained in RT-qPCR were evaluated by calculating the Spearman correlation coefficient. We found an inverse correlation for both protocols 1 and 2 [ r (P1) = -0.75, CI (95%) = -0.83 to -0.65 ( P < 0.0001, n = 90) and r (P2) = -0.80, CI (95%) = -0.86 to -0.70 ( P < 0.0001, n = 90), respectively]. Thus, a longer fragment size in the multiplex endpoint RT-PCR assay was correlated with a smaller Cq value in the RT-qPCR assay for both total RNA extraction protocols (Figure 6 ).
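The Spearman statistic and its Gaussian-approximation P-value can be reproduced with a few lines of standard-library Python. This is a tie-free sketch for illustration only (the study used GraphPad Prism, and the toy data below are invented):

```python
import math

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values, for simplicity)."""
    n = len(x)
    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def two_tailed_p(rho, n):
    """Two-tailed P-value via the Gaussian approximation z = |rho| * sqrt(n - 1)."""
    z = abs(rho) * math.sqrt(n - 1)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Toy example: longer amplicons paired with strictly smaller Cq values.
sizes = [92, 161, 252, 300, 350]
cqs = [36.0, 34.0, 32.5, 31.0, 30.2]
rho = spearman_rho(sizes, cqs)  # perfectly inverse ranks give rho = -1
```

A perfectly monotone decreasing pairing yields rho = -1; the real data give the weaker but still strongly negative correlations reported above.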
Interestingly, both the multiplex endpoint RT-PCR and the RT-qPCR assays show that the samples prepared in 2003 performed best during PCR amplification. All 60 samples from 2003 extracted by protocols 1 and 2 amplified PCR amplicons up to 161 bp in the multiplex endpoint RT-PCR assay (Figures 4 and 5 ). Furthermore, all of these samples also amplified the 87 bp amplicon with Cq values less than 35.1 in the RT-qPCR assay (Figures 4 and 5 ). The lack of correlation between the age of FFPE samples and RNA quality has been reported previously [ 36 ]. This may be explained by differences in the processing of the samples, such as pre-fixation time, fixation time and storage conditions over the course of time [ 3 , 21 ]. The variation in PCR fragment sizes present in each sample replicate was also least in the samples prepared in 2003 compared with samples prepared in 1998 and 2008, consistent with a minimisation of stochastic effects caused by greater amounts of template. The effect of sample age needs further investigation.

Discussion
There has been an increasing demand for reliable methods and protocols for the extraction of RNA from FFPE tissue sections. The performance of RNA preparation protocols has improved over the years and specific protocols for certain RNA downstream applications have been developed (reviewed in [ 1 , 2 ]).
Despite the improvements in methodologies for extracting RNA from FFPE specimens, one significant and challenging problem still remains. The quality of the extracted RNA is compromised and the degree of RNA degradation and the extent of chemical modifications of the RNA may limit its use in downstream applications. It is therefore critical to assess the RNA quality at the cDNA stage to identify the RNA preparations which are suitable for a particular RT-qPCR analysis.
The multiplex endpoint RT-PCR approach used in this study has been developed and optimised to assess RNA extracted from FFPE specimens for its use in RT-qPCR assays. The assay is sensitive to RNA degradation as well as chemical modifications, which both determine the length of an amplicon during PCR amplification. This methodology is similar in principle to one that has been used extensively in our laboratory for the assessment of DNA from FFPE samples [ 37 ].
The choice of the target gene mRNA is of particular importance. It is desirable that the target gene mRNA is ubiquitously expressed across most cell types and that its expression level is of a similar magnitude among samples. We selected TBP for our multiplex endpoint RT-PCR assay because it has been shown to be relatively stably expressed in a range of tumour types such as bladder cancer [ 24 ], renal cell carcinoma [ 25 ], hepatocellular carcinoma [ 26 ], glioma [ 27 , 29 ] and breast cancer [ 28 ]. TBP is expressed at moderate [ 27 , 29 , 38 ] to low levels [ 23 ]. The choice of a highly expressed target gene mRNA might result in an increased false negative rate for less abundant mRNAs in gene expression studies, due to an insufficient amount of starting material for PCR amplification. This is supported by a study which showed that only a limited proportion of the RNA extracted from FFPE tissues is actually accessible for cDNA synthesis [ 39 ].
We also designed our primers to avoid underlying polymorphic sites. mRNAs from highly polymorphic genes should perhaps be avoided as targets. The primers in a previous approach, which used the G6PD gene, actually overlie potential polymorphic sites [ 20 ]. Primer binding sites that contain polymorphic sites will lead to impaired or even absent PCR amplification of the mismatched alleles.
Our multiplex endpoint RT-PCR approach utilises random primers to synthesise cDNA. Thus after the cDNA is assessed, the same cDNA can subsequently be used in the final gene expression experiment(s). This then also controls for the cDNA synthesis step.
Knowing which fragment sizes can be amplified in an RT-qPCR assay helps to identify samples suitable for gene expression analysis and to choose amplicon sizes for RT-qPCR assay design. The range of PCR amplicon sizes covered by our assay takes into account the RNA sizes normally obtained from FFPE material. The upper size range is interrogated by three PCR amplicon sizes (161, 252 and 300 bp) and helps to identify samples suitable for more demanding applications, such as discrimination between gene splice variants. The 92 bp PCR fragment helps to identify samples suitable for FFPE-friendly RT-qPCR assays, which we normally design with amplicon sizes of less than 90 bp.
The good agreement between the results of the multiplex endpoint RT-PCR assay and the RT-qPCR results (Figure 6 ) shows that this assay can identify FFPE samples suitable for gene expression studies. The ten samples that did not amplify in the multiplex endpoint RT-PCR assay all amplified very late in the RT-qPCR assay and would normally have been excluded from further analysis. The 71 samples that amplified all amplicon sizes up to 252 bp in the multiplex endpoint RT-PCR assay showed moderate Cq values in the RT-qPCR assay and are probably the samples best suited to deliver reliable gene expression data for most genes of interest.

Conclusions
There is an increasing demand for the use of RNA from FFPE material both in research and in molecular diagnostic applications such as the Oncotype DX test [ 6 , 40 , 41 ]. This is accompanied by a need for reliable methodologies to assess the quality of FFPE-derived RNA. Most of the approaches currently used for RNA quality control are not suitable, or are of limited use, for the assessment of such challenging material.
The validated quality control multiplex endpoint RT-PCR assay presented here overcomes the limitations of current approaches and is a robust method, well suited for determining the quality of an RNA preparation, especially FFPE-derived RNA. In addition, the assay can be used for routine quality control of cDNA synthesis. The assay is also applicable for comparing or refining methodologies for RNA extraction and cDNA synthesis. Finally, the approach is cost effective and only requires equipment that is widely available.

Background
RNA extracted from formalin-fixed paraffin-embedded (FFPE) samples is chemically modified and degraded, which compromises its use in gene expression studies. Most of the current approaches for RNA quality assessment are not suitable for FFPE derived RNA.
Results
We have developed a single-tube multiplex endpoint RT-PCR assay specifically designed to evaluate RNA extracted from FFPE tissues for mRNA integrity and performance in reverse transcription-quantitative real-time PCR (RT-qPCR) assays. This single-tube quality control (QC) assay minimises the amount of RNA used for quality control. mRNA integrity and the suitability of RNA for RT-PCR are evaluated by the multiplex endpoint RT-PCR assay using the TBP gene mRNA as the target sequence. The RT-PCR amplicon sizes, 92, 161, 252 and 300 bp, cover a range of amplicon sizes suitable for a wide range of RT-qPCR assays. The QC assay was used to evaluate RNA prepared by two different protocols for extracting total RNA from needle microdissected FFPE breast tumour samples. The amplification products were analysed by gel electrophoresis, where the spectrum of amplicon sizes indicated the level of RNA degradation and thus the suitability of the RNA for PCR. The ability of the multiplex endpoint RT-PCR QC assay to identify FFPE samples with an adequate RNA quality was validated by examining the Cq values of an RT-qPCR assay with an 87 bp amplicon.
Conclusions
The multiplex endpoint RT-PCR assay is well suited for determining the quality of FFPE-derived RNAs and identifying which RT-PCR assays they are suitable for, and is also applicable to assessing non-FFPE RNA for gene expression studies. Furthermore, the assay can also be used for the evaluation of RNA extraction protocols for FFPE samples.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
EAT, TM and DJB carried out the experiments. TM, EAT and AD conceived the experiments and analysed the data. SBF, AD and TM initiated the project and supervised the work. All authors contributed important ideas throughout the project and were involved in the writing of the manuscript. All authors have read and approved the manuscript.

Acknowledgements
The authors thank Max Yan for excellent pathological assistance, Sue Sturrock and Raul Sestoso for technical assistance and specimen retrieval and Christoph Bock for helpful comments. This research was supported by the Victorian Breast Cancer Research Consortium and a grant to AD from Susan G. Komen for the Cure.

BMC Biotechnol. 2010 Dec 17; 10:89
Introduction
Eating disorders (EDs) are a group of psychiatric disorders with a lifelong course and considerable morbidity and mortality. In the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), EDs include anorexia nervosa (AN), bulimia nervosa (BN), and eating disorders not otherwise specified (EDNOS). The aetiology of EDs remains unknown.
There has been increasing interest in the study of the neuropsychological functioning of people with EDs for the last three decades. One of the reasons for this is to gain a better understanding of the aetiology and maintenance of these disorders and to explore ways of improving available treatments [ 1 ].
Although there are more than 100 papers on neuropsychology and brain imaging in EDs, in previous systematic reviews conducted by Roberts et al. [ 2 ] and Lopez et al. [ 3 ] on set shifting and central coherence in EDs, it was demonstrated that there are a limited number of neuropsychological studies on BN and those recovered from the disorder. Most studies have explored neuropsychological functioning in AN. Because there is more available research in neuropsychology in AN, the present systematic review focused on AN only.
AN is the most severe ED affecting mainly, but not only, young women, and has the highest rate of mortality linked to a psychiatric disorder, due to high levels of medical complications and suicide in chronic patients [ 4 ].
Currently, psychological therapy is the main therapeutic intervention recommended for the treatment of AN (see, for example, [ 5 ]). However, outcomes are far from satisfactory with only 50% of cases reaching recovery in adult populations [ 4 , 6 ]. It is possible that factors such as cognitive functioning, usually limited during the acute phase of AN [ 7 ], influence treatment utilisation and outcome. For instance, most available therapies require the patient to have some level of insight and verbal ability; factors that might be related to optimal intellectual level [ 8 ].
The intelligence quotient (IQ) represents a composite score on a variety of tests designed to measure a hypothesised general ability or intelligence [ 9 ]. It has been hypothesised that people with AN have a higher IQ level than the general population based on clinical and school performance observations. There is a suggestion, however, that higher perfectionism, but not higher IQ, would explain the better performance at school in this group [ 10 , 11 ].
As neuropsychological studies have become more popular in AN, there are several studies that have measured intelligence as part of their assessments. However, there are a limited number of studies looking specifically at IQ levels in people with AN. We were not able to find any systematic review summarising research on intelligence in AN. Therefore, the question about IQ in AN remains unanswered. It is expected that examining the available literature in this area would be helpful in providing information about intellectual functioning in AN, investigating how comparable neuropsychological studies from different academic groups are in the context of IQ estimation, helping to clarify what the most appropriate IQ measure would be for future studies, and exploring any association between severity of illness (measured by Body Mass Index (BMI)) and IQ levels.
With these questions in mind, a systematic review of the literature and meta-analysis with the available data on IQ in AN were conducted to address the hypothesis that people with AN show superior scores on well validated IQ tests in comparison with the average IQ of the normative population (norm). This hypothesis is based on the common but conflicting assumption that high intelligence is a trait among people with AN [ 12 , 13 ].
We have also predicted that people with a past history of AN, now recovered, would demonstrate higher IQ scores than those who are in the acute phase of AN and the normative population. We based this hypothesis on the fact that it was shown that IQ predicts termination from treatment; that is, patients with a higher IQ are more likely to remain in psychological treatment [ 14 ]. | Methods
Search procedure and data extraction
This review follows the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement for meta-analysis [ 15 ]. The following electronic databases were used to identify relevant papers for inclusion in this review: Medline, Embase, Psych Info and ISI Web of Science. A first search was conducted in September 2008 and subsequently updated in March 2009.
A broad search was first run on the literature using the terms 'intelligence quotient', 'IQ', 'intelligence', 'neuropsychological assessment', 'neuropsychology', 'cognition', and 'eating disorders' (including 'anorexia', 'bulimia', 'EDNOS', 'recovered anorexia', 'recovered bulimia'). After failing to obtain a substantial number of studies in bulimia nervosa (BN) or recovered AN that included estimated IQ data this search was narrowed to studies that included samples with AN only.
In this manner, the search was conducted by two independent researchers searching for published studies on the basis of the following inclusion criteria:
• Participants. Studies including subjects diagnosed with AN and a sample greater than 10 participants.
• IQ measures. Studies using well known measures to estimate IQ. Specifically, studies including the National Adult Reading Test (NART) and Wechsler scales (Wechsler Adult Intelligence Scale (WAIS), Wechsler Intelligence Scale for Children (WISC), short version of the WAIS (Wechsler Abbreviated Scale of Intelligence; WASI) and the German version of WISC (Hamburg-Wechsler-Intelligenztest für Kinder; HAWIK).
• Data. Studies reporting at least full scale IQ data.
• Language. Studies published in English.
Results from these searches were merged for higher reliability. Following the initial identification of relevant published articles, all citations were then obtained. Further relevant references cited in the retrieved papers were pursued.
Instruments
The NART [ 16 ] is a word-reading test (50 short words of irregular pronunciation) widely used in research and clinical practice as an estimate of premorbid intellectual ability [ 17 ]. It has high construct validity as a measure of general intelligence and high levels of inter-rater and test-retest reliability.
The Wechsler Intelligence Scales provide a current estimation of IQ in the adult (WAIS) and child (WISC) versions. These scales are composed of comprehensive intellectual batteries that alternate verbal with visual-perceptual or construction (performance) tests as standard procedure. The full scale IQ test is broken down into subscales comprising verbal (information, digit span, vocabulary, arithmetic, comprehension, similarities) and performance scales (picture completion, picture arrangement, block design, object assembly and digit symbol).
It has been shown that NART and WAIS performance correlate strongly [ 18 ]. For instance, correlations between the NART IQ estimates and the WAIS and revised WAIS (WAIS-R) British version are in the range of 0.72 [ 9 ].
In both measures, for every age group, a norm of 100 corresponds to the average and 15 to the standard deviation. This permits direct comparison between individual scores with the normative data from the same age range.
Data synthesis
Meta-analyses were carried out using Stata V. 9.1 (Stata, College Station, TX, USA) using the user-contributed commands for meta-analyses: 'metan' [ 19 ], 'metainf' [ 20 ], 'metabias' [ 21 ] and 'metatrim' [ 22 ].
In order to estimate whether the IQ of those suffering from AN differs from the normative population, the data provided by each study were compared with the mean and SD of the normative population, known to be a mean of 100 and a standard deviation of 15 (Lezak et al. [ 9 ]). The differences in mean IQ scores were standardised by dividing the difference between the IQ of patients with an eating disorder and the norm group by the standard deviation of the norm group [ 23 ], which is equivalent to Cohen's or Glass's d. The standard error of each effect size was calculated from the study sample size.
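The standardisation against the known norm (mean 100, SD 15) amounts to the arithmetic below. Note that the standard-error line uses the common approximation for a one-sample standardised mean difference, SE(d) ≈ √(1/n + d²/(2n)); that formula is our assumption for illustration, not necessarily the exact one used by the authors:

```python
import math

NORM_MEAN, NORM_SD = 100.0, 15.0

def smd(sample_mean):
    """Standardised mean difference of a study mean against the IQ norm."""
    return (sample_mean - NORM_MEAN) / NORM_SD

def se_smd(d, n):
    """Common approximation for the SE of a one-sample SMD (an assumption here)."""
    return math.sqrt(1.0 / n + d * d / (2.0 * n))

def iq_units(d):
    """Back-transform an SMD into IQ points."""
    return d * NORM_SD
```

For example, a study mean of 110.8 gives d = 0.72, and back-transforming d = 0.72 recovers 10.8 IQ points, matching the correspondence between SMDs and IQ units used throughout the Results.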
Because the mean and the standard deviation of the norm group are regarded as known (based on a large sample size), a bias correction of the standard error is not necessary. The effect sizes and standard errors of the studies were then pooled using random-effect models, which allowed us to model possible study-to-study variation of effect sizes [ 24 ].
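A random-effects pool of study-level effects can be sketched with the DerSimonian-Laird estimator, the standard method behind random-effects options in meta-analysis software such as Stata's 'metan'. This is a generic illustration on invented numbers, not the authors' script:

```python
def dersimonian_laird(effects, ses):
    """Pool effect sizes with the DerSimonian-Laird random-effects model.

    Returns (pooled_effect, pooled_se, tau_squared, Q)."""
    k = len(effects)
    w = [1.0 / se ** 2 for se in ses]                      # fixed-effect weights
    sw = sum(w)
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sw
    Q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se_pooled = (1.0 / sum(w_star)) ** 0.5
    return pooled, se_pooled, tau2, Q
```

With equal standard errors the pooled estimate is simply the mean of the study effects; heterogeneous effects inflate tau² and widen the pooled confidence interval.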
Meta-analyses were preformed separately for each of the two instruments (NART and Wechsler's derived tests) using the user-contributed Stata command 'metan'. The standardised effect sizes were also back transformed into IQ score differences.
The results of the meta-analyses are reported as Forest plots. Forest plots display the results of the meta-analyses in graphical format (see Figures 1 and 2 ). These graphs represent the variation between the results of the various studies and an estimate of the overall effect size of all the studies together considering the data available for each study included in the meta-analysis [ 25 ]. Each line of the Forest plot represents an individual study/comparison. The position of the square in relation to the vertical axis represents the point estimate of the results of a particular study; specifically it shows how the effect size of the study varies from zero. The size of the square shows the weighed individual contribution of the study to the meta-analysis and it is proportional to the sample size of the study. The horizontal line through the square represents the 95% confidence interval (CI) of the effect size. The overall estimate from the meta-analysis and its CI are displayed at the bottom of the plot, represented as a diamond.
Homogeneity between the trials was analysed using Cochran's Q test. Due to the small sample sizes, an additional measure of heterogeneity or inconsistency, I² = (Q − df)/Q, was calculated [ 26 ]. I² describes the percentage of total variation across studies that is due to heterogeneity rather than chance and ranges between 0% (no inconsistency) and 100% (high heterogeneity), with values of 25%, 50% and 75% suggested as low, moderate and high heterogeneity [ 26 ].
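The inconsistency index reduces to one line of code. Plugging in the Q statistic and degrees of freedom reported in the Results for the NART analysis (Q = 43.7, df = 13) reproduces the ~70% heterogeneity figure, up to rounding of Q:

```python
def i_squared(Q, df):
    """Higgins' I^2: percentage of total variation attributable to heterogeneity."""
    if Q <= 0:
        return 0.0
    return max(0.0, (Q - df) / Q) * 100.0
```

By construction the index is floored at 0% when Q falls below its degrees of freedom.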
The analyses were repeated excluding one study at a time to investigate the influence of each individual study on the overall meta-analysis summary using Stata's user-written function 'metainf'.
Statistically significant results are more likely to be published than studies with non-significant results. Therefore, the presence of publication bias was assessed informally by visual inspections of funnel plots, which represent a plot of a study's precision (1/standard error) against effect size. The absence of studies in the right bottom corner (low precision and small effect sizes) of a funnel plot is usually taken as an indication of publication bias. The visual assessments were corroborated by its corresponding statistical analogue, Begg's adjusted rank test [ 27 ], and additionally by Egger's test [ 28 ], as implemented in 'metabias'.
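Egger's test formalises the funnel-plot inspection by regressing the standardised effect (effect/SE) on precision (1/SE); an intercept far from zero signals small-study asymmetry. A bare-bones ordinary-least-squares version for illustration (the study itself used Stata's 'metabias'):

```python
def egger_intercept(effects, ses):
    """Intercept of the Egger regression: (d/se) = a + b * (1/se)."""
    y = [d / s for d, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx  # intercept a; values far from 0 suggest asymmetry
```

With a constant true effect and no small-study bias, the intercept is essentially zero; if small, imprecise studies report inflated effects, the intercept moves away from zero.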
We also used the Duval and Tweedie [ 29 ] non-parametric 'trim and fill' method of accounting for publication bias in meta-analysis as implemented in Stata's user-written command 'metatrim' [ 30 ]. If the conclusion of the meta-analysis remains unchanged following adjustment for the publication bias using the trim and fill method, the results can be considered as robust, excluding publication bias. | Results
After careful assessment on a case-by-case basis, 30 studies (including 849 AN patients in total) were found using the above-mentioned criteria. All of the identified articles used either the NART (N = 14) or Wechsler's derived tests (WAIS, WISC, short forms of WAIS and HAWIK tests) (N = 16).
Sources of participants
All the selected studies administered an IQ test to people with current AN or those recovered from AN. In most studies, participants came from clinical populations (N = 28). In all, 14 studies involved only participants who were inpatients at the time of the study, 2 involved outpatients only, 8 involved both inpatients and outpatients, 4 included community samples in addition to inpatients and outpatients and 1 study stated that its sample came from a volunteer database only [ 31 ]. One study did not specify the source of its participants.
Table 1 shows details of the studies included in this review. All the estimated IQ data is summarised in the meta-analyses described below.
Estimated IQ in AN as measured by the NART
The NART was used in 14 studies, with a total sample size of 365 AN patients. The sample size of the trials ranged between 10 and 46. The mean IQ of the 14 studies ranged from 96.1 to 116.8. A meta-analysis using a random effects model revealed an estimate of the mean standardised mean difference (SMD) of 0.72 with a 95% confidence interval of 0.53 and 0.91.
The SMD of 0.72 means that patients with EDs score on average 10.8 units (95% CI 7.9 to 13.6) above the average IQ of the normative population. There was evidence of considerable heterogeneity across studies (Cochran's Q test: χ²(13) = 43.7, p < 0.001 and I² = 70.2%). This variance was particularly due to the data from Mathias and Kent [ 32 ]. In this study, the AN group obtained a lower IQ compared with norms, with an effect size of d = -0.26 (see Figure 1 ). To investigate the influence of this study on the overall meta-analysis, the meta-analysis was repeated excluding one study at a time to ensure that the results were not biased by a single outlier. Rerunning the analysis without the Mathias and Kent study [ 32 ] increased the SMD slightly from 0.72 to 0.79 (95% CI 0.68 to 0.90). There was no more evidence for heterogeneity between studies (Cochran's Q test: χ²(12) = 9.58, p = 0.65, I² = 0%). The influence of other studies on the overall estimate was minor (see Figure 3 ).
Publication bias
A funnel plot based on all 14 studies did not indicate publication bias either with or without Mathias and Kent [ 32 ], nor did formal tests (Begg's test z = 0.47, p = 0.64 and Egger's test t = 0.86, p = 0.40 and Begg's test z = 0.63, p = 0.53 and Egger's test t = 0.69, p = 0.50, respectively). The trim and fill method did not indicate missing studies.
In summary, there is no evidence for publication bias and the estimated effect size found from the random effects model is realistic. The overall conclusion from this analysis is that people with AN tend to consistently score higher than population norms on the NART across published studies.
IQ in AN as measured by Wechsler's tests
Wechsler's tests were used in 16 studies with a total sample size of 484 patients. The WAIS was used in five studies, the WAIS-R in six studies and the short form of the WAIS (WASI) in one study; two studies used both the WAIS and the WISC according to the age of participants and two studies used the HAWIK (the German version of the WISC). The sample size of the trials ranged between 10 and 98. The mean IQ of the 16 studies ranged from 96.5 to 117.6. Using a random effects meta-analysis, the estimate of the pooled SMD was a small to medium effect size of d = 0.40 (95% CI 0.18 to 0.61) across all studies. An SMD of 0.40 means that patients with EDs score on average 5.9 units (95% CI 2.7 to 9.2) above the average IQ of the normative population.
The meta-analysis (see Figure 2 ) revealed a high degree of heterogeneity across the studies (χ²(15) = 81.2, p < 0.001), with an index of inconsistency of 81.5%; parameters that justified the use of a random effects model. The heterogeneity was not due to a single study: when the meta-analysis was re-estimated omitting each study in turn, no single study had a significant influence on the results. About half of the included studies showed a moderate to high effect size, whereas the remainder showed little or no mean effect, which may explain this heterogeneity. This observation will be discussed later in this section.
The analyses were repeated excluding one study at a time to investigate the influence of each individual study on the overall meta-analysis summary.
The influence of individual studies on the estimated overall effect size (removing each study and recalculating overall effect) was minor. Rerunning the meta-analysis excluding one study at a time resulted only in minor differences of the estimated SMDs (range of d = 0.35 to 0.44; see Figure 4 ).
Publication bias
From the Forest plot (Figure 2 ), it could be assumed that there are two types of studies: those with an ES around 0 (no differences between the IQ of those with AN and norms) and those with an ES around 0.75 (moderate to high differences between populations). This, alongside the fact that the funnel plot based on all 16 studies is slightly asymmetrical, would suggest some publication bias towards studies with larger effect sizes. This was confirmed by Egger's test (Egger's test t = 2.11, p = 0.05, Begg's test z = 1.09, p = 0.27). However, the trim and fill method did not estimate any missing study, which suggests that the results are robust (Figure 5 ). Also, studies with an ES around 0 or 0.75 seem not to be different in terms of the year of the study, the number, age or BMI of participants, comorbidity, or whether participants were in/outpatients. Therefore, the results suggest the absence of an identifiable publication bias.
IQ and BMI
BMI data available from the studies included in this review were correlated with the estimated IQ obtained from studies in order to understand the role of nutritional status in the intellectual functioning of people with AN.
Firstly, a correlation analysis was run between BMI and IQ with all available data (from the NART or Wechsler's scales). The results did not show a correlation between these two variables. However, when separating studies according to the instrument used to estimate IQ (Wechsler's scales or NART), a trend towards a significant moderate correlation was found in those studies using the NART (Spearman coefficient = 0.56, p = 0.07), meaning that, taking all these studies as a group, lower premorbid IQ was associated with lower BMI. No correlation was found between IQ estimated by Wechsler's scales and BMI, suggesting that current IQ does not fluctuate with changes in BMI.
IQ and recovery
We were able to identify only four studies including women with past history of AN now recovered (N = 64, see Table 2 ). Three of these studies used the NART and the revised NART (NART-R) and only one study used the WAIS. The small number of studies did not allow a formal random effect meta-analysis. However, in all four studies, patients scored significantly more than 100, as the 95% confidence intervals show (Table 2 ).
A descriptive comparison of these results suggests that those recovered from AN score higher on IQ measures (mean IQ ranged from 109.3 to 118.1) than groups with current AN (mean IQ in current AN ranged from 96.1 to 116.8 and 96.5 to 117.6 using the NART and WAIS, respectively) and norms. Along the same lines as the conclusion above, this very preliminary result may indicate that those who recover tend to have higher premorbid IQ. More research in recovered samples is needed in order to clarify this observation.
Discussion
The aim of this study was to provide a comprehensive systematic review of the literature, including a meta-analysis of estimated IQ data in AN compared with the average IQ obtained from normative data. Studies included in this review were selected on the basis of the quality of the data and the validity of the instruments used to estimate intellectual ability. Thus, the NART and Wechsler's scales were selected as the most common and reliable measures.
It was established that most people with AN have higher average IQ scores compared with the average of normative data. Specifically, studies using the NART consistently showed a higher IQ in AN patients in comparison with established norms. However, those using Wechsler's scales obtained more heterogeneous results, with half of the studies showing moderate to high effect sizes (average ES = 0.75) and half of them showing low or negligible effect sizes (average ES around 0). Examination of the characteristics of the various studies (for example, age at testing, BMI, reported comorbid conditions, and so on) was not able to explain this heterogeneity. Overall, these results show that people with AN have an IQ at least as high as the norm, which marks a difference from other psychiatric conditions.
There are two points worth highlighting in this discussion of the heterogeneity of results. Although the WAIS and NART are highly correlated, the NART estimates premorbid IQ whereas Wechsler's scales measure current ability. Predictions from NART scores about performance, verbal and total IQ are based predominantly on verbal abilities, which are thought to be generally preserved in AN [ 1 , 7 ], whereas Wechsler's scales assess mixed verbal, performance and visual spatial abilities. The latter have been reported to be more impaired in people with acute AN (see, for example, [ 7 ]). It is possible that those studies using Wechsler's scales with lower effect sizes are demonstrating differences in the performance of the samples that are not evident in full scale IQ data (for example, differences in performance versus verbal IQ, or lower scores in scales involving visual spatial abilities). We are not able to clarify this point with this review, as most studies provided full scale IQ only, as previously mentioned. Also, the Wechsler's scales used in the studies included in this review comprise different instruments (for example, WAIS, WISC, short versions, and so on) and there was no consistent use of one single instrument.
Both premorbid and current IQ yield valid and interesting information, and the measure selected will depend on the nature and objectives of future studies. That said, simple and reliable measures such as the NART seem to provide more consistent data on IQ, without the bias that anomalies in cognitive performance (typically present in the acute phase of AN) may introduce into test performance. Also, from the available literature, it is still hard to draw firm conclusions regarding performance and verbal IQ. Clear reporting in future studies (for example, separating verbal from performance IQ) will help to address this point.
A coordinated approach and consensus on IQ measures in the field will make data more comparable and will provide better insight into the relationship between illness severity and the neuropsychological profile of AN. For example, meta-analyses and systematic reviews conducted in schizophrenia allowed researchers to identify that, before the onset of psychosis, IQ scores are approximately 0.5 standard deviations below those of healthy comparison subjects, and low IQ could be considered one of the risk factors for schizophrenia [ 33 , 34 ].
In order to explore the question of the potential contribution of IQ as a predictor of recovery, we attempted to review studies on recovered AN populations. Only four studies [ 35 - 38 ] reported IQ data on women with a past history of AN. We did not have sufficient power to draw strong conclusions from the available studies. However, the available data suggested that people who had recovered from the illness had a higher IQ in comparison to both norms and the studies of acute AN groups included in this paper. There is a growing literature suggesting that IQ level can predict treatment outcome in psychiatric conditions such as schizophrenia and autism (see, for example, [ 39 ]). We predict that this line of research will also be highly informative for the ED field. For example, early studies reported that higher IQ predicts completion of psychological treatment [ 14 ]. To our knowledge, none of the treatment studies conducted in EDs has looked at drop-out data in the context of IQ. In general, it is clear that treatment studies in AN have high drop-out rates [ 40 ]. From the results of this study, it seems likely that successfully treated patients with AN have higher premorbid IQ, which would support the hypothesis that the higher the IQ, the better the treatment prognosis. However, more studies will be needed to confirm this prediction and to identify which other factors may be involved (for example, environmental, physiological, and so on).
We believe that this systematic appraisal of the literature was helpful in highlighting a trend suggesting that people with AN have average or higher than average IQ (in both NART and WAIS studies). This appraisal of current knowledge will help researchers plan future studies and formulate important questions, such as: do patients with higher IQ have a better prognosis? How could high IQ be used effectively in psychological treatment? Is IQ decline evident in AN? While the most obvious explanation for any decline would be malnutrition, none of the studies so far has used premorbid and current IQ measures simultaneously.
Finally, this study helped us to reflect on the fact that comparison clinical groups should be carefully selected in future studies on AN, because IQ is an important contributing factor in performance on social cognition and cognitive tasks, whether these use self-report or experimental instruments.
This review has some limitations. Firstly, one limitation is the retrospective nature of the data. Secondly, results in studies using Wechsler's scales showed high heterogeneity, which makes it difficult to draw strong conclusions from these scales. We examined the influence of the use of different versions of the test as well as different age groups and other clinical characteristics of the samples. Despite efforts to clarify the reasons behind such different results, we were not able to identify a consistent factor across studies. We have also confirmed a lack of literature on other ED diagnostic categories, such as BN, EDNOS or recovered ED groups, in the context of IQ. This precludes generalising these results to other ED groups and, more importantly, the examination of potential relationships between IQ, recovery and prognosis. Thirdly, it is important to mention that most of the reviewed studies involved clinical participants (inpatients or outpatients) or registered volunteers. One possibility is that these samples are highly selected: they are not population-based samples, but rather people who seek treatment in clinics or are willing to participate in research. This may be associated with higher education and IQ performance, but these questions are beyond the scope of this study. Finally, as the main outcome of most of the studies included in this review was not IQ performance, it is likely that a publication bias exists; however, it is not possible to address it in this study.
From this review some recommendations arise. The majority of accessed studies used the NART or Wechsler's scales; it will be useful if future studies continue to use these measures to make future data comparable. It will also be desirable for all studies looking at neuropsychological factors to include an IQ measure, given the effect that IQ may have on neuropsychological task performance. Finally, since neuropsychological research is increasing, it will be helpful to report covariate analyses in relation to IQ; for example, with treatment outcomes, symptom severity and recovery.
Conclusions
This research highlights an important and underexamined factor in AN. The main conclusion is that IQ in AN is at least as high as the average IQ found in the normative population, and most studies show that this group has a high average IQ. There is a preliminary but important observation about the recovered population: it may represent a group with higher IQ than both norms and current AN groups, opening the question of the influence of this factor on treatment and recovery. We think that exploring IQ in the context of treatment and recovery may provide useful information for clinicians and researchers.
Background
It has been hypothesised that people with anorexia nervosa have a higher intelligence quotient (IQ) level than the general population. The purpose of this review was to systematically appraise the research into reported IQ levels in people with anorexia nervosa.
Methods
A search using the terms intelligence quotient, IQ, intelligence, cognition, eating disorders and anorexia was conducted in electronic databases only.
Results
In all, 30 peer-reviewed studies written in English that used well established measures of intelligence quotient (the National Adult Reading Test and Wechsler Intelligence Scales) were identified. This review established that people with anorexia nervosa score 10.8 units and 5.9 units above the average intelligence quotient of the normative population on the National Adult Reading Test and Wechsler Intelligence Scales, respectively. An association was found between Body Mass Index and intelligence quotient, as measured by the National Adult Reading Test.
Conclusions
More studies including other eating disorder categories and recovered people are needed to explore important questions regarding the role of the intelligence quotient in treatment response.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
CL contributed to the design of the study, performed the main search, data extraction and data synthesis, supported general analyses and interpretation of data, and contributed to drafting the manuscript. DS contributed to the design of the study, performed the statistical analyses, drafted the results section and critically reviewed the manuscript. KT contributed to the design of the study, did an independent search of the papers, supervised the interpretation of data and drafted the main part of the manuscript. All authors approved the final manuscript.
Acknowledgements
This work is part of the ARIADNE (Applied Research into Anorexia Nervosa and Not Otherwise Specified Eating Disorders) programme, funded by a Department of Health NIHR Programme Grant for Applied Research (reference number RP-PG-0606-1043). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. The authors would like to thank Nils Köthke for his assistance in the preparation of an early draft of the manuscript.
Citation: Ann Gen Psychiatry. 2010 Dec 23; 9:40. License: CC BY.
PMC3016383 (PMID 21129226)
Background
Australia has experienced a steady increase in the proportion of older adults, with projections that 22% of the population will be aged over 60 by 2025 [ 1 , 2 ]. This ageing population is heavier than a generation ago, with in excess of 60% of older adults now classified as overweight or obese [ 3 ]. Overweight and obesity levels are increasing at a rapid rate worldwide while other non-communicable diseases (NCDs) such as heart disease, type 2 diabetes and cancer are also on the rise [ 4 , 5 ]. It has been estimated that in developed countries, the cost of obesity equates with 0.7 to 2.8% of the total yearly health expenditure [ 6 ]. For example, the annual cost of obesity is estimated to be $21 billion in Australia and $2.1 billion in the state of Western Australia [ 7 ]. The increase in the prevalence of overweight and obesity is of particular concern, in view of the strong association between excess body weight and chronic health problems. It is known that as age increases physical activity declines [ 8 ], with 46% of Australians aged 60 to 75 years being insufficiently active and 33% being completely sedentary [ 1 , 9 ]. Over the years, the physical activity levels of older Australians have reduced [ 9 , 10 ] while their food consumption has increased [ 11 ]. This follows the worldwide trend in diet which is shifting towards an increased consumption of saturated fats, with the level of fat consumed exceeding the recommended proportion of daily energy intake [ 12 - 14 ].
The benefits of regular physical activity are well recognised [ 5 , 15 , 16 , 18 - 21 ], regardless of body mass index (BMI) [ 22 ]. The greatest health improvements appear to occur when a person moves from being sedentary (<100 mins/week) or involved in light activity (1-2.9 METS) to moderate-intensity activity (>3 METS) [ 23 ]. Low-intensity aerobic exercise is typically recommended for older adults as it can be sustained for longer, results in less tiredness and injury, and therefore may result in greater energy expenditure than high-intensity exercise [ 24 ]. The Australian Government has recently developed physical activity guidelines for older Australians to help improve their health and well being. The guidelines recommend that moderate-intensity physical activity be performed for a minimum of 30 minutes on most, preferably all, days of the week [ 25 , 26 ]. In addition, eating adequate amounts of fruit and vegetables can provide essential nutrients for healthy tissue, bolster the immune system and protect against chronic diseases [ 5 , 27 - 29 ]. The Australian Guide to Healthy Eating [ 30 ] recommends four to seven 75 gram serves of vegetables and two to three 150 gram serves of fruit for adults aged over 60 years. In addition, both dietary fat and refined carbohydrate should be reduced to achieve the appropriate balance in macronutrient intake necessary for an acceptable body weight [ 31 ].
Maintaining adequate levels of physical activity [ 32 ] and sustaining an appropriate diet [ 33 ] are important public health goals to address obesity and to minimise the adverse physiological changes [ 34 ] associated with ageing. However, there remains a need for systematic assessment of dissemination strategies to improve health outcomes [ 35 , 36 ], recognising that older people are a heterogeneous group that would benefit from interventions to suit their personal needs and circumstances [ 37 ]. The design of interventions needs to be rigorous [ 38 ] with large samples and longer time frames [ 39 , 40 ]. Additionally, home-based nutrition and physical activity programs for older adults may reduce future costs to health care [ 37 ]. This paper describes the protocol of a randomised controlled trial that aims to improve the physical activity and nutrition behaviours of insufficiently active people aged 60 to 70 years.
Methods/Design
Study design
This project will consist of the development, implementation and evaluation of a physical activity and nutrition intervention. The program is designed to increase physical activity levels, enhance nutritional intake and assist in the management of body weight of insufficiently active 60-70 year-olds. It will be conducted in metropolitan Perth, the capital of the State of Western Australia. The intervention and evaluation design has been based on a large pilot project that produced encouraging results with respect to adherence and behaviour change [ 41 ].
The study will be a 12-month randomised controlled trial (RCT) (see Figure 1). Data will be collected from participants over three time points: pre-intervention, immediately post-intervention and 6 months post-study. The project protocol has been approved by the Curtin University Human Research Ethics Committee (approval number HR 186/2008).
Recruitment
A stratified random sampling procedure will be adopted to recruit participants from 60 suburbs (neighbourhoods) within the Perth metropolitan area. Selection criteria for these suburbs are: (a) comprised of at least 14% 60 year-olds and above, reflecting the State average [ 42 ]; (b) containing at least 120 adults aged 60-70, to ensure a sample size sufficient for the matching of telephone numbers to the Perth Electronic White Pages [ 43 ]; and (c) are of low or medium socio-economic status (SES) based on the Socio-Economic Index for Area (SEIFA) [ 44 ], a value derived from income, education level, employment status and skill level. Suburbs will be arbitrarily matched for low and medium levels of socio-economic status. The suburbs will then be assigned to either the intervention group or the control group using a table of random numbers. The sample size will be n = 300 for each of the intervention and control groups at baseline. This strategy is based on a previous RCT conducted by the research group that successfully recruited a similar study sample [ 45 ].
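As a minimal sketch of the allocation step described above (suburbs matched on socio-economic status, then randomly assigned to study arms), assuming hypothetical suburb names, a hypothetical seed, and a hypothetical helper `allocate_suburbs`:

```python
# Stratified randomisation of suburbs to study arms: suburbs are grouped
# by socio-economic status (SES) stratum and split at random within each
# stratum. Names, seed and function name are illustrative only.
import random

def allocate_suburbs(suburbs_by_ses, seed=2009):
    """suburbs_by_ses: dict mapping SES stratum -> list of suburb names.
    Returns (intervention, control) lists, balanced within each stratum.
    With an odd-sized stratum, the control arm receives the extra suburb."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    intervention, control = [], []
    for suburbs in suburbs_by_ses.values():
        shuffled = list(suburbs)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        intervention.extend(shuffled[:half])
        control.extend(shuffled[half:])
    return intervention, control

strata = {"low": ["Suburb A", "Suburb B", "Suburb C", "Suburb D"],
          "medium": ["Suburb E", "Suburb F", "Suburb G", "Suburb H"]}
intervention, control = allocate_suburbs(strata)
print(intervention, control)
```

In practice such allocation is usually done from a pre-generated table of random numbers, as the protocol states; the sketch only shows that balance within each SES stratum is preserved.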
Using the Federal Electoral Roll (FER), 7200 potential participants in total will be randomly drawn from the 60 Perth suburbs, with the aim to recruit 15 participants per suburb. Participants need to be: (a) "insufficiently active", i.e. not achieving 30 minutes of moderate-intensity physical activity on at least 5 days per week [ 25 ]; (b) aged 60 to 70 years; (c) healthy to the extent that participation in a low-stress physical activity and nutrition program would not place them at risk; (d) not to have taken part in any research studies that involve exercise or nutrition within the last five years; and (e) not to be on any special diet.
Procedure
The Survey Research Centre at Curtin University will match telephone numbers of the 7200 names from the FER to the Perth Electronic White Pages prior to making the initial contact. It is anticipated that the matching will yield an 80% success rate [ 45 ]. During the initial contact, the purpose of the study will be explained and the caller will determine whether the individual meets the selection criteria. Participants who give verbal consent will be assigned to an intervention or control group. A self-completion questionnaire will then be sent to them, along with an explanatory cover letter and self-addressed envelope for returning the questionnaire. Participants will be advised to complete the Physical Activity Readiness Questionnaire and to furnish a medical clearance if deemed necessary before commencing the program.
Intervention implementation strategies
1. Staff training
Senior university Health Science students with expertise in physical activity, nutrition and health promotion will be recruited as "Guides". Potential Guides will undergo screening for suitability and intensive training; receive a comprehensive Guide's manual of dietary and physical activity guidelines; receive regular support via email and phone contact from the project coordinator; and be awarded a certificate upon completion of the training. The aim is for Guides to follow the successful New Zealand 'Green Prescription Program' [ 21 , 46 , 47 ]. They will be responsible for coordinating regular group meetings and phone/email contacts with participants, and will be accessible for information sharing and answering questions. They will be supervised by an accredited Dietitian and a Human Movement Specialist.
2. Provision of resources and instructional materials to participants
The intervention group participants will receive a booklet designed to motivate them to improve their levels of physical activity and their nutrition, through goal setting. The booklet which has been updated from a pilot project [ 41 , 48 , 49 ]; will be supported by additional written materials including an interactive calendar and exercise chart. The intervention group will also be provided with a resistance band to perform the exercises described in the program, and a pedometer to monitor walking and to record the number of daily steps. A bi-monthly newsletter will reinforce the key messages.
3. Follow-up and support
The intervention group will be allocated Guides who will conduct suburban-based group meetings (available to those who can attend) and monitor the progress of their group participants. Each Guide will supervise and support participants in one or two suburbs (n = 10 to 20). They will contact their participants via phone (or email if preferred). The Guides will make three pre-arranged motivational phone calls at 4, 12 and 20 weeks to give advice and individualised consultation, as well as to monitor attainment of goals and provide encouragement, support and feedback [ 46 , 47 , 50 ]. Guides will maintain a detailed log book of the contacts made with their allocated group of participants. To increase the likelihood of sustainability, the National Heart Foundation will provide 'Heartline', a website and 1-800 telephone number for further information. All resources are designed to support participants' adoption of health-enhancing behaviours, with the opportunity to access information and have questions related to physical activity and nutrition answered.
Control group
Requests to complete the self-administered questionnaires will be the only contact the control group will receive from the project staff.
Outcome measures
A self-administered questionnaire will be completed by both groups of participants at baseline (pre-intervention), 6 months (immediately post-intervention) and 12 months (6 months post-intervention). The questionnaire comprises previously validated instruments [ 20 ], and will undergo further reliability testing prior to its initial use at baseline.
Physical activity will be measured using The International Physical Activity Questionnaire (IPAQ) [ 51 ]. IPAQ has undergone extensive reliability and validity testing in 12 countries. The instrument has acceptable measurement properties for use in many settings and is specifically designed for population-based prevalence studies of physical activity. A strength exercise question based on recommendations from the American Heart Association [ 32 ] will be included to ensure the main components of the home-based exercise program are also measured.
Dietary intake will be measured using a modified version of the Fat and Fibre Barometer [ 52 ]. A question from the New South Wales Government report on soft drinks [ 53 ] will be appended to measure frequency of soft drink consumption. Validated questions will also confirm participants' stages of change regarding fruit and vegetable consumption [ 54 ]. A question from the Western Australian Physical Activity Taskforce 2005 State-wide adult physical activity survey will be used to assess confidence to participate in at least 30 minutes of physical activity on five or more days of the week [ 55 ].
General physical and mental health will be measured by The Medical Outcomes Study Short-Form Health Survey (SF-8) [ 56 ]. SF-8 is a standard generic international instrument to assess health status and is comprised of two summary scales - the physical component summary (PCS) score and the mental component summary (MCS) score.
Social support will be measured by the Dukes Social Support Scale (DSSI) [ 31 ]. The Scale is a subjective evaluation of the type and number of social interactions and has been validated for use with older people. The instrument contains two sub-scales that measure social interaction and satisfaction and has good internal consistency (Cronbach's alpha 0.77) and test-retest scores (0.70 to 0.81)[ 31 ].
A single item from the Social Support for Physical Activity questionnaire [ 57 ] (SSPA) will also be used to measure perceived levels of social support for physical activity provided by friends and family. In addition, a single question will be asked about loneliness [ 58 ] while a standard validated question [ 59 ] will be used to confirm the participants' smoking status.
Demographics will be assessed by questions on gender, age, educational level, country of birth, marital status, socioeconomic status, financial status and co-morbidities. Anthropometric measures will include self-reported height and weight, waist and hip girth. A recent study has confirmed that self-assessment measures are suitable for such studies when a correction factor is applied [ 60 ].
A sub-sample of 100 participants will be selected from the intervention group. Following self-report of their height, weight, waist and hip girths the research team will measure each of these variables. Calculations of differences between self-reported and measured data will be undertaken to identify a correction factor based on the methodology of Dhaliwal et al. [ 60 ].
Process evaluation
A brief feedback sheet will be mailed to all participants to evaluate the booklet [ 48 ]. It invites the participants to rate the booklet in terms of how interesting it is to read, ease of understanding, usefulness of advice, suitability for the age group, and relevance of messages. Participants will also be asked to comment on specific features they particularly like or dislike, as well as suggestions for further improvement [ 61 ]. The calendar, exercise sheet and other program resources will also be evaluated via a similar, previously utilised feedback format [ 48 ].
Sample size
This is an RCT with outcomes measured at three time points. Power calculations are based on a linear mixed model and assume 70% complete data across the three assessments due to attrition and non-response. In the power analyses, effect sizes of interest are associated with the correlation coefficient (or semi-partial correlation). For the mixed regression analyses of physical activity times and metabolic equivalent tasks, a sample size of n = 600 [150 per gender by intervention or control group] will provide sufficient power (80%) to detect a medium effect size (accounting for approximately 16% of the variance) for gender by age interactions at a single time point without covariate adjustment. Power to detect these same interactions in the trends (based on 3 assessments) is sufficient to detect a smaller effect, accounting for approximately 11% of the variability.
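As a rough illustration of the effect sizes quoted above (an effect "accounting for approximately 16% of the variance" corresponds to a correlation of about r = 0.40, and 11% to about r = 0.33), the power to detect a correlation can be approximated with the Fisher z transformation. This is a simple normal approximation with assumed cell sizes, not the study's linear mixed-model power analysis:

```python
# Approximate power to detect a correlation r at two-sided alpha = 0.05,
# using the Fisher z transformation. A rough stand-in for the study's
# mixed-model power calculation, not a reproduction of it.
import math
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    z = math.atanh(r)            # Fisher z of the target correlation
    se = 1.0 / math.sqrt(n - 3)  # approximate SE of Fisher z
    return norm.sf(norm.ppf(1 - alpha / 2) - z / se)

# 150 per gender-by-arm cell with 70% complete data leaves roughly 105 cases.
print(f"r = 0.40, n = 105: power ~ {correlation_power(0.40, 105):.2f}")
print(f"r = 0.33, n = 105: power ~ {correlation_power(0.33, 105):.2f}")
```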
Discussion
Results from the PANS study are due in mid-2011. As the ageing population increases, there is an urgent need to develop sound interventions capable of making a positive change to health status, with a consequent reduction in pressure and cost to the health care system. This physical activity and nutrition program offers a unique approach compared to other such programs for older people previously conducted in Australia, for the following reasons.
The target group will be selected from younger seniors (60-70 years), and from low and medium SES groups rather than high SES groups. Samples will be randomly selected and actively recruited through the Australian FER, rather than being community volunteers recruited through advertising. The intervention will provide valuable data on the effectiveness of strategies to improve dietary intake and increase physical activity in the community. The project has been designed to evaluate the strength of combining both physical activity and nutrition in order to improve the health of seniors. The evaluation data will be collected from participants in their own communities and not in a research centre, making the program relevant to the general population and not limited to a clinical group or setting. The project will provide guidelines for appropriate sample recruitment, and for the development, implementation and evaluation of a minimal, home-based tailored physical activity and nutrition intervention program. The information gathered will be useful for minimising barriers to participation in physical activity and nutrition programs. The outcomes of the project will have significant potential benefits to the Australian community via increased physical activity and better nutrition to reduce chronic disease (and associated costs), as well as enhanced mental health and improved quality of life.
Background
Along with reduced levels of physical activity, older Australians' mean energy consumption has increased, and over 60% of older Australians are now considered overweight or obese. This study aims to determine whether a low-cost, accessible physical activity and nutrition program can improve the levels of physical activity and the diet of insufficiently active 60-70 year-olds.
Methods/Design
This 12-month home-based randomised controlled trial (RCT) will consist of a nutrition and physical activity intervention for insufficiently active people aged 60 to 70 years from low to medium socio-economic areas. Six-hundred participants will be recruited from the Australian Federal Electoral Role and randomly assigned to the intervention (n = 300) and control (n = 300) groups. The study is based on the Social Cognitive Theory and Precede-Proceed Model, incorporating voluntary cooperation and self-efficacy. The intervention includes a specially designed booklet that provides participants with information and encourages dietary and physical activity goal setting. The booklet will be supported by an exercise chart, calendar, bi-monthly newsletters, resistance bands and pedometers, along with phone and email contact. Data will be collected over three time points: pre-intervention, immediately post-intervention and 6-months post-study.
Discussion
This trial will provide valuable information for community-based strategies to improve older adults' physical activity and dietary intake. The project will provide guidelines for appropriate sample recruitment, and the development, implementation and evaluation of a minimal intervention program, as well as information on minimising barriers to participation in similar programs.
Trial Registration
Australian and New Zealand Clinical Trials Registry ACTRN12609000735257
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LB coordinated the PANS program and drafted the manuscript. JJ, PH, AL, LB, TS, DK, AH and AA designed the study, and revised the manuscript. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/751/prepub
Acknowledgements and funding
This study is funded by a three-year National Health and Medical Research Council (NHMRC) grant, project number 533501. The authors are grateful to Chantel Kent, Sharryn Batt and the Group Guides for their contributions to the project.
BMC Public Health. 2010 Dec 6; 10:751
Background
Physicians have an essential role in the sickness absence process; however, there is very little scientific knowledge about it. In recent years more studies have been published, but their number remains small [ 1 - 6 ]. Possible negative consequences of being sickness absent have recently been highlighted [ 7 , 8 ], as has the importance of issuing sickness certificates with the same caution as other recommendations physicians make to their patients, in order to avoid negative consequences [ 1 , 9 , 10 ].
In most countries there are two prerequisites for entitlement to sickness benefits: one must have a disease or an injury, and this disease or injury must have affected one's work capacity [ 11 ]. In Sweden, as in most countries, all physicians may issue sickness certificates. However, writing the certificate is only one of the tasks that may be involved in a consultation where sickness-certification is considered. These tasks can be summarised as [ 1 ]:
• Determine if the patient has a disease or an injury.
• Ascertain whether the disease or injury impairs the patient's functional ability to the extent that work capacity is also impaired in relation to her or his work demands.
• Consider, together with the patient, the possible advantages and disadvantages of being sickness absent.
• Determine the duration and grade (full or part time) of sick leave and the medical investigations, treatments, or other measures needed during the sick-leave period.
• Determine possible needs for contact with other specialists, the social insurance office, occupational health services, the employer, or other stakeholders and, if so, establish adequate communication.
• Issue a certificate that provides sufficient information for the employer or social insurance officer to decide whether the patient is entitled to sickness benefits and in need of further return-to-work measures.
• Document assessments and actions taken.
In a systematic literature review of studies of sickness-certification practice, scientific evidence could be established for only two findings: that sickness certificates were of low quality, and that physicians found sickness-certification problematic [ 1 , 11 ]. The Swedish Council on Technology Assessment in Health Care (SBU) concluded that the number of studies was low and that sample sizes were small and often heavily biased. Moreover, SBU stated that larger studies were warranted to be able to generalize results and to obtain more detailed information as a basis for interventions.
In this research area it is especially crucial to include the physicians' own perspective on the sickness-certification task and its related problems, in order to tailor interventions to their experiences and wishes. Moreover, most studies in this area have included only general practitioners (GPs) [ 2 , 3 , 11 - 27 ]. Some studies have compared the frequency and severity of problems among GPs with those of other physicians [ 1 , 5 , 6 , 28 - 30 ]. Nevertheless, knowledge about sickness-certification practice among physicians other than GPs is still scarce.
Therefore, in a previous questionnaire study, we also included physicians from other clinical settings [ 6 ]. One finding was that consultations involving sickness-certification were equally frequent, or even more frequent, among other specialists and clinics than in primary health care (PHC)/among GPs [ 6 , 29 ]. This was unexpected, and we have consequently pursued this line of research in a larger study including all physicians in Sweden.
The aim was to gain detailed knowledge about physicians' work with sickness-certification regarding frequency of different types of situations and severity of related problems, in general as well as in specific clinical settings. | Methods
A cross-sectional questionnaire study was conducted. The questionnaire was administered to all physicians in Sweden, a country with 9.3 million inhabitants. The study population comprised the 36,898 physicians of all ages who lived and mainly worked in Sweden in October 2008. They were identified using a register of all physicians held by Cegedim AB, the company with the most complete such register in Sweden.
A comprehensive questionnaire of 163 questions about various aspects of sickness-certification practice and related work issues was developed, based on a previous questionnaire, research, and contacts with many different clinicians and researchers in the area [ 29 , 31 ]. The questionnaire was tested in a pilot study with a random sample of 100 physicians (62.0% response rate).
The questionnaire was distributed by mail to the participants' home addresses in order to avoid interaction with colleagues in completing it. Three reminders were sent to non-responders. Distribution, registration, scanning of questionnaires, and basic data management were administered by Statistics Sweden. The response rate was 60.6%. There was no large bias in the drop-out; as expected, the response rate was somewhat higher among women and older physicians (Table 1 ).
In this study we included the 14,210 participants who were below the age of 65 and had consultations concerning sickness-certification at least a few times a year.
Data sources
Information about age, sex, and being a board certified specialist was provided by the National Board of Health and Welfare via Cegedim AB. All other data were provided by the questionnaire. Information about type of clinical setting the participant mainly worked in is presented for 18 specified clinics, "other clinics", and "administration/research/education" (in tables and figures called "Administration"), respectively.
Frequency of consultations, frequency of related problems, and severity of experienced problems, respectively, were measured by answers to three questionnaire items:
- Frequency of consultations concerning sickness-certification was measured by the response alternatives "more than 20 times a week", "6-20 times a week", "1-5 times a week", "about once a month", "a few times a year", and "never or almost never". The two response alternatives "about once a month" and "a few times a year" were combined into "less than once a week".
- The response alternatives regarding frequency of problems in handling sickness-certification consultations were; "more than 10 times a week", "6-10 times a week", "1-5 times a week", "about once a month", "a few times a year", and "never or almost never". The alternatives "about once a month" and "a few times a year" were combined to "less than once a week".
- Regarding severity of experienced problems , the response alternatives were "very", "fairly", "somewhat", and "not at all" to a generic question "How problematic do you generally find it to handle sickness-certification of patients?" and to 16 specified problems.
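Combining the two least frequent response alternatives into "less than once a week", as described for the frequency items above, amounts to a plain category recode. The following is an illustrative Python sketch; the function name and input format are assumptions, and the mapping simply restates the item descriptions:

```python
# Map each original response alternative to its analysis category; the two
# least frequent alternatives collapse into "less than once a week".
COLLAPSE = {
    "more than 20 times a week": "more than 20 times a week",
    "6-20 times a week": "6-20 times a week",
    "1-5 times a week": "1-5 times a week",
    "about once a month": "less than once a week",
    "a few times a year": "less than once a week",
    "never or almost never": "never or almost never",
}

def recode(answers):
    """Recode a list of raw questionnaire answers into analysis categories."""
    return [COLLAPSE[a] for a in answers]

print(recode(["about once a month", "1-5 times a week"]))
# ['less than once a week', '1-5 times a week']
```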
Statistics
Results from descriptive statistics of consultation frequencies, frequencies of related problems, the association between them, and the severity of problematic tasks were stratified by type of clinic. Variations between age groups and between men and women turned out to be small; the results are therefore presented for all participants together.
The odds ratios (OR) of finding situations very or fairly problematic were estimated with 95% confidence intervals (CI), adjusted for specialist/non-specialist status. The reference group was physicians working in internal medicine clinics, chosen because of its large size and moderate level of problems. ORs were calculated for items that at least half of the responders found very or fairly problematic. The analyses were performed using SPSS 17.0.
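For a single unadjusted comparison, an odds ratio with a Wald 95% CI can be computed from a 2x2 table as below. This Python sketch uses made-up counts and is illustrative only; the study's ORs were estimated in SPSS, adjusted for specialist status.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    exposed group: a with the outcome, b without;
    reference group: c with the outcome, d without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20/100 physicians in one clinic vs 10/100 in the
# reference clinic report frequent problems (numbers are illustrative only).
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.25 0.99 5.09
```

A CI that includes 1.0, as here, means the difference would not be statistically significant at the 5% level.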
The study was approved by the Regional Ethical Review Board of Stockholm. | Results
Of all the responding physicians below the age of 65, 67.4% had consultations concerning sickness-certification at least a few times a year. That rate was higher for non-specialists than for specialists: 73.6% and 65.1%, respectively (Table 1 ). In a number of clinical settings, nearly all physicians had such consultations at least a few times a year: PHC (97.4%), infectious diseases (97.0%), occupational health service (96.4%), oncology (96.0%), and rheumatology (99.0%).
From here on, only the 14,210 physicians <65 years of age who had sickness-certification consultations at least a few times a year were included in the analyses. More than one third of the physicians in twelve types of clinics had such consultations more than five times a week (Figure 1 ). The highest proportions of physicians with such consultations more than five times a week were found in orthopaedic clinics and the occupational health service. Notably, physicians in pain management, oncology, psychiatry, rehabilitation, rheumatology, and neurology (together with the former two, eight different clinics) also had much higher rates of sickness-certification consultations than those in PHC.
Frequency of sickness-certification consultations as well as of related problems varied substantially with type of clinic. Characteristics of the clinics regarding mean age, sex, and proportion of board certified specialists varied as well (Table 2 ).
The association between the proportion of physicians in different clinics having sickness-certification consultations more than five times a week and the proportion having problems with these work tasks at least once a week is illustrated in Figure 2 .
In most clinics, the proportion of physicians having many such consultations was positively associated with the proportion finding them problematic. However, physicians in oncology clinics and in PHC diverged: a high rate of the physicians in oncology clinics had sickness-certification consultations but a relatively low rate found them problematic, while the contrary was found for PHC.
The physicians were asked to assess how problematic they experienced 16 different situations (Table 3 ). The task most physicians (59.9%) rated as fairly or very problematic was to assess the patient's work capacity, followed by providing a long-term prognosis for the duration of the work incapacity (58.4%). Moreover, a high percentage found it problematic to handle prolongations of a sick-leave period that initially had been certified by a colleague (52.3%) and to provide social insurance officers with other types of medical certificates, e.g. for disability pension (49.9%). The least problematic task was to know what to document in the medical chart; 43.2% stated that this was not at all problematic.
The OR of having problems in handling sickness-certification cases was, with physicians in internal medicine as the reference group, highest among physicians in PHC (OR 3.3, 95% CI 2.9 to 3.7) and in rheumatology clinics (OR 2.6, 95% CI 1.9 to 3.5) (Table 4 ). Physicians in neurology, pain management, orthopaedics, and psychiatric clinics also had an OR above 1.0. Physicians in PHC had significantly higher ORs for all the specified problems, while physicians in pain management did not have an OR above 1.0 for any of the specified problems. Physicians in psychiatric, neurologic, or orthopaedic clinics had higher ORs for experiencing it very problematic to assess and to prognosticate patients' work capacity. Physicians working in orthopaedic clinics had a slightly higher OR for finding it problematic to prolong a sick-leave period initially certified by another physician.
This is, so far, the largest international questionnaire study of physicians' sickness-certification tasks and problems, covering the types of situations they encounter and the problems they experience, and including all physicians working in Sweden. Sickness-certification consultations were far more frequent than anticipated: more than two thirds of the physicians had such consultations, and of those, over 80% had them at least once a week. Moreover, the rate of having such consultations was much higher in many other clinical settings than in PHC, e.g. in orthopaedics and in oncology. This has not been studied in other countries. Obviously, the previous focus on GPs regarding these tasks needs to be reconsidered. Nevertheless, a much higher rate of the physicians in PHC experienced these tasks as problematic. The majority of all physicians rated the tasks of assessing the magnitude of work incapacity and of prognosticating such incapacity as problematic. More detailed knowledge regarding the broad tasks involved in sickness-certification consultations has been called for [ 32 , 33 ]; some of it is provided here.
Strengths of the study are the very large sample size, the inclusion of all physicians (N = 36,898) working and living in Sweden, the inclusion of all clinical settings, and the detailed questions about these tasks. The study group is large enough to permit sub-group analyses, e.g. regarding different types of clinics. Another strength, from an intervention perspective, is that the results are based on the physicians' own experiences of tasks and problems; they can therefore be of good use when, for example, targeting different types of competence development [ 13 ]. A limitation is the dropout of 39%. Nevertheless, the response rate can be considered high for this type of study, and the study design permits analyses of bias in the dropout. Differences in dropout rates between board certified specialists and non-specialists (i.e. physicians not yet fully trained or registered as specialists) might have affected our results. Non-specialists were, as expected, younger. During training they often change residence, also geographically, which might be one reason for their higher dropout. The non-specialists had a higher dropout rate and also reported more problems regarding sickness-certification than the specialists, which might have lowered the crude ORs for problems reported from clinics with a higher proportion of non-specialists. That is, in some cases the ORs might be underestimated, even when adjusted for the rate of registered specialists.
To sickness certify a patient is a common task in health care in Sweden, as in many other countries. However, this practice can so far not be based on scientific evidence [ 1 ]. In a previous, smaller study, a slight association was found between physicians having at least six sickness-certification consultations a week and rates of having problems with these at least once a week [ 6 ]. The results of the present, much larger study point in the same direction: we found similar associations in the majority of clinics, although not for physicians in oncology and PHC. The proportion of physicians experiencing problems regarding sickness-certification, in general as well as for specific items, varied with type of clinic. The physicians in PHC had the highest ORs for experiencing problems, although they did not have the highest frequency of sickness-certification consultations.
Notably, some physicians in geriatrics and child care also had sickness-certification cases. This can be explained in at least three ways. First, participants were asked to indicate the type of clinic where they mainly worked, but some might also be clinically active in other types of clinics. Second, some of their patients are adolescents above the age of 16, who can thus be sickness absent, and in geriatrics some patients work despite old age. Third, they might sickness certify parents of children or relatives of geriatric patients. So far, no studies have examined this.
We were surprised by the very high rates of consultations and by the large variation in the rates of physicians experiencing problems with these tasks. In Sweden, as in most welfare countries, specialists are generally to refer patients back to PHC when treatment is finished or stabilised, and the understanding is that sickness-certification is to be monitored from PHC if a patient has been referred to other clinics by the general practitioner. Nevertheless, very high rates of physicians in different clinics had such tasks very often. More studies are needed to verify these results also in other countries. However, other studies indicate that GPs generate about half of all sickness-certification, which is well in line with our results [ 1 , 34 ]. An obvious issue is whether the results can be generalized to other countries. Sickness-certification practices have only been studied in very limited populations, and mainly among GPs, in other countries; the situation might be similar there, but that remains to be shown. Regarding other aspects of sickness-certification practice, results from different countries and from different time periods have been unexpectedly similar [ 11 ].
The most problematic part of sickness-certification seemed to be assessing the magnitude of the patient's work capacity. This result is in line with some previous studies [ 19 , 35 , 36 ]. A variety of instruments for assessing work capacity are used in different countries; however, scientific knowledge on their validity, their effects, and their possible implications for the work of physicians is warranted [ 37 ]. Overall, the highest ORs for problems with work-capacity assessments were found among physicians in PHC and in psychiatry, rheumatology, and neurology, which had still higher ORs for problems providing a long-term prognosis of work capacity. We have not found any other studies about this.
The majority of physicians had sickness-certification consultations every week, and the problems experienced varied substantially in frequency and severity between clinical settings. Physicians at the vast majority of clinics regarded sickness-certification consultations as problematic, far more so in PHC, despite PHC physicians not having the most such consultations. So far, most interventions concerning sickness-certification have been targeted towards physicians in PHC/GPs. Other physician groups with high frequencies of consultations and/or problems were found in oncology, orthopaedic, psychiatric, pain management, and rheumatologic clinics. The results indicate the importance of taking account of the variety of problems that physicians in different clinical settings experience when planning interventions aimed at improving their work with sickness-certification of patients.
So far, most interventions regarding physicians' sickness-certification practices have been targeted towards PHC and GPs. Our results indicate that those physicians also to a great extent experience sickness-certification consultations as problematic. Nevertheless, also other physicians have high risks for experiencing them as problematic, e.g. in rheumatology, neurology, psychiatry, and orthopaedic clinics. Moreover, in several clinical settings the physicians had such consultations more often than the GPs. Thus, the results indicate that much can be gained through focusing on physicians also in other types of clinics when planning interventions to improve physicians' sickness-certification practice. | Background
How physicians handle sickness-certification is essential in the sickness-absence process, yet few studies have focused on this part of physicians' daily work, and most previous studies have included only general practitioners. However, a previous study indicated that this is a common task also among other physicians. The aim of this study was to gain detailed knowledge about physicians' work with sickness-certification and the problems they experience in this work.
Methods
A comprehensive questionnaire regarding sickness-certification practice was sent to the home addresses of all physicians living and working in Sweden (N = 36,898; response rate: 61%). This study included physicians aged <65 years who had sickness-certification consultations at least a few times a year (n = 14,210). Descriptive statistics were calculated, and odds ratios (OR) with 95% confidence intervals (CI) were estimated for having different types of related problems, stratified by clinical setting, using physicians working in internal medicine as the reference group.
Results
Sickness-certification consultations were frequent: 67% of all physicians had such consultations, and of those, 83% had them at least once a week. The proportion who had such consultations >5 times a week varied between clinical settings, from 3% in dermatology to 79% in orthopaedics, and was 43% in primary health care. The OR for finding sickness-certification tasks problematic was highest among physicians working in primary health care (OR 3.3; CI 2.9-3.7) and rheumatology clinics (OR 2.6; CI 1.9-3.5). About 60% found it problematic to assess patients' work capacity and to provide a prognosis regarding the duration of work incapacity.
Conclusions
So far, most interventions regarding physicians' sickness-certification practices have been targeted towards primary health care and general practitioners. Our results indicate that the ORs for finding these tasks problematic were highest in primary health care. Nevertheless, physicians in some other clinical settings more often have such consultations and many of them also find these tasks problematic, e.g. in rheumatology, neurology, psychiatry, and orthopaedic clinics. Thus, the results indicate that much can be gained through focusing on physicians in other types of clinics as well, when planning interventions to improve sickness-certification practice. | Competing interests
The authors declare that they have no competing interests.
Authors' contributions
The authors of this manuscript are members of a research project investigating physicians' sickness-certification practice. CL contributed in the study design, data collection, interpretation, drafting, and writing the manuscript, and supervision. BA and GN contributed to the conception, design of the study, interpretation of the data, and revisions of the manuscript. EH performed the statistical analyses and contributed to interpretation and revisions of the manuscript. AE and AL contributed to the data analysis and interpretation as well as revisions of the manuscript. YS contributed to interpretation and revisions of the manuscript. KA contributed to the conception, design of the study, data collection, interpretation of the data, revisions of the manuscript and supervision. All authors have read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/752/prepub
Acknowledgements
The project, including development of the questionnaire, data collection and management, was financially supported by the Swedish Research Council of Working life and Social Sciences, Stockholm County Council, Ministry of Health and Social Affairs, and the National Social Insurance Agency.
BMC Public Health. 2010 Dec 6; 10:752
Background
In Sweden, the social security and sickness insurance systems are comprehensive and aim to provide people whose illness prevents them from earning their own living with a sufficient income, either through sickness benefits or through disability pension, based on the principle of income replacement [ 1 ]. Nevertheless, as the system is designed to cater for those who have worked for some time, a proportion of the population does not qualify for such benefits. Their last resort is social assistance, provided by the municipal social services. Young people, single mothers and people with an immigrant background are common groups among recipients of social assistance [ 2 , 3 ].
Social assistance is a selective, means-tested benefit aiming to guarantee a reasonable standard of living. Local governments administer and finance social assistance and work under a national legal framework [ 2 , 4 ]. In 2009, about 237,000 households in Sweden (5.8% of the population) received social assistance on at least one occasion during the year. Social assistance is considered a temporary resort, but in 2009 over one third of all recipients were classified as long-term recipients, i.e. had received social assistance for at least 10 months [ 5 ].
In our recent study among social assistance recipients [ 6 ], a substantial proportion reported chronic illness. People who are ill and seeking social assistance may be categorised into two groups: those who have a medical certificate showing that they are unable to work but whose sickness insurance or disability pension is insufficient, or who are not entitled to other sickness benefits; and those whom social welfare professionals judge currently incapable of work for social or medical reasons (for example drug or alcohol abuse, mental ill-health, neuropsychiatric problems or musculoskeletal pain). It is difficult to estimate the proportion of social assistance recipients who are chronically ill: municipalities code reasons for receiving social assistance in different ways in their statistics, many of the codes are health related, and not all clients disclose that they have health problems. For instance, in a 2007 study in Stockholm [ 7 ], 37 percent of all adult recipients of social assistance were classified as ill (including both mental and physical illnesses, and drug related ill-health).
The experience of receiving social assistance may differ between individuals and may vary depending on the reasons for not being able to support oneself. Some clients in society are considered more deserving than others. Van Oorschot describes a "deservingness rank order" in European welfare states: people tend to support the provision of welfare for elderly and sick people and those with disabilities more strongly than welfare for unemployed people or immigrants [ 8 ]. Recipients of social assistance are seen as the "least deserving" in this rank order. The set-up of a society, including its social security systems and benefit levels, and the public's views on recipients of social assistance all colour the experience of being a social assistance recipient, which varies between societies [ 8 - 11 ]. In addition, individuals' own views of whether they deserve the assistance of society may influence their experience of living on social assistance. Besides having a limited income, living on social assistance may have other meanings for recipients, for example loss of autonomy, dependence on social services, feeling shame, and not being part of the rest of society [ 6 , 9 , 12 , 13 ], which may affect health and well-being.
According to Salonen (1993), needs-based assistance is always subject to debate and also follows business cycles. Means-tested social assistance has low legitimacy in the general population, and in times of recession the system and the groups receiving assistance are questioned more [ 10 ]. Social assistance recipients are stigmatised by labelling that separates them from groups receiving other forms of state assistance. At the same time, attention is shifted from societal causes to unwanted or deviant behaviours and to individualisation of the causes of welfare receipt. The public discourse on needs-based assistance is thereby stigmatising, and recipients perceive shame and exclusion [ 10 , 14 ].
In our previous study among social assistance recipients [ 6 ], chronic illness was a common feature, but we found few other studies of people in Sweden who were chronically ill and receiving social assistance, or of how this situation was perceived by the recipients themselves. From a public health and policy perspective it is important to study this group, as they are disadvantaged not only in terms of having a chronic illness but also in terms of being dependent on social assistance for their subsistence, which may further exacerbate their ill health and contribute to social exclusion. The purpose of this study was to explore in depth how social assistance recipients with chronic illness perceive and respond to the experience of living on social assistance.
The study reported here is part of a larger study in Stockholm aiming to study experiences of living on social assistance in Sweden [ 6 , 15 ], and also a part of a comparative project between Britain and Sweden, which aimed to identify and compare the influence of policies and services on resilience in poor households. This comparative project was one component of the ESRC Human Capability and Resilience Network, concentrating on resilience in the face of socio-economic disadvantage and poverty [ 16 ].
In our previous study among 33 social assistance recipients [ 6 ], two overarching themes emerged: material and psychosocial dimensions of living on social assistance. Interviewees' explanations of what led them to claim social assistance were also described. Seventeen of the 33 interviewees stated that they had had work incapacity due to illness or disability for more than one year [ 6 ]. Therefore, we decided to investigate further how these interviewees perceived their current situation of living on social assistance. Usually the concept of illness refers to the subjective response to feeling unwell [ 17 ]; in this study we rely on the interviewees' self-reports. The guiding methodological approach was grounded theory [ 18 ], which was regarded as suitable because the study was explorative and aimed to investigate perceptions of living on social assistance and the social processes shaping these perceptions.
Setting and sample selection
We conducted fieldwork in six sites in Stockholm County, providing a variety of geographical settings with differing socioeconomic compositions. According to the Regional Ethical Review Board in Stockholm, section 5, the study did not require ethical approval under local guidelines (dnr 04-609/5).
We employed a purposive sampling strategy [ 19 , 20 ], which generates a sample suited to the specific needs of the study and the research questions. We did not decide in advance the number of interviewees or the type of categories they would represent; the number was determined during the sampling and the initial analysis of the data. Our primary purpose was to explore the experiences of people living on social assistance; the secondary purpose was to gather data from people with potentially different experiences (both positive and negative) to enable us to contrast their experiences of living on social assistance.
The inclusion criteria for sample selection were that a participant should be of working age (between 18 and 64 years), currently receiving social assistance as their main source of income or as a supplement to their other incomes and willing to be interviewed. We aimed to recruit a heterogeneous sample of social assistance recipients comprising both men and women in different ages, including for example long-term unemployed people, lone mothers, and immigrants to Sweden. All these groups are common among social assistance recipients in Sweden [ 2 , 3 , 10 ].
To recruit interviewees, we first contacted units working with social assistance in each of the study sites and came into contact with some of those interviewed through social workers. We then contacted other places to which social assistance recipients were often referred, including several centres for labour market activities for people receiving social assistance. Most of the individuals asked to participate agreed, although three of those recruited did not attend the arranged interviews for different reasons. Several interviewees did not attend the first appointment; we then arranged a second and in some cases a third time, which made recruitment time consuming. We ended data collection when we felt the "saturation point" had been reached, that is, when further interviews would add so little information that it would not be meaningful to conduct them [ 20 ].
The first author (AM) conducted 33 in-depth interviews. The length of the interviews varied from 40 minutes to two and a half hours. Most interviews were carried out in Swedish, but one was conducted in Finnish and one in English. A further three interviews were conducted with the aid of an Arabic-speaking interpreter, as the interviewees felt they could not express themselves properly in Swedish. Before each interview, the aim of the study was explained to the interviewees, who were informed of their right to withdraw and that responses would be kept anonymous.
An interview guide was used, including open-ended questions about the daily life of interviewees, their contacts with services in the community, what made them feel well and what bothered them most, and how they managed their finances. Interviewees were also asked to reflect on how they felt about their health and the future. The interviews took place at social welfare offices, at the local public library or in activity centres to which social assistance recipients were referred. Interviews were conducted in 2005-06 and were tape recorded with permission.
The present analysis focuses on how those 17 interviewees who reported having chronic illness perceived their current life situation when living on social assistance. They were selected for this analysis and are the subjects of this paper.
Study participants
Demographic characteristics of the interviewees are presented in Table 1 . Twelve of them had children.
All 17 interviewees reported that they had an illness which had lasted for more than a year. The interviewees were a heterogeneous group with respect to their ill health and the troubles their illnesses caused them. Both physical and mental illnesses were reported (e.g. epilepsy, heart disease, musculoskeletal pain, depression, anxiety and panic attacks), as well as drug/alcohol-related health problems. Some interviewees had experienced times in their lives when they could not even get out of bed and manage daily routines like shopping and cooking. Some had quit taking drugs and alcohol and suffered from related health problems. Some had mental health problems, e.g. depression and panic attacks, and were in the midst of a recovery process, learning to manage their daily lives with their illness.
Fourteen interviewees had social assistance as their main source of income. Ten interviewees had lived on social assistance for more than five years; four reported that they had received social assistance from time to time over several years. At the time of the interview, three of the interviewees had a disability pension at a level insufficient for their subsistence, which meant that they could qualify for social assistance as a supplement to their pension; in such cases social welfare offices did not make specific demands on them. Some of the interviewees said they participated in labour market activities, several that they were waiting for an examination of their work capacity, and some that they were not participating in any such activity at the moment because of their health condition.
Analysis
All interviews were transcribed verbatim and were analysed initially with open coding, whereby the researcher assigns codes to pieces of text, going through each transcript, line by line [ 18 ]. The paradigm model was a guiding principle in axial coding where data is put back together in new ways by making connections between categories and subcategories [ 18 ]. For example, we identified interviewees' strategies to manage their daily life as an emerging category. In the final stages of the analysis the core category of the study was developed and selected (selective coding) and through this we made sense of the relationships between categories and subcategories developed during the analysis. The Atlas.ti 5.6.2 software (Scientific Software Development, Berlin) aided this process.
Data analysis followed the constant comparative method [ 18 , 21 ] through ongoing comparisons of categories from one case to the next (for example experiences, situations, actions), within cases, and by comparing incidents with incidents. All the authors were involved in discussions about emerging categories and the analysis of the data. The names of interviewees in the accounts in the Results section have been changed to protect their anonymity. Quotations were selected to illustrate the issues that emerged from the data.
The findings fell into four main categories. Three categories concerned how interviewees responded to their daily life. Different strategies (living one day at a time, taking steps forwards and backwards and making attempts to find ways out of the situation) were identified in the accounts as ways that the individuals managed the situation and maintained or improved well-being. The fourth category was labelled as the core category of the study and identified those features found to buffer or undermine interviewees' well-being.
The strategies captured by these categories were seldom conscious choices or something the interviewees planned, but we could identify them in the accounts they gave. The identified categories often occurred simultaneously and were not mutually exclusive. How interviewees perceived their situation also depended on other circumstances (e.g. experiences of labour market practice, family members' health situation) during the period of being ill and living on social assistance.
Figure 1 was constructed to illustrate the conceptual model emerging from the data about influencing features and how they interact.
At the centre of the model is the individual well-being of social assistance recipients who are chronically ill. The three categories immediately surrounding it all influence well-being and interact with one another. The interviewees had a short time perspective in their lives: they did not plan for the future because the future was so uncertain and they perceived their opportunities to influence it as limited. Living one day at a time, or living "today", was their way to lessen the stress over their economic, health-related and social situation. In this sense the short time perspective acted as a buffer. At the same time, living only for the day was also undermining, as they had no opportunity to make plans for the next week, the next month, or the future.
The process of improving health or life situation included both steps forwards and backwards for most of the interviewees. For some the change to better health or social situation was rapid but for most it took time. Different small steps forwards, such as managing daily routines like shopping and cooking, were often necessary for major transitions like finding a job.
In the accounts we also identified interviewees' conscious or unconscious attempts to find "escape routes", ways out of the uncertain situation. Getting other benefits, like sickness benefit or getting a diagnosis were identified as such.
Contextual features like the prevailing welfare system including rights to benefits and benefit levels, access to and responsiveness of services as well as social relations and networks could buffer or undermine individuals' strategies and their well-being as a whole. These features were interwoven in interviewees' accounts.
Core category: buffering and undermining features to well-being
The core category of the study is called "buffering and undermining features to well-being". These features were discussed directly in interviews or indirectly through talk about meeting professionals and being in contact with the welfare system. Table 2 gives examples of these features, which we have categorized at the individual, neighbourhood, service and socio-political levels, and which shaped the context in which interviewees perceived their situation when they lived on social assistance and had a chronic illness.
At the individual level, several features were discussed as buffering (for example feeling safe and having self-confidence) in hardship. Undermining features were related to the economic situation, social relations or psychological aspects (like feeling frustration and hopelessness). These features also influenced how the interviewees reacted in adversity. This can be illustrated, for example, by how Lisa, a 25-year-old woman who suffered from panic attacks, responded when asked about what is most important in life:
For me it is to feel safe, have a balance somehow in life between difficult things and good things. Right now I don't have that balance. But if you can find that feeling, which I think is possible, then you have an inner peace. And that, I think, is what you should have .
Good relations with friends or children gave several interviewees strength and quality of life in hardship. Good facilities in the neighbourhood and the local community, as well as access to and quality of public services and contacts with professionals, were perceived as important. A key issue concerning services was how professionals treated their clients. Nina, a 43-year-old woman, described her experiences of being in contact with health and social services after years of heavy drinking, destructive relations and traumatic life events. She perceived that she had been treated as an individual, taken "seriously". She said:
I don't feel that I am a piece of paper among other papers on their desk and that feels good. When they treat you as an individual, it is also easier to talk about my situation...And then it becomes easier for social services to understand which kind of needs I have. Their role as I see it, is to push us back to society...sometimes it takes a short time and sometimes a longer time. It depends; we all have different things to deal with .
Access to nature and leisure time recreation were other examples of important buffering features, especially in a situation with limited economic resources. On the other hand, where there were family problems or destructive relations, where access to services was poor or services were perceived as non-supportive, or where the immediate environment was poor, these features acted to undermine the well-being of individuals. One example of these undermining features was when contacts with authorities made interviewees feel worse. As social assistance recipients they had to act as they were told; otherwise they did not get their benefits. They had to answer a lot of questions about their private life. Some interviewees described feeling " as a little child " when in contact with social welfare.
At the socio-political level, the accounts referred to the context in which social and health services were implemented and to which rights, benefits and social and health services interviewees had access. Features such as rights and access to child care and welfare benefits provided the basic opportunities for welfare and prerequisites to build on in the process of change. Most interviewees, especially those with an immigrant background, were worried about their own possibilities of finding a job and earning a living, but were happy for the opportunities their children had in society. Ali, a 53-year-old man with an eye disease who was waiting for an operation, had lived in Sweden for six years, and reported that all four of his children had learned Swedish quickly through day care and school. The oldest ones wanted to study medicine and social sciences at university. Although they lived in such an insecure economic situation, Ali perceived that his children had opportunities for education and " a better life ". He said: " The future is theirs, not mine..."
In the following section we present the three categories of strategies identified in the accounts as ways that individuals managed their situation and maintained or improved well-being.
Living one day at a time
We found the category, "live life one day at a time", as a response to difficult circumstances; not thinking too much about the future and problems in life. To live on social assistance was in itself a strain; being ill at the same time increased the vulnerability. A way to lessen the stress over the economic, social and health situation was to live "today". This category includes two subcategories (uncertainty and shame with social assistance and between hope and despair).
Uncertainty and shame with social assistance
Living on social assistance was something most interviewees did not want. When the situation was not temporary but long lasting, it meant not being able to live an independent life, having to declare personal things to social welfare professionals, being dependent. Petra, a 27-year-old woman with musculoskeletal pain, was anxious about her situation:
P: This life I live now, I don't want to have it. This is a total disaster .
I: What in your life is a disaster?
P: I like to work, have something to do. But when I have this pain, it doesn't work. There are no options; I do labour market practice now, but for how long? Who knows? One month perhaps?
Economic insecurity was described as a stressful factor in everyday life. Interviewees did not know exactly when they would receive the payment from social services, and they used to worry especially at the end of the month. Lisa, a 25-year-old woman who suffered from panic attacks, described:
Last days of the month, you live with the coins...It should be the 27 th we get the money, but sometimes things happen on the way. Your application is somehow missing or they have done something wrong and they tell you it is going to take three more days to fix. And the bills should be paid at that time and then I get stressed. You have to phone to the Electricity Company and your landlord and all the troubles that come with it.. .
The majority of the interviewees did not qualify for sickness benefit. Social assistance provided them with the opportunity to get by during the time they were sick and not able to work. However, they had to renew their application for social assistance each month and go through the same appraisal procedure over and over again. To apply for money every month was perceived negatively by most, especially when they were ill and health improvement was their primary objective.
Frustration over the situation was expressed in all interviews. Several of the interviewees also expressed shame over having to live on social assistance, even if it was not " their fault "; they were unable to work because of their poor health, labour market situation or family problems. There were several examples in the accounts of how shame was manifested in daily life, for example not telling the truth to other people when they asked what they did for a living. Annica, a 47-year-old divorced woman with musculoskeletal pain and a mother of three children, recounted her experiences when her vacuum cleaner broke down and she had to ask for extras from social welfare:
It was very embarrassing. They have to come and see. They asked a lot of questions, how long I had that Hoover and so on. But I don't ask for a Hoover from social welfare if I don't need that. It really was a last way out for me. I asked my family, my relatives if they had any. They didn't have, otherwise I would never have asked social welfare .
Between hope and despair
Some younger interviewees were hopeful and saw some chance of getting better and finding a job some day. Several interviewees in their 50s were more hesitant because of their age and poor health. Ibrahim, a 48-year-old man with a lung disease and pain in his leg, and father of two small children, said:
I really want to work and provide for my family. But sometimes I think it is too late, I am too old...I am like half a person [with his health problems] but still I am trying .
Several expressed frustration over their " hopeless " situation, especially those who had a low disability pension and supplemented their income with social assistance. They were ill, retired with a disability pension and could not work; at the same time they constantly had problems with their household finances. Even the younger ones reported that they could not plan for their future, and hoped there would not be any unexpected expenses. Minna, a 29-year-old woman and mother of four children, who had suffered from depression after domestic violence, explained that she was " successful " in handling money but the situation was still difficult:
I don't dare to plan any future. Anything can happen on the way. So I take a day at a time. I don't even plan next weekend. I am not that kind of person who is planning because you never know. Hopefully it is going to get better some day. That is my goal. To find a job and get somewhere .
Two steps forwards, one step backwards
The category "two steps forwards, one step backwards", illuminates the process of being ill when living on social assistance, and trying to find solutions related to social, material and health issues. Sometimes the process consists of many steps forwards, but often also some steps backwards and then some forwards again. This category includes two subcategories (the process of change, and finding meaning in life or giving up).
The process of change
In interviewees' accounts, improvements in health or life situation came about through a process of small changes in daily routine: managing everyday activities like doing the shopping, being able to go out for a walk, meeting a friend or getting help with daily tasks. Some interviewees had also participated in labour market activities, which in some cases they perceived as not helping them much. In other cases the impact was the opposite; they felt stronger after participating. For example, Lisa, a 25-year-old woman who suffered from panic attacks, said that the activities had helped her " to grow " and " to feel safer ".
There were descriptions of both steps forwards and steps backwards. Nina, a 43-year-old woman, described her attempts to find a way " back " after years of heavy drinking. She realised that she had wanted "so much" and " too quickly ". She had tried to participate in work rehabilitation and, at the same time, in a rehabilitation program for her alcohol problems while working half time:
I really did not manage all that, mentally or physically. I think after the rehabilitation program it would have been better if I had been on sick leave a couple of months because I was not ready. The point of the program is to reflect on things. I had awful pain all the time and at the same time a thousand things on my mind. I felt I was going crazy. I was so tired. One morning I woke up and I did not manage anymore .
Finding meaning in life or giving up
Different small steps preceded major transitions like attempts to find a job, get a diagnosis of the health problem or get another source of income. Four subcategories could be identified as influencing these steps forwards: finding meaning in life despite difficult circumstances; enjoying small things in everyday life, like being able to go out and take a walk; children and social contacts giving joy (interaction with others); and interests giving joy.
Several interviewees described how they found meaning in life even when living in difficult circumstances. Having children "helped" them to see the positive aspects of life. Children were described as one of the most positive things in life: adult children helping them, and the joy of watching the younger ones grow up. Many interviewees were proud of their children and felt stronger when they talked about them. Marcus, a 45-year-old man suffering from health problems after years of drug abuse, said:
My daughter is very important for me. I will see her growing older and all stuff with the family and so on. I want to see her happy and have a good life and if I can I want to help her as much as possible .
Several interviewees who had adult children received assistance from them, in both social and material respects. For example, Leyla, a 42-year-old divorced woman who had problems with restless legs and back pain, described how her son helped her every month to pay part of her rent, because the social welfare office did not pay the whole amount (her flat was considered too big and expensive for her). Hossein, a 55-year-old divorced man with a disability pension, described how his son helped him with all kinds of daily routines, like taking a shower and shopping for food (he was not able to manage these things by himself because of his ill health).
There were, however, also some interviewees who expressed problems with their family situation and relations. Some interviewees had children who were addicted to drugs, and some had children with learning disabilities or difficulties in school. Several interviewees also discussed domestic violence or partners who were drinking or taking drugs. Their difficult home situation did not help improve their health; on the contrary, they felt worse. Susan, in her 50s, told of her daughter's addiction to drugs and the chaos it brought into their life. She had had financial troubles in her business for a period and ended up bankrupt:
If I had felt well, I never would have ended up bankrupt. In that situation I had no strength to struggle. My daughter was taking drugs all the time and it was constantly something going on, problems in school. This had been going on for a longer time and I was ignoring difficulties with my income because I had no strength. We had no structure at all in our life at that time .
Life was described in several interviews as quite isolated, because of interviewees' illness and material situation. Several interviewees said they could not afford to travel, take a vacation or have extras like going out for dinner. There were, however, also interviewees who could find joy in things which did not cost money, like taking a walk with their dogs or meeting their friends, which also made them feel better. Ibrahim, a 48-year-old man who was very frustrated over his situation, liked the multicultural neighbourhood where he was living. The social contacts helped him to get some distance from his problems:
...I think it is the contact between people here, the feeling of not being alone. You always have someone to talk to, you can sit on your balcony or in the kitchen, and someone is always passing by and saying hello, outside too when you are outdoors and playing with your children. I know most people here. I talk a little bit so you are active all the time. You don't have to sit alone with your feelings and think. I have relatives here, people from the same country as I am, other immigrants too...I can speak my language with them too. We meet and talk and do things together...If I lived in some of these affluent neighbourhoods I would not have this opportunity.. .
Escape routes - external support
This category highlighted the "ways out" found in the accounts, related in some way to the labour market or to social and health services. There were also a few interviewees who could not find any way out of their situation. One of them was Hossein, a 55-year-old man with a disability pension, suffering from a lung disease and pain in his legs:
I have so many problems with my money... When your income is limited, your life is limited...I am not able to see forward, I don't see any future...When you don't have any money, then the pain comes...If I don't have a good income, I can't have good health .
Three "escape routes" were identified (job or labour market practice, having a good relationship with a professional and getting a diagnosis or other benefits).
Job or labour market practice
Working full time after a longer period of illness or unemployment was an option which was discussed with hesitation. Most interviewees hoped to "come back" step by step, through part-time work or some form of labour market practice. It was not easy to find a job after being outside the labour market for several years. Minna, a 29-year-old woman with four small children, was afraid of failing if she took too big a step; she wanted to find part-time work experience:
...I have gone through so much; I am not able to seek a job. I can't get a permanent job because I am going to screw it up anyway. I have the strength to fight for two weeks and then I get tired. I have had so much trouble with things, private things, drug abuse [in her family] and this and that .
Having a good relationship with a doctor or social worker
Professionals in both social and health services played an important role in interviewees' lives. They had the power to influence the help which interviewees could access. Meeting a professional who put energy into trying to help was appreciated. In the accounts there were examples of such "good" meetings acting as turning points in interviewees' situations. The change did not happen overnight but as a process of small steps forwards, which these contacts with professionals helped make possible. One example of this was Marcus, a divorced man in his 40s, who explained his addiction and his struggle to quit. Having taken drugs for years, he decided to quit because he had no strength left to live that kind of life. He met a doctor who helped him get a chance to quit through access to treatment, and he also met a social worker willing to support him. He explained the important qualities of a professional:
Most important is that you have a good contact, that you have trust in that person...I am glad that I got this girl [social worker], I have. She really has done more for me than she has to do. She really has helped me .
Getting a diagnosis/getting other benefits
Getting sickness benefit was considered valuable as it indicated acceptance or confirmation of being sick, even if the payment was at a low level and needed to be supplemented by social assistance. For some it did not matter so much where the money came from, because the payment would be low anyway. The most important thing was to get a diagnosis and know the cause of their illness. Petra, a 27-year-old single woman with musculoskeletal pain, was one of them:
My pain is getting worse and worse because they have not done anything, not a single medical examination. They say I have problems with my muscles. But I have had this for over ten years now.. .
Annica, a 47-year-old divorced woman with musculoskeletal pain and a mother of three children, was waiting for a work capacity appraisal. She received her income from social assistance instead of sickness benefit, although she could not work because of her pain. She was not happy that social welfare professionals sent her to different activities which she felt were not suitable for her, given her health problems. She believed that if she received sickness benefit, they would probably have more understanding of her health situation.
As social assistance recipients, the interviewees did not automatically have access to rehabilitation. This depended on a range of factors, including the kind of health problem they were suffering from, in which municipality they lived and which professional they met.
Lisa, a 25-year-old woman with panic attacks, described how professionals sent her back and forth between the unemployment office and the social welfare office because they thought it was unclear how many demands she was capable of managing. The unemployment office thought she could not manage full-time work, while the social welfare office judged differently. She explained further:
All the time, I ran back and forth, back and forth between them. If I was not actively seeking jobs I didn't get any money. That is why some months I didn't get any money at all. And I had no energy to fight with them either...I suffered from panic attacks; I couldn't manage to take a bus or train because of that...and they said: seek jobs everywhere...It was impossible . | Discussion
Although the social security and sickness insurance systems are comprehensive in Sweden and aim to provide resources for people whose illness prevents them from earning their own living, there are people who are not entitled to these benefits and are referred to social assistance. This study illustrates the hardships associated with being chronically ill and living on social assistance in Sweden, and strategies employed by the individuals to manage the situation and maintain or improve their well-being. Furthermore, it describes how the interaction between individual and contextual features may buffer or undermine well-being. Living on social assistance is a strain, especially in the longer term, both for the individual and his/her family, and may also lead to social exclusion [ 13 , 14 ].
Social assistance clients in Sweden, including those with chronic illness and unable to work, have to renew their application for social assistance each month and go through the same appraisal procedure over and over again. Although not intended as such, for some individuals and groups social assistance has become a more or less permanent solution for their subsistence problems. Some interviewees indicated that they would prefer to have their income from social insurance, even if it were at a low level. With such an arrangement, at least there would not be the economic uncertainty associated with social assistance. Instead of income security, social assistance had become a source of permanent insecurity in their lives. This contributed further to worse health and well-being, as also indicated in other studies [ 7 , 9 ].
Arguably, chronically ill persons on social assistance have the same kinds of needs for rehabilitation as chronically ill persons who qualify for sickness benefit [ 22 , 23 ]. Several interviewees (for example those whose health problems started before they came to Sweden and those who had not yet entered the labour market) were not covered by services organised by the social insurance office, and their possibilities for rehabilitation were limited. In such cases access to rehabilitation depended on the municipality in which interviewees lived, what kind of health problem they suffered from and which professional they met. Lack of rehabilitation, and sometimes even lack of a diagnosis, hindered interviewees in the recovery process. Some interviewees, particularly those with diffuse or undiagnosed health problems, perceived a lack of coordination between the different authorities involved. This is in line with findings of other investigations in Sweden [ 7 , 22 ], and suggests that other approaches need to be considered in order to meet the needs of chronically ill persons who do not qualify for the regular services.
Some interviewees expressed the view that it was "not their fault" that they had to rely on social assistance; in case of illness, society should help and give support. As sick or disabled they saw themselves as "deserving" [ 8 ], which lessened the stigma associated with social assistance. In an earlier study Gunnarsson [ 24 ] found that younger and older women gave social assistance different meanings: for the younger, social assistance was a way to be economically independent from their parents. The older ones, who in several cases were also ill, perceived social assistance as economic dependency and saw finding a job or getting sickness benefit as the solution. In our study, social assistance was not perceived in the long run as a route to independence, but as a source of dependency and insecurity, even among the younger interviewees. Social assistance is still associated with guilt [ 25 ], shame and feeling stigmatised [ 9 , 10 , 12 , 14 ]. Several interviewees in this study, as in our earlier study among social assistance recipients in Sweden [ 6 ], did not want to identify themselves as one of "those" who receive social assistance, but explained their situation as caused by external factors outside their reach, such as poor health, family problems, a poor labour market or discrimination.
Interviewees had contacts both with health care and social services, and were dependent on professionals and welfare benefits for their living. Services and professionals could be buffering or undermining depending on what kind of help and support interviewees were referred to or how interviewees perceived they were treated. There were several examples of how professionals and services had facilitated a positive change in the interviewees' lives. They felt they had been treated with respect and as individuals, without being judged. There were also examples of the opposite, when services and professionals "made things worse". Different professionals may also have qualitatively different views on clients, leading to "unequal encounters" and differences in opportunities for rehabilitation [ 26 ]. It is important that services and professionals especially consider the needs of individuals with chronic illness who depend on social assistance for their subsistence, as they may be in a particularly vulnerable and dependent position in relation to public services.
In an earlier study of ours [ 27 ] carried out in Britain among people living in poor households, positive changes in interviewees' lives depended on opportunities arising for them to release their potential in order to start further processes. The achievements in everyday life like being able to get children to school on time were examples of an "upwards moving spiral", small steps toward something more positive. In the present study, accounts also illustrated that the process of improving health or life situation included both steps forwards and backwards. Taking "too big steps" too quickly could result in regression and losing self-esteem.
Finding work or a labour market placement was also identified as an "escape route". The "dream" or desire was to "find work and get somewhere". However, most of the interviewees in this study were sceptical about finding and managing a full-time job, given their health problems and having been outside the labour market for several years. A gradual, stepwise entry or return to the labour market would therefore be more realistic for many in this group.
The findings of this study suggest that health and social welfare professionals should focus on finding individually adapted solutions for their clients, as well as building trust and understanding about the strategies individuals use in adverse circumstances to maintain or improve their well-being. From a health and social policy perspective, the links between ill health and social disadvantage should be recognised, and coordination between health and social services for this group needs to be improved. There is also a need to further develop ways of entering the labour market for social assistance clients with chronic illness. The maintenance of good quality in local health and social services benefits the population as a whole, but especially the most vulnerable groups who depend on services for their living. Social assistance clients with chronic illness are a vulnerable group who risk "falling between the stools" in their contacts with health care providers and other welfare institutions.
Limitations
This study was explorative and its results cannot be generalised. Nevertheless it has highlighted some experiences, strategies and needs of a particularly disadvantaged group in Swedish society. In this study we did not present comparisons between interviewees living on social assistance with or without chronic illness. However, several categories (e.g. living one day at a time or having a good relationship with a professional) were found to be important among both groups in our data. Follow-up studies of persons with and without chronic illness living on social assistance might shed further light on the development of these groups over time. Ethnic background was not a specific focus in this study but will be addressed in a coming study.
Conclusion
Chronically ill persons living on social assistance face many adversities. How they perceive and respond to the experience of living on social assistance is coloured by different features at individual, neighbourhood, service and socio-political levels. Contextual features, including the way in which individuals interact with services, how they are treated by professionals and what kind of help and support is available through the welfare system, are crucial in this adverse situation. Policy making and public services should especially consider the needs of chronically ill persons living on social assistance, who may be particularly vulnerable to a further decline in their health and to social exclusion if they are in this situation for a prolonged period of time.
Background
In Sweden, the social security and sickness insurance systems are comprehensive and aim to provide people whose illness prevents them from earning their own living with either sickness benefits or disability pension. Some, however, are not entitled to these benefits or receive social insurance benefits at a level too low for subsistence, and are referred to social assistance. The purpose of this study was to explore in depth how social assistance recipients with chronic illness perceive and respond to the experience of living on social assistance.
Methods
Seventeen in-depth interviews were carried out with chronically ill people who had received social assistance for several years. Grounded theory informed the design of the study.
Results
The study showed that different strategies (living one day at a time, taking steps forwards and backwards and making attempts to find ways out of the situation) were employed by social assistance recipients to maintain or improve their well-being. Contextual features like the prevailing welfare system, public services and the local neighbourhood could buffer or undermine these strategies and their overall well-being. These features together influenced how interviewees perceived their situation, the possible ways out of the situation and the consequences for their well-being.
Conclusion
From this study it is evident that the way in which individuals on social assistance interact with services and how they are treated by professionals plays an important role in their well-being, in combination with what kind of help and support is available for recipients through the welfare system. In this respect, persons living on social assistance with chronic illness are particularly vulnerable. This study suggests that more effort should be made to find long-term solutions concerning income support, rehabilitation and other services provided to this group.
Competing interests
The funding organisations had no role in the study design, data collection, analysis, interpretation or writing this article. The authors declare that they have no competing interests.
Authors' contributions
AM participated in the design of the study, carried out the interviews, analysed the data and drafted the manuscript. EJ participated in the analysis of the data and in preparing the manuscript. MW participated in the design of the study, the analysis and the preparation of the manuscript. BB participated in the design of the study, the analysis and the preparation of the manuscript. All authors read, revised and approved the final version.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/754/prepub
Acknowledgements
We would like to thank the people who participated in this study. The research was funded by Stockholm County Council and the Economic and Social Research Council (ESRC) under grant No. L326253061 for the study of how social welfare policies and practices build or undermine resilience in poor households in Britain and Sweden. The study is part of the multi-project ESRC Priority Network on Human Capability and Resilience http://www.ucl.ac.uk/capabilityandresilience. The ESRC bears no responsibility for the analyses or interpretations presented here.
BMC Public Health. 2010 Dec 6; 10:754. License: CC BY
PMC3016386 (PMID: 21138555)
Background
Smoking remains the single greatest cause of preventable illness and premature mortality and it is estimated that the cost of treating smoking-related illness in the UK is now £2.7 billion each year[ 1 ]. Smoking is strongly linked to socio-economic disadvantage; in 2008, 27% of adults living in households in England headed by someone in a manual occupation smoked, compared to 16% in non-manual households[ 2 ]. Consequently, smoking is the largest contributor to health inequalities between the rich and the poor in the UK; it is estimated that more than half the difference in survival to 70 years of age between social classes I and V may be due to the higher smoking prevalence in class V[ 3 ]. Whilst smoking prevalence has declined over recent decades, this fall has been less marked in the more socioeconomically disadvantaged groups, so that the gap between smoking prevalence in higher and lower socioeconomic groups has widened[ 2 ]. Reducing smoking prevalence, especially in disadvantaged groups, is therefore essential to improving life expectancy, cutting health care costs, and reducing health inequalities. Identifying, measuring and attempting to explain socio-economic disparities in tobacco use are important first steps in developing strategies and targeting resources to reduce them.
It is widely recognised that traditional measures of socio-economic status have limitations and may underplay the extent of socio-economic disparities in smoking prevalence. The most frequently-used measures of socio-economic status for monitoring health in the UK are those based solely on occupation. Recent data demonstrate a higher smoking prevalence in routine and manual occupational groups (29%) than the 21% found in the population overall[ 2 ], though this figure falls well short of the prevalence of 75% or more found in studies of select disadvantaged groups[ 4 , 5 ]. Area-based measures, such as the Townsend Index of Deprivation[ 6 ], are often more easily ascertained than individual-level measures of deprivation, and are likely to reflect important area-level determinants of health and lifestyle. However, those in common usage combine data on a relatively small range of factors, just four census items in the case of the Townsend Score (unemployment, car ownership, housing tenure and household overcrowding), and are therefore likely to fail to capture important socio-economic determinants of smoking.
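The four Townsend components can be combined in the standard way: express each as an area-level percentage, log-transform the skewed unemployment and overcrowding variables, z-score each variable across areas, and sum the four z-scores. The sketch below is illustrative only, with invented area data and field names; it is not the code or data used to derive the scores supplied with THIN.

```python
import math

def townsend_scores(areas):
    """Townsend deprivation scores for a list of areas.

    Each area supplies four census percentages. Unemployment and
    overcrowding are log-transformed (ln(x + 1)) to reduce skew, each
    variable is then z-scored across areas, and the four z-scores are
    summed. Higher scores indicate greater material deprivation.
    """
    rows = [[
        math.log(a["pct_unemployed"] + 1),
        a["pct_no_car"],
        a["pct_not_owner_occupied"],
        math.log(a["pct_overcrowded"] + 1),
    ] for a in areas]

    for j in range(4):                     # standardise each column
        col = [r[j] for r in rows]
        mean = sum(col) / len(col)
        sd = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5
        for r in rows:
            r[j] = (r[j] - mean) / sd

    return [sum(r) for r in rows]          # higher = more deprived
```

Because each variable is standardised across the areas supplied, scores are only meaningful relative to that set of areas, which is why the published quintiles are defined nationally.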
Mosaic is a UK geodemographic classification system, developed by Experian as a consumer segmentation and marketing tool[ 7 ]. Using data on over 400 variables from multiple sources, Mosaic classifies postcode areas into 61 'types' and 11 'groups' in terms of demographics, lifestyle characteristics and behaviours. Mosaic may provide a novel tool with which to identify new aspects of the socio-economic differentials in smoking behaviour in the UK. We have, therefore, used Mosaic, alongside the Townsend Index, to examine smoking prevalence within patients in a large primary care dataset, The Health Improvement Network (THIN)[ 8 ].
Methods
The THIN Dataset
THIN is a large dataset of electronic medical records from over 400 general practices throughout the UK[ 8 ], and contains data for approximately 6.8 million patients, over 2 million of whom are currently alive and can be followed prospectively. The dataset is broadly representative of the UK population in terms of patient age and sex, though mortality rates 5% lower than national figures suggest the dataset may slightly under-represent more deprived populations[ 9 ].
All patients over the age of 16 and registered with a THIN practice on 1st January 2008 were identified. Of these, patients who registered with a practice within the previous three months, and who were therefore less likely to have had their smoking status recorded, were excluded (the 2004 GP contract requires that smoking status of newly-registering patients is documented within three months for this recording to be financially rewarded[ 10 ]), leaving a sample of 2,426,370 individuals for analysis. These patients' medical records were searched for the last smoking-related Read Code documented in their notes before the index date, which was then used to classify patients as current, or non-current, smokers. Patients with no mention of smoking in their medical records were deemed to be non-smokers - it has been shown previously that this assumption produces smoking prevalence estimates in THIN in line with national statistics[ 11 ].
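The classification step just described can be sketched as follows. The Read code sets shown are placeholders for illustration, not the actual THIN code lists, which are far larger; the mapping of individual codes to smoking status is an assumption here.

```python
from datetime import date

# Illustrative code sets only; the real smoking-status Read code
# lists used with THIN are much larger than this.
CURRENT_SMOKER_CODES = {"137R.", "1372."}        # e.g. current smoker
NON_CURRENT_CODES = {"1371.", "137S.", "137L."}  # e.g. never/ex-smoker

def is_current_smoker(records, index_date):
    """Classify a patient from the last smoking-related Read code
    recorded before the index date.

    `records` is a list of (date, read_code) tuples. Patients with no
    smoking-related code are treated as non-smokers, as in the study.
    """
    smoking = [(d, code) for d, code in records
               if d < index_date
               and code in (CURRENT_SMOKER_CODES | NON_CURRENT_CODES)]
    if not smoking:
        return False                  # no record -> assume non-smoker
    _, last_code = max(smoking)       # most recent record wins
    return last_code in CURRENT_SMOKER_CODES
```

Only the latest code counts, so a patient recorded as a current smoker in 2005 but an ex-smoker in 2007 is classified as a non-current smoker at a 2008 index date.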
EPIC, the providers of THIN, mapped the postcode of each patient in THIN to the area's Mosaic type and group and Townsend score; the latter was provided as a categorical variable corresponding to national quintiles of deprivation in order to preserve patient anonymity.
Mosaic
Mosaic is a tool designed to enable businesses to understand consumers' demographic and lifestyle characteristics and ensure they target their products or services at the right people, in the right locations. The tool is an area-based classification system which allocates individuals to one of the 11 Mosaic groups or 61 types based on the nature of the people living within the same postcode area. The classification is carried out at the level of the full UK postcode, equivalent to approximately 15 households, and so all individuals living in these households will be assigned to the same Mosaic category according to their 'average' characteristics[ 7 ].
Approximately one third of the variables used to classify people are derived from the UK decennial census and the remainder from a combination of public and Experian-proprietary datasets. These include property valuations, house sale prices, self-reported lifestyle surveys, a survey of adults' consumption of products, brands and media, and intelligence gathered through monitoring internet use[ 7 ]. Detailed algorithms to explain how these variables are combined to assign each UK postcode to a Mosaic group and type are not available from Experian due to commercial sensitivities.
Mosaic data were provided for each individual in THIN according to their postcode, categorised into 61 'types', and their aggregated 11 broader 'groups'. Table 1 summarises the characteristics of the 11 Mosaic groups, showing the group name assigned by Experian and a brief description of the individuals in that group.
A fuller description of each type and group was obtained from the Mosaic Interactive Guide[ 12 ], an interactive program available from Experian which provides, for each of the 11 groups and 61 types, a photo collage that gives a snapshot of the characteristics of people in that particular category, and written descriptions of their main features such as typical housing types, income, and residents' attitudes towards the area they live in. In addition, the Interactive Guide describes the distribution of categories throughout the UK and ranks categories according to their relative performance across the variety of measures used to build Mosaic.
Analysis
Initially, the proportion of THIN patients who were current smokers was estimated by quintiles of the Townsend Index of Deprivation. Odds ratios, unadjusted and adjusted for age and sex, were obtained by logistic regression. Then, the prevalence of current smoking within each of the eleven Mosaic groups and each of the 61 types was calculated, and the magnitude and range of prevalence figures compared with those calculated previously for each Townsend quintile. Again, logistic regression was carried out to obtain odds ratios for being a current smoker in each Mosaic group and type, both unadjusted and adjusted for age and sex. All analyses were completed using STATA version 11.0 (STATA Corp, College Station, TX).
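The analyses were run in STATA, so the snippet below is only a sketch of the key quantity: for a binary comparison, the unadjusted logistic-regression odds ratio reduces to the 2×2 cross-product ratio, with a Woolf (log-scale) confidence interval. Applied to the quintile prevalences expressed per 1,000 patients (135 vs 327 smokers), it gives a crude OR of about 3.1, in the same region as the adjusted OR of 2.96 reported in the Results.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table, with a Woolf 95% CI.

    a = smokers, b = non-smokers in the exposed group;
    c = smokers, d = non-smokers in the reference group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Per 1,000 patients: most deprived quintile 32.7% smokers,
# least deprived quintile 13.5% smokers -> crude OR ~ 3.1
or_, lo, hi = odds_ratio(327, 673, 135, 865)
```

The confidence interval here is wide because the example uses counts per 1,000; at the scale of the full THIN sample the same calculation yields the very narrow intervals reported in the paper.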
The Interactive Guide[ 12 ] was used to conduct a qualitative exploration of the common characteristics of people living in the ten Mosaic types with the highest smoking prevalence, and the ten Mosaic types with the lowest smoking prevalence, and to attempt to identify any groups with unexpectedly high or low prevalence.
Ethical approval: This study was approved by the Derbyshire Research Ethics Committee.
Results
Of the 2,426,370 patients aged 16+ analysed in this study, 82% resided in England, 8% in Scotland, 6% in Wales and 4% in Northern Ireland, in line with official population estimates[ 13 ]. The average age of patients was 47.1 years and 49.3% were male. A Read Code documenting smoking status was available for 87.5% of patients; the proportion of patients with no record of smoking status in their notes increased from 11.3% of patients in the least deprived Townsend quintile to 13.6% in the most deprived, and ranged from 9.3% of those in Mosaic group I to 15.2% of those in group E. Overall, 20.8% of patients were recorded as current smokers in their medical records (22.5% of men and 19.1% of women).
A Townsend score was available for 86.9% of patients. As Figure 1 shows, smoking prevalence increased across the quintiles of Townsend score, from 13.5% in the least deprived quintile to 32.7% in the most deprived quintile (p-value for test of trend <0.001). In the logistic regression analysis, adjustment for age and sex made little difference to the results and, therefore, adjusted odds ratios only are presented. The odds of being a current smoker were increased almost three-fold in those in the most deprived quintile compared to the least deprived quintile (adjusted OR 2.96, 95% CI 2.92 to 2.99).
A Mosaic classification was available for 94.1% of patients, 7.2 percentage points more than the proportion for whom a Townsend score was available. As Figure 2 (ranked with categories in order of increasing prevalence) shows, prevalence was highest in group G ("Municipal Independence") at 36.8% and lowest in group A ("Symbols of Success") at 11.1%. The odds of current smoking were increased over four-fold in group G compared to group A (adjusted OR 4.41, 95% CI 4.33 to 4.49).
A brief description of the ten Mosaic types with highest and lowest smoking prevalence is shown in Table 2 . From the fuller descriptions available in the Mosaic guide, the 10 types with highest smoking prevalence were characterised by households that are mostly occupied by single residents, often single parents (G42, F37, F40, G41). Types D24 and H47 consist of cohabiting couples with children. The typical age of people in these Mosaic types is under 34, with the exception of types G43 (65-84 years) and F39 (65-84 years).
Almost all of the ten Mosaic types with the highest smoking prevalence are described as typically having few qualifications, and the majority are either unemployed or have manual occupations. The exception to this pattern is type F35, who are a diverse group comprising both disadvantaged young people as well as university graduates. A significant number of people have few, if any, qualifications, but the proportion of people with university degrees holding professional positions is well above the national average. Annual household income is below £7,499 for seven of the ten Mosaic types with the highest smoking prevalence, and below £24,999 for the other three.
The typical property types occupied by Mosaic types with high smoking prevalence are a mixture of houses and flats in urban or suburban locations, with 'public rented' ownership being most characteristic. With the exception of H47, none of the types have access to a car, and all types feel that they live in an area where there is little community support. Of the ten Mosaic types, nine are receptive to communication channelled through TV, eight to telemarketing, and seven to the tabloid press.
Of the 10 Mosaic types with lowest smoking prevalence, almost all are characterised by households that include married couples, with J51 as the exception, though individuals in this category are often widowed. The most typical age is between 45 and 64, and all types are described (with the exception of J51) as having degree level qualifications. All types not characterised as retired are within professional employment. Average annual household income for most types that are not retired is over £50,000, apart from types A04 and B10 who have a typical income between £25,000 and £49,999. The typical property type occupied by those in groups with the lowest smoking prevalence are houses owned outright, in suburban, semi-rural, or, in the case of type J51, seaside locations. All types, again except for J51, have easy access to a car, and every type feels that they live in a good area with support from neighbours. These groups are likely to be receptive to communication via broadsheet newspapers and the internet.
Discussion and Conclusions
The Townsend Index of Material Deprivation and Mosaic provide two different ways of profiling an individual's social circumstances in terms of the area in which they live. Amongst patients in the large primary care dataset of THIN, we have shown clear socioeconomic differences in smoking prevalence according to both of these measures. When using the Townsend Index, we found smoking prevalence to be progressively higher in those living in more deprived areas, in accordance with previous cross-sectional studies carried out in the UK that used the Townsend Index to indicate deprivation[ 14 ]. When using Mosaic groups, the likelihood of being a current smoker is highest in groups F and G, groups that are dependent on social benefits, compared with group A, which encompasses the most affluent members of the population.
However, the range of estimates of smoking prevalence is greater across the 11 Mosaic groups, and even more so across the 61 Mosaic types, than across the Townsend quintiles. For example, the highest prevalence observed in the Mosaic types was 42.7%, whilst the Townsend quintile with the highest proportion of current smokers had a prevalence of 32.7%. Smoking prevalence in the Mosaic type with the lowest proportion of current smokers was 8.6%, compared to 13.5% in the lowest Townsend quintile. Mosaic thus reveals starker social disparities in smoking prevalence in the UK than previously used measures of social class, and may be a useful tool for distinguishing the characteristics of groups with a particularly high smoking prevalence.
A Mosaic classification and/or Townsend quintile was missing for some patients in this analysis, though the odds of being a smoker for these are not extreme, suggesting that this data was missing at random and a high-prevalence group has not been missed. It is unclear why this information was missing for some people, but it may be that these patients' postcodes were not recorded by their general practice, and therefore EPIC was unable to map them to the area-based measures of deprivation.
An assumption has been made that patients not classified as current smokers are non-smokers, including those with no smoking-related Read codes documented in their medical records. This may lead to an underestimation of smoking prevalence. However, it has been shown that prevalence figures obtained using this assumption are reasonably reflective of those suggested by nationally-representative surveys[ 11 ], and the majority of patients with missing smoking records in THIN are either ex- or non-smokers[ 15 ]. The differences in the proportion of patients with a smoking record in different categories of each measure of deprivation were small and unlikely to have contributed to any great extent to the socio-economic differences in smoking prevalence reported.
The Mosaic classification, which groups individuals into 61 categories, may be quite a cumbersome system to use as a socioeconomic measure in most statistical models and, as noted already, little information is available from Experian about how the classification is derived, limiting assessment of the validity of their approach and the potential to replicate it. Some variables indicating individuals' health status, which may be related to past or current smoking behaviour, are used to derive Mosaic; this may confound identification of the groups most and least likely to smoke. In the interpretation of Mosaic it is also important to be aware that the classification is an area-level measure, based on postcode areas of approximately 15 households, and that the characteristics of any given type will only apply to the majority of individuals of that type - not all of them. It is also important to note that the estimated odds ratios presented in this study will overestimate the respective risk ratios across the groups as smoking prevalence is 20.8% overall, not a rare outcome. In calculating odds ratios, the use of the Mosaic group or type with the lowest smoking prevalence as the reference category will have maximised the difference in odds ratios observed across categories, though this is an appropriate approach for demonstrating the wider extremes in smoking prevalence that can be identified using Mosaic.
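The size of this overestimation can be quantified with the standard conversion of Zhang and Yu (JAMA, 1998): RR = OR / (1 − p0 + p0 × OR), where p0 is the outcome prevalence in the reference group. A quick illustration using the figures reported above:

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a risk ratio, given the outcome
    prevalence p0 in the reference group (Zhang & Yu, JAMA 1998)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# Group A (reference) smoking prevalence was 11.1%; the adjusted OR
# for group G vs group A was 4.41.
rr = or_to_rr(4.41, 0.111)   # about 3.2, well below the OR of 4.41
```

For a rare outcome (p0 near zero) the formula collapses to RR ≈ OR, which is why the distinction matters here, where overall prevalence is 20.8%.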
Those Mosaic types with a higher prevalence of smoking were characterised by minimal levels of education, low income, and manual occupations. These findings are consistent with existing knowledge, acquired using individual measures of socioeconomic status, such as income, education, and occupation[ 14 ]. However, Mosaic provides further detail - the Mosaic types with highest prevalence do not have access to a car, have little community support, are debt-ridden and tend to spend a lot of time in front of the television. Some of these factors seem likely to contribute to difficulty in quitting smoking, suggesting, perhaps, difficulty in accessing cessation support and advice. These findings provide some insights into how these groups might be targeted, such as through mobile smoking cessation services and provision of transport to enable access to existing services, or through television campaigns, utilising the principles and techniques of social marketing to ensure that smokers are targeted with appropriate cessation interventions[ 16 ]. There is some evidence that providing cessation services in novel settings, such as community pharmacies, dental surgeries or workplaces, may be effective in engaging large numbers of smokers, though more research is needed to determine whether these are successful in reaching disadvantaged groups in particular[ 17 ]. Similarly, mass media campaigns may have a valuable role to play in encouraging smoking cessation, though again there is limited evidence whether such campaigns are effective in reaching large numbers of the most disadvantaged smokers[ 18 , 19 ]. Given that many Mosaic groups with the highest smoking prevalence are in debt, offering financial incentives may provide a useful tool to engage these groups in cessation services. Existing research suggests such incentives may indeed increase the number of disadvantaged smokers who attempt to quit, and the number who succeed in doing so, though again further studies would be of benefit[ 17 ].
This study is one of the first to look at the association between Mosaic and smoking prevalence, and certainly the first to do so on such a large scale. In conclusion, the Mosaic classification system has been found to be a useful tool in examining the disparities in smoking prevalence between different socioeconomic groups within the UK, with those in the group with the highest smoking prevalence having over four times the odds of smoking of those in the group with the lowest prevalence. Mosaic is potentially useful for identifying the characteristics of groups of heavy smokers, which can then be used to tailor cessation interventions to ensure these are as successful as possible and make the best use of resources. Though Mosaic only classifies individuals living in the UK, a similar approach to the use of market research and consumer segmentation intelligence may provide a means to identify groups of people with high smoking prevalence in other countries and target them with appropriate cessation interventions.
This study is one of the first to look at the association between Mosaic and smoking prevalence, and certainly the first to do so on such a large scale. In conclusion, the Mosaic classification system has been found to be a useful tool in examining the disparities in smoking prevalence between different socioeconomic groups within the UK, with those in the group with the highest smoking prevalence being over four times as likely to smoke as those in the group with the lowest prevalence. Mosaic is potentially useful for identifying the characteristics of groups of heavy smokers which can then be used to tailor cessation interventions to ensure these are as successful as possible and make the best use of resources. Though Mosaic only classifies individuals living in the UK, a similar approach to the use of market research and consumer segmentation intelligence may provide a means to identify groups of people with high smoking prevalence in other countries and target them with appropriate cessation interventions. | Background
There are well-established socio-economic differences in the prevalence of smoking in the UK, but conventional socio-economic measures may not capture the range and degree of these associations. We have used a commercial geodemographic profiling system, Mosaic, to explore associations with smoking prevalence in a large primary care dataset and to establish whether this tool provides new insights into socio-economic determinants of smoking.
Methods
We analysed anonymised data on over 2 million patients from The Health Improvement Network (THIN) database, linked via patients' postcodes to Mosaic classifications (11 groups and 61 types) and quintiles of Townsend Index of Multiple Deprivation. Patients' current smoking status was identified using Read Codes, and logistic regression was used to explore the associations between the available measures of socioeconomic status and smoking prevalence.
Results
As anticipated, smoking prevalence increased with increasing deprivation according to the Townsend Index (age and sex adjusted OR for highest vs lowest quintile 2.96, 95% CI 2.92-2.99). There were more marked differences in prevalence across Mosaic groups (OR for group G vs group A 4.41, 95% CI 4.33-4.49). Across the 61 Mosaic types, smoking prevalence varied from 8.6% to 42.7%. Mosaic types with high smoking prevalence were characterised by relative deprivation, but also more specifically by single-parent households living in public rented accommodation in areas with little community support, having no access to a car, few qualifications and high TV viewing behaviour.
Conclusion
Conventional socio-economic measures may underplay social disparities in smoking prevalence. Newer classification systems, such as Mosaic, encompass a wider range of demographic, lifestyle and behaviour data, and are valuable in identifying characteristics of groups of heavy smokers which might be used to tailor cessation interventions. | Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SL and LS conceived the study, AS performed the statistical analyses and wrote the first draft of the manuscript, and all authors contributed to its critical revision and approved the final version.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/755/prepub | Acknowledgements
The authors acknowledge the support of Yue Huang and Chris Smith at the University of Nottingham in preparing the THIN data, and Emma Bradley and Emily Sparks at Experian for their advice regarding the Mosaic data.
Funding: Lisa Szatkowski is supported by a Cancer Research UK PhD Studentship. Funding from the National Prevention Research Initiative is also gratefully acknowledged. Relevant NPRI funding partners: British Heart Foundation; Cancer Research UK; Department of Health; Diabetes UK; Economic and Social Research Council; Medical Research Council; Research and Development Office for the Northern Ireland Health and Social Services; Chief Scientist Office, Scottish Executive Health Department; The Stroke Association; Welsh Assembly Government and World Cancer Research Fund. Sarah Lewis is a member of The UK Centre for Tobacco Control Studies, a UKCRC Public Health Research: Centre of Excellence. Funding from British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged. | CC BY | no | 2022-01-12 15:21:46 | BMC Public Health. 2010 Dec 7; 10:755 | oa_package/d4/e8/PMC3016386.tar.gz |
PMC3016387 | 21143849 | Background
Socioeconomic inequality and its impact on health is a growing global public health concern [ 1 ]. Smoking has been identified as the single biggest cause of inequality in morbidity and mortality between rich and poor people in many countries [ 2 ]. Studies from Western countries have reported an association between socioeconomic status (SES) and smoking, to the disadvantage of those in lower SES groups [ 3 ]. Studies among adolescents have shown the same pattern, with some exceptions where the association was found only for some ages, genders or SES indicators [ 4 - 9 ]. Among adolescents in developing countries, however, the relationship between socioeconomic factors and smoking is unknown. In this study, we explore this relationship among adolescents in Ghana, a developing country in sub-Saharan Africa.
Unlike in Western and many other developing countries, the prevalence of smoking in sub-Saharan Africa is relatively low among both adolescents and adults, based on the scanty information available [ 10 ]. In Ghana, a small study of urban adolescents found a lifetime cigarette use prevalence of 7.5% [ 11 ], and among adults in one region, 4% were current smokers [ 12 ]. Thus the prevalence of smoking is relatively low despite a long history of tobacco cultivation and manufacturing [ 13 ]; nevertheless, Addo et al. [ 14 ] found that the current prevalence of tobacco use among civil servants in the capital city of Accra represents a rise over a thirty-year period. The population has also traditionally used smokeless tobacco, tawa, but how common this is at the population level, or how it relates to SES, is not known.
In Ghana, as in most African countries, there are few or no tobacco control measures, and the accessibility and availability of tobacco to minors are unrestricted [ 11 ], except on religious or moral grounds. In the absence of such controls, we expect tobacco use to be lower among adolescents in higher socioeconomic groups, who are likely to be favoured by any available health education, parental education and other socio-cultural factors, and higher among those in lower socioeconomic groups, resulting in socioeconomic differences in tobacco use similar to those found in Western countries.
Assessments of an adolescent's SES should take into account the transitional nature of adolescence and should be conceptualized in two dimensions: familial SES, reflecting the social class of origin, and the adolescent's individual social position in relation to his/her peers [ 5 - 9 ]. Individual social position, measured by school career or school performance, predicts education in adulthood [ 15 ]. In addition to SES, inter-generational social mobility has been shown to relate to health behaviours, including smoking [ 16 - 18 ]. Inter-generational social mobility can be conceptualized as the transition between familial (original) SES in childhood and individual (achieved) social position in adulthood.
The aim of this study was to investigate socioeconomic differences in smoking and tawa use among Ghanaian adolescents using multiple SES measures which assess familial SES and the adolescent's individual social position. Based on these two dimensions, we also explore how the inter-generational social mobility relates to tobacco use. | Methods
Data
A cross-sectional survey of the health behaviours and lifestyles of school-aged adolescents was conducted from June to August 2008 in three administrative regions in Ghana. Thirty schools were randomly sampled, ten per region, from the Eastern (total number of schools in the region = 2924), Greater Accra (total number of schools in the region = 1825) and Volta Regions (total number of schools in the region = 2184). The Ghana Education Service's School Health Programme register of schools in the country was the source of the sampling frame. The sampling was done as follows: first, ten schools were randomly selected in each region, comprising four public Junior High Schools (total number in the three regions = 5325), two private Junior High Schools (total number in the three regions = 1395), three public Senior High Schools (total number in the three regions = 171) and one private Senior High School (total number in the three regions = 47), in order to reflect the school types in Ghana. Second, in each school, all students whose names were found in the class attendance register of the randomly selected classes were eligible to participate in the survey. The eight-page questionnaire was anonymous and self-administered, and was designed to exclude any information that would reveal the identities of the participants. One trained supervisor was assigned to each classroom during the questionnaire administration to address pupils' concerns when necessary. The survey commenced simultaneously in all the participating classes in a given school. On completion, participants were asked to drop their questionnaires into an envelope placed in front of the class. The study protocol was approved by the ethical committee of the Ghana Health Service Research Unit in Accra, Ghana.
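The two-stage stratified design described above can be sketched as follows; the register structure and function names are hypothetical and for illustration only:

```python
import random

# Strata and number of schools drawn per region, as described in the text.
DESIGN = [("public JHS", 4), ("private JHS", 2),
          ("public SHS", 3), ("private SHS", 1)]

def draw_schools(register, region, seed=None):
    """Draw ten schools for one region following the stratified design.

    register: dict mapping (region, stratum) -> list of eligible schools.
    """
    rng = random.Random(seed)
    selected = []
    for stratum, n in DESIGN:
        # Sample without replacement within each stratum of the region.
        selected += rng.sample(register[(region, stratum)], n)
    return selected
```

Within each selected school, all students in the randomly chosen classes were then eligible, so the second stage is a cluster sample of classes rather than a further random draw of individuals.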
The characteristics of the respondents are presented in Table 1 . Of the 1566 respondents who completed the questionnaire, only 13-18-year-old students were included in this study (N = 1165). These comprised 41.5% (483) boys and 55.3% (644) girls. The mean ages of boys and girls were 15.8 years and 15.9 years, respectively. The response rate was 89.7% (the sample was based on the academic year's register of pupils). Only one pupil declined to answer. A convenience sample of 127 non-students in the same age group showed a similar pattern of responses for most of the key indicators measured in our survey; among this group, 3.6% were tawa users and 7.3% were smokers.
Indicators of socioeconomic status
Indicators of familial socioeconomic status
A material affluence scale (MAS) with five categories (poorest, poor, average, affluent and most affluent) was used, based on our previous research [ 19 ]. The items on which the scale was based covered three aspects of material circumstances: household assets (e.g. television) and housing characteristics (e.g. type of house), other assets (e.g. farm ownership), and school-related indicators (e.g. working, other than doing household chores, in the morning before going to school). Material affluence mirrors the lack or availability of the resources and goods necessary for decent living, relative to what is generally available in the society [ 20 ]. Various scales measuring material affluence have been constructed to capture the amount of such resources available to families [ 19 , 21 ]. The items of these scales are intended to capture the key aspects of family wealth and material circumstances.
Family structure was measured in four categories (nuclear family, both parents alive but not living together, only one parent alive, or both parents dead). Adolescents living in any family type other than one in which both parents were alive and living together were regarded as socially disadvantaged.
Father's, mother's or other guardian's highest level of education was categorised into illiterate, basic education, secondary education and tertiary education, according to the classification of the Ghanaian educational system. Parental occupational status was measured by respondents reporting their father's, mother's, or other guardian's occupation or employment status. These were categorised into grades A (chief in rank), B (professional and managerial), C (professional non-managerial), D (skilled manual), E (unskilled manual) and unemployed, according to grades in the Ghana Civil Service (Head of Civil Service 2000). None of the respondents fell into the A category. In the analysis, we stratified grades B and C as high grade, and grades D, E and unemployed as low grade.
The adolescent's individual social position
Adolescents indicated their school performance in the previous term's examination. Responses were coded into three categories: high (excellent, very good), middle (good), and low (average, poor). Adolescents also indicated their plans after graduation from their current level of schooling (continue schooling, learn a trade, look for a job, or not sure). These were coded as continue schooling and not continue schooling (learn a trade, look for a job, or not sure).
Predicted inter-generational social mobility
Two measures of inter-generational social mobility were constructed, each with four categories (upwardly mobile, stable in high SES, stable in low SES, and downwardly mobile), combining social class of origin (measured by MAS or father's education) with achieved social position (measured by plans after graduation). Mobility 1: MAS was categorised into High (3 = top 20%), Medium (2 = next 40%) and Low (1 = lowest 40%), while plans after graduation was categorised as continue schooling (1) or not continue schooling (0). Adolescents were classified as socially stable in the low SES (stable in low SES) if MAS = 1 and plans after graduation = 0; as socially stable in the high SES (stable in high SES) if MAS = 2 or 3 and plans after graduation = 1; as upwardly mobile if MAS = 1 and plans after graduation = 1; and as downwardly mobile if MAS = 2 or 3 and plans after graduation = 0 (Table 1 ).
Mobility 2: Father's education was categorised into High (3 = tertiary education), Middle (2 = secondary education) and Low (1 = illiterate or primary education). Adolescents were classified as stable in low SES if father's education = 1 and plans after graduation = 0; as stable in high SES if father's education = 2 or 3 and plans after graduation = 1; as upwardly mobile if father's education = 1 and plans after graduation = 1; and as downwardly mobile if father's education = 2 or 3 and plans after graduation = 0 (Table 1 ).
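Both mobility measures reduce to the same decision logic, which can be sketched as a small function (an illustrative reimplementation of the rules described above, not the authors' code):

```python
def classify_mobility(origin_ses, continues_schooling):
    """Classify predicted inter-generational social mobility.

    origin_ses: familial SES coded as in the text (1 = low, 2 = medium/middle,
                3 = high), from either MAS or father's education.
    continues_schooling: True if the adolescent plans to continue schooling
                after graduation, else False.
    """
    if origin_ses == 1:
        return "upwardly mobile" if continues_schooling else "stable in low SES"
    # origin_ses is 2 or 3
    return "stable in high SES" if continues_schooling else "downwardly mobile"

# The four combinations described in the text:
assert classify_mobility(1, False) == "stable in low SES"
assert classify_mobility(1, True) == "upwardly mobile"
assert classify_mobility(3, True) == "stable in high SES"
assert classify_mobility(2, False) == "downwardly mobile"
```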
Indicators of tobacco use
Smokers were defined as adolescents who had ever smoked a cigarette, and tawa users as those who had ever tried tawa. Tawa comes in two forms: fine-grain tawa, tobacco that often comes in teabag-like pouches that users "pinch" or "dip" between their lower lip and gum, allow to sit there, and spit out the juice; and chewing tawa, shredded or twisted tobacco leaves that users put between their cheek and gum, chew, and spit out the juice.
Parental smoking was based on adolescents' responses to two separate questions regarding their mothers' and fathers' smoking, each measured in five categories (father or mother smoked at present, had never smoked, had smoked but had stopped, couldn't say anything about parental smoking, or had no father or mother). Dichotomous variables (never vs ever/current smoker) were created for maternal and paternal smoking.
The proportions of missing data were relatively low for all the indicators (Table 1 ). The proportions of tawa users and smokers were 5.7% and 6.6%, respectively.
Statistical analysis
Pearson's Chi-square tests were used to test the associations between gender and each of the studied variables. Logistic regression analysis was used to model the associations between the socioeconomic indicators and tobacco use. The strength of the associations was expressed as odds ratios (OR) with 95% confidence intervals (CI). First, bivariate models (Model 1) were fitted including each of the socioeconomic measures one at a time, controlling for age and gender. Second, multivariate logistic regression models were used to test whether individual SES measures were independently predictive of tobacco use: Model 2 included age, gender and all the statistically significant socioeconomic indicators, and Model 3 comprised the indicators in Model 2 plus parental smoking. For the social mobility analyses, Model 2 was adjusted for family structure and Model 3 for family structure and parental smoking. In all analyses, those with the highest socioeconomic advantages were used as the reference categories.
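As a minimal illustration of how an odds ratio and its 95% confidence interval are obtained for a single binary exposure, the following sketch applies Woolf's method to a 2×2 table; the counts are hypothetical, and the study's actual estimates came from logistic regression models adjusted for age and gender:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf's method).

    a: exposed users, b: exposed non-users,
    c: unexposed users, d: unexposed non-users.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: 40/400 low-SES vs 20/400 high-SES adolescents use tobacco.
or_, (lo, hi) = odds_ratio_ci(40, 360, 20, 380)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR ≈ 2.11, CI ≈ 1.21-3.68
```

A confidence interval excluding 1 corresponds to a statistically significant association at the 5% level, which is how the tabulated ORs in the Results are read.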
Tobacco use by familial socioeconomic status
A lower level of material affluence was associated with a higher likelihood of tawa use, but its association with smoking was not statistically significant. Adolescents who lived in family types other than the nuclear family were more likely to smoke or use tawa than those who lived in a nuclear family. Lower paternal education predicted both smoking and tawa use (Table 2 , Model 1): adolescents whose fathers had primary education were more likely to use tawa than those whose fathers had tertiary education, albeit at borderline statistical significance, and adolescents with illiterate fathers were more likely to smoke than those whose fathers had tertiary education. There were no statistically significant associations between tobacco use and mother's education or father's or mother's occupation. In multivariate analysis, material affluence independently predicted tawa use when the effects of the other statistically significant familial socioeconomic measures (Table 2 , Model 2) and parental smoking were controlled for (Table 2 , Model 3). Similarly, family structure independently predicted both smoking and tawa use.
Tobacco use by individual social position
There were striking differences in tobacco use by plans after graduation, but not by school performance (Table 3 ). Adolescents who did not plan to continue schooling after graduating were more likely to smoke or use tawa than those who planned to further their education. In multivariate analysis, plans after graduation independently predicted both smoking and tawa use, even after controlling for MAS, family structure and father's education (Table 3 , Model 2) and parental smoking (Table 3 , Model 3).
Tobacco use and adolescents' predicted inter-generational social mobility
Tawa use and smoking were related to both downward social mobility and stable low SES, whether mobility was measured by the material affluence scale or by father's education, compared with being stable in the high SES (Table 4 ). In multivariate analysis, tawa use and smoking remained independently related to downward social mobility, and particularly to being stable in low SES, by both indicators of social mobility, after adjusting for family structure (Table 4 , Models 2) and parental smoking (Table 4 , Models 3). The only exception was that the relationship between tawa use and downward mobility disappeared after controlling for parental smoking. Upwardly mobile adolescents did not differ in smoking or tawa use from those stable in the high SES.
Age and gender differences
Family structure and material affluence were associated with tawa use in the same direction for both genders, but the associations were statistically significant only for girls. Similarly, the associations between plans after graduation and both forms of tobacco use were statistically significant only for girls. When analysed separately in two age categories, younger adolescents (13-15-year-olds) and older adolescents (16-18-year-olds), the association between MAS and tawa use was statistically significant only among the younger adolescents. There were more girls in the sample than boys (Table 1 ).
The main findings are, first, that socioeconomic differences, measured by both familial and individual SES, exist in tobacco use among Ghanaian adolescents, to the disadvantage of those in lower socioeconomic groups; the differences follow the same pattern as those found in Western countries. Second, an adolescent's individual social position, measured by plans after graduation, is a stronger predictor of tobacco use than familial SES. Third, children expected to end up in adulthood in a lower SES than their families (downwardly mobile), or who remain stable in the low SES, are more likely to use tobacco than children who are stable in the high SES. Fourth, the socioeconomic pattern was similar for smoking and tawa use, except that the material affluence scale was related to tawa use only.
Our finding of a higher probability of tobacco use among adolescents in lower SES groups is mostly in line with previous studies [ 4 , 9 ], some of which reported a high prevalence of smoking among adolescents whose parents had a low educational or occupational position [ 4 , 9 ]. Contrary to our expectation, familial SES measured by father's or mother's occupation or by mother's education was not an important predictor of an adolescent's smoking or tawa use in this study. On the other hand, adolescents of lower familial SES as measured by the material affluence scale were more likely to use tawa than those scoring higher on the scale. Traditional tawa is likely to be cheaper, more available and more accessible than cigarettes. Being smokeless, it is also relatively easy to hide and to use without anybody noticing, and Ghanaian society is perhaps more tolerant of its use than of smoking. These and other socio-cultural factors could explain in part why the material affluence scale was related to tawa use but not to smoking.
An adolescent's individual social position, indicated by plans after graduation, was strongly related to both forms of tobacco use, in a similar pattern as in Western countries [ 5 ], but school performance was not. A higher prevalence of both smoking and tawa use was found among adolescents who did not plan to continue schooling after graduation compared with those who did. Previous studies using indicators that capture adolescents' individual SES have similarly shown that adolescents of low individual SES are more likely to take up smoking and other health-compromising behaviours [ 5 , 6 , 8 , 9 , 18 ]. For example, adolescents who discontinue school after comprehensive school often engage in health-damaging behaviours typical of lower socioeconomic groups [ 5 ]. There are plausible explanations for the strong negative association between plans after graduation and tobacco use in our study. In Ghana, where unemployment is high even for those with post-secondary education, having no plans to continue schooling after the Junior or Senior High School level may be a true sign of failure and hopelessness, both for the present and for the future. This may lead to low self-esteem, stress and depression, and consequently to tobacco use as a means of handling these frustrations [ 22 ]. This indicator is likely to reveal the hidden character of an adolescent's individual social position, independent of his or her familial status.
An adolescent's individual social position as indicated by school performance was not related to tobacco use. In Western countries, adolescents with poor school performance have a higher prevalence of smoking than those with good school performance [ 9 ]. Explanations offered for this association are that adolescents with poor school performance are likely to benefit less from health education than those who perform better, and that they may turn to smoking as a way of coping with the stress caused by educational demands [ 22 ]. In Ghana, and perhaps in most developing countries, although school performance is an important determinant of educational success and consequently of future social position, factors such as gender, affordability and socio-cultural circumstances are equally important in determining the link between school performance, educational success and hence social position. It is not clear to what extent these factors account for the non-significant relationship between school performance and tobacco use found in this study. Furthermore, school performance was self-reported, as in most studies [e.g. [ 9 ]], and it is uncertain to what extent this might have affected the relationship reported here.
Adolescents living in a nuclear family were less likely to use tobacco than those in non-nuclear families, independent of parental smoking. Previous studies have highlighted the role of parents in the prevention of health-compromising behaviours among adolescents [ 23 - 25 ]. Flisher et al. [ 26 ] found that among South African adolescents, not being raised by both parents was significantly associated with cigarette smoking among black and coloured students, and inversely associated with cigarette use among black students. Differences in parenting style between family structures, or perhaps weaker parental control over adolescents not living with both parents, may account for this association.
There is a paucity of research on inter-generational social mobility and tobacco use among adolescents. A previous study showed that health-compromising behaviours such as smoking and alcohol use are more frequent among downwardly mobile young people, and less frequent among the upwardly mobile, than among peers who remained in their SES of origin [ 17 ]. A recent study also found that among young people, risk behaviours like tobacco use were more prevalent among the downwardly mobile or stable compared with the upwardly mobile [ 18 ]. In our study, similarly, downwardly mobile adolescents and those staying in the low SES were more likely to use tobacco than those stable in the high SES. Furthermore, our finding of a higher probability of tobacco use among those stable in the low SES highlights the effect of cumulative socioeconomic disadvantage over generations on adolescents' tobacco use. We did not find any statistically significant difference in tobacco use between upwardly mobile adolescents and their peers who were stable in the high SES.
Strengths and limitations
We used a representative sample of schools in both urban and remote rural areas of three regions that are representative of the entire country; this is the first study of its kind in Ghana. Some of the questions we used have also been used in other studies, for example the Global Youth Tobacco Survey (GYTS) and the Global School-based Student Health Survey (GSHS), which have been conducted in many African countries. Moreover, to the best of our knowledge, ours is the first study to investigate the traditional smokeless tobacco (tawa) in Ghana.
Self-report is the only practical way to conduct large surveys, but it could lead to recall bias or intentional misreporting, which could affect the accuracy of the reports. However, this should not affect the relationships between SES and tobacco use among adolescents, and similar methods have been used in most previous studies [ 4 , 5 ]. The study was cross-sectional, so no cause-and-effect relationship can be established; nonetheless, it can be argued that at adolescent age socioeconomic status is likely to precede tobacco use, and not the reverse. During data collection an investigator was present in the classroom to address pupils' concerns when necessary. Although we do not believe this affected the adolescents' responses, if it did, it would more likely have resulted in underestimation of both tobacco use prevalence and socioeconomic status, rather than overestimation. Our sample of students was drawn from a sample of schools; the clustering of students may slightly change the standard errors of our estimates, although this is unlikely to change either the overall results or the conclusions reached in this study. Due to scarce resources, only adolescents in schools were included. On the other hand, a similar pattern of responses for most of the key indicators in this study was found among a convenience sample of non-students in the same age group. Moreover, the school enrolment rate in Ghana for the age group of our respondents is high: 78.8% for Junior High Schools (Ministry of Education, Science and Sport, Ghana, 2008).
Our finding of a higher likelihood of tobacco use among adolescents in lower socioeconomic groups suggests that in the future there will be differences in tobacco use, and in tobacco-related morbidity and mortality, between adult socioeconomic groups in Ghana, leading to health differences similar to those seen in Western countries. Furthermore, this study shows that, during adolescence, tobacco use is more strongly influenced by individual social position than by familial SES. As an adolescent's familial SES is an assigned status, its impact on health behaviours may lessen during the transition from dependence to independence; individual social position, by contrast, captures the transitional nature of adolescence as well as the adolescent's standing among peers. This study adds to the knowledge of socioeconomic differences in tobacco use among adolescents in developing countries, particularly in Africa. Health promotion and tobacco control strategies aimed at reducing adolescent tobacco use should pay attention to those of lower social and material status, and to those in danger of discontinuing education after the basic level. | Background
In Western countries, tobacco use is most prevalent among adolescents in lower socioeconomic groups. The association between socioeconomic status (SES) and tobacco use among adolescents in developing countries is unexplored. Using multiple SES measures, we investigated this association among adolescents in Ghana.
Method
A school-based survey of a representative sample of 13-18-year-old Ghanaians (N = 1,165, response rate = 89.7%) was conducted in three regions in 2008. Logistic regression analysis was used to evaluate the relationships of smoking and tawa (smokeless tobacco) use with familial SES (parental occupation and education, material affluence scale, family structure), an adolescent's individual social position (school performance, plans after graduation) and inter-generational social mobility (derived from the differences between familial and individual positions).
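For a single binary SES indicator, the logistic regression used here reduces to the familiar 2×2 odds ratio. As an illustration only — the counts below are invented, not the study's data — the odds ratio and its Wald 95% confidence interval can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald confidence interval.

    a: low-SES tobacco users,  b: low-SES non-users,
    c: high-SES tobacco users, d: high-SES non-users.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 40/160 users among low SES, 20/280 among high SES
or_, lo, hi = odds_ratio_ci(40, 160, 20, 280)
```

An odds ratio above 1 with a confidence interval excluding 1 would indicate higher odds of tobacco use in the lower SES group; the full models in the paper additionally adjust for the other SES measures.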
Results
Socioeconomic differences existed in tobacco use whether measured by familial SES or individual social position with higher prevalence in lower socioeconomic groups. Low father's education and living in a non-nuclear family were associated with both forms of tobacco use while low material affluence was associated with tawa use only; individual social position measured by plans after graduation was the strongest predictor of both smoking and tawa use. Inter-generational downward social mobility and particularly staying in low SES was related to both forms of tobacco use.
Conclusions
Similar to Western countries, lower SES is related to adolescent tobacco use in developing countries as well. Cumulative socioeconomic disadvantage over generations increases the probability of tobacco use.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
DD, LK and AR were involved in the conception and design of the study. DD, SR and AR were involved in the drafting and the revising of the questionnaire. DD was the principal investigator during the data collection. DD analysed the data and drafted the manuscript. All authors were involved in the interpretation of data and the critical revision of the manuscript for important intellectual content. All authors gave final approval of the version to be published.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/758/prepub

Acknowledgements
Many thanks to the World Health Organisation (WHO) Country Office, Ghana, particularly Mr Selassi D'Amanda, for the provision of office space and technical support during the data collection. Our appreciation also goes to the Ghana Health Service, Adolescents health unit, for providing the vehicle for the data collection. We would like to thank Mr. Charles Fenuku, Mr. Akuffo, Miss Forzia, Miss Tani Abubakari, Miss Rejoice Nutakor, Mrs. Cynthia Bosumtwi-Sam, Miss Rose, Mr Boahen, Mr. Francis Kpodo and his team for assisting in data collection and entry. We thank Mr. Lasse Pere for data management.
Funding: The study was financially supported by the Finnish Cultural Foundation Central Fund, a Tampere University research stipend, the Juho Vainio Foundation, Academy of Finland grant number 130481 and the Competitive Research Funding of the Pirkanmaa Hospital District, Tampere University Hospital.

Citation: BMC Public Health. 2010 Dec 8; 10:758. License: CC BY.
PMC3016388 (PMID 21143875)

Background
Over the last two decades, Australia, like many countries, has experienced widening inequalities in the social determinants of health. From 1996 to 2007, the former conservative government's policies increased inequality across Australian communities, despite achieving sustained economic growth [ 1 ]. Policy analysts argue that this was caused by government action on two levels. Firstly, by pursuing policies, such as disinvestments in public education and healthcare, that actively penalized those at the lower end of the social gradient [ 2 ]. Secondly, by contracting out government services to not-for-profit organisations, the government constrained the efforts of non-state actors to address the needs of disadvantaged individuals and communities [ 3 ].
In 2007, the Australian Labor Party ran for government on a platform of increased equality and social inclusion. Since taking office they have launched the Social Inclusion Agenda (SIA): an ambitious large-scale social policy initiative designed to re-orientate the delivery of social and public services and redress growing inequality. The government anticipates that the SIA will combat complex and intractable problems of exclusion and disadvantage by promoting social, economic and civic participation and by re-orientating the provision of welfare and social services [ 4 ]. Under the government's vision of a socially inclusive Australia, all citizens will have resources and opportunities to: participate in education and training; work in employed, voluntary, family or caring capacities; become engaged in their local communities; and have a voice to influence decisions which affect their lives [ 4 ]. While many hope that the SIA will achieve institutional change within government, significant emphasis is being placed on supporting and better facilitating non-state actors to address disadvantage. Primarily, these actors are not-for-profit organisations operating in the welfare sector. This approach is in keeping with international trends; welfare reform is increasingly concerned with altering the way in which welfare systems operate and are organized [ 5 ]; to achieve this re-organisation, governments are increasingly looking to the not-for-profit sector [ 6 ].
The SIA has the potential to significantly reduce inequalities in the social determinants of health, thereby bringing widespread benefits to individuals and communities. However, given the reliance on non-state actors, the SIA is a challenging and risky policy intervention. The choice of social inclusion as a policy framework adds another layer of complexity to this challenging implementation environment. Individuals who are disadvantaged or marginalised experience multiple, complex, and changing barriers to wellbeing and inclusion [ 1 ]. It is important to note that for services to be successful in promoting the inclusion of these individuals, they must be flexible, adaptive and locally responsive [ 1 ]. The SIA therefore faces an added challenge of allowing for local flexibility, while enforcing a new guiding principle for welfare and the delivery of services.
To ensure the success of the SIA, significant macro-level coordination is required between different levels, and across different departments, of federal and state governments. The government has also made a strategic commitment to the not-for-profit sector in order to deliver on the outcomes of the SIA. This means that the SIA must also be successfully adapted and implemented at the local level by these organisations. To facilitate this, the government aims to create new partnerships and opportunities to encourage not-for-profit organisations to deliver more innovative, integrated and holistic services for marginalised and disadvantaged people.
The not-for-profit (or third) sector has an important, and increasingly prominent, role in addressing the social determinants of health. Since the 1970s the sector has been an important facilitator of social and civic participation, and is understood to build social capital and promote social cohesion [ 7 ]. Research into the social determinants of health has now compiled a considerable body of evidence that links social participation, cohesion and some forms of social capital to health [ 8 , 9 ]. The third sector is also fundamental to more explicitly public health-orientated activities: not-for-profit operations encompass health promotion, community development, community empowerment, and consumer participation [ 10 ]. Over the last two decades, the sector has become increasingly involved in the provision of social and public services, which support the health and welfare of the population [ 3 , 7 , 6 , 11 , 12 ]. Not-for-profits are also increasingly seen as important contributors to policies that impact the social determinants of health.
However, as Michael Marmot [ 13 ]: p160. argues, "[w]hile the real and potential contribution of the third sector to reducing health inequalities is recognised, there remains concerns about how the sector is supported, both to deliver its services and to effectively engage [with government] as a strategic partner".
The decision to implement the SIA primarily through the not-for-profit sector means that ultimately the success or failure of the policy resides within the organisational contexts of individual not-for-profits. Under the SIA, not-for-profit organisations face a new set of rules - how organisations respond to these rules will influence whether the SIA is realised at the local level [ 14 ]. Thus, successful implementation relies upon: the government putting in place the right infrastructure (e.g. funding structures) to allow organisations to address exclusion; the appropriateness of social inclusion as a framework for the sector; and the willingness and capacity of individual not-for-profits to engage with the agenda.
For Australia, the SIA is arguably one of the most significant broad-scale public health policies of recent times. With a focus on redressing inequality, increasing participation and social connectedness and providing better services and support to citizens, it has the potential to make significant inroads into inequalities in the social determinants of health. However, the reliance on the not-for-profit sector raises a number of questions: will there be sufficient change at a state and institutional level to allow not-for-profit organisations to successfully address exclusion? Will organisations respond to the challenges thrown down by the SIA? And, is social inclusion an appropriate, or useful, concept for those who work at the coalface of disadvantage?

Methods/Design
To answer these questions, this study takes a policy-to-practice approach: examining changes in federal policy through to 'street-level' service delivery. In doing so, it aims to build a practical understanding of how the SIA will impact upon the not-for-profit sector and its ability to address disadvantage and promote social inclusion. The study is funded by the Australian National Health and Medical Research Council and the Sidney Myer Foundation. It received approval from the University of Melbourne School of Population Health Human Ethics Advisory Group.
Core objectives:
1. To determine what changes are occurring under the SIA in the relationships between the not-for-profit sector organisations and other sectors, such as government and private enterprise
2. To examine how this change impacts upon not-for-profit organisations and their ability to promote social inclusion
3. To ascertain if, and how, this change can inform health promotion strategies which seek to address health inequalities through promoting social inclusion
The research takes a policy-to-practice approach. Case studies of two not-for-profit organisations will be contrasted with the perspectives of policy makers, researchers and other experts in the field. The research will be carried out in three phases:
1. Policy and desktop analysis of government policy and public debate relating to the SIA
2. Ethnographically-informed comparative case study analysis of two not-for-profit organisations involved in the delivery of social services
3. Interviews with experts in the field, such as federal policy makers and not-for-profit researchers
Increasingly, studies of social policy implementation and welfare reform have used inductive case study approaches at the organizational level [ 15 - 19 ]. In the case of social services, this is viewed as a particularly appropriate and productive methodological turn [ 14 , 15 ]. Emerging research in this area highlights the importance of examining whether and how written policies are "translated into operational realities", and the variations and permutations that occur during this process [ 20 ]: p145. This 'street-level' analysis addresses current gaps in our understanding of policies and how they work [ 20 ]. For example, it gives us a fuller picture of how policies such as the SIA are "produced and experienced in daily life" [ 20 ]: p145. Brodkin [ 20 ] argues that this approach is most valuable when policy implementation involves "change in organizational practice, discretion by frontline workers, and complex decision-making in a context of formal policy ambiguity and uncertainty" [ 20 ]: p145.
Lurie [ 18 ] has argued for the use of comparative case study methodologies for street-level analysis of welfare reform. Comparative case studies examine multiple cases within a shared framework, seeking out what is both common and particular, and examines the patterning of variables and relationships [ 21 , 22 ]. However, the use of ethnographic techniques in this area is rare. While ethnography has gained increasing prominence in implementation and welfare studies, it has primarily been used to examine the experiences of clients rather than organizations [ 23 ].
Study protocol
Phase one: Desktop & Policy Analysis
A desktop and policy analysis of the SIA will run throughout the duration of the study. Through a descriptive and analytical examination of the development of the SIA and the initiatives which sit under it, this phase will determine: how the SIA is implemented at various levels across and between government(s); how the government is engaging with the not-for-profit sector; and the fit of these activities, and the overall framework of social inclusion, with the sector and the goal of addressing disadvantage.
Phase two: Organisational case studies
Phase two uses comparative case study analysis to examine how the SIA impacts upon the not-for-profit sector [ 24 ]. While the types of organizations that exist within the third sector are extremely diverse, the experiences of the organizations in this study are likely to allow inference about the experiences of other organizations and the sector more broadly [ 21 ]. The study will examine organisational culture change and shifts in organisation-government relations under the SIA. In doing so, it will investigate how these changes impact upon the ability of organisations to promote social inclusion. Change may be found in organisational discourses, practices and activities, such as collaborative or innovative approaches to service provision.
Two not-for-profit organisations will be purposively sampled for comparative case study analysis. A comparative case study methodology has been chosen because the SIA has no clear set of outcomes with regard to its impact on the not-for-profit sector [ 24 ]. The project is informed by ethnographic techniques and combines semi-structured qualitative interviews with targeted participant observation and document analysis [ 25 - 28 ]. This approach enables the two case studies to run in tandem so that time-sensitive changes can be captured.
The targeted participant observation will include attendance, over 12 months, at events and meetings deemed appropriate [ 28 ]. In addition, two programs that take a social inclusion approach to service delivery will be selected in each organisation for on-going observation. Up to 10 interviews will be conducted with stakeholders, staff and volunteers (where appropriate). During this time detailed fieldnotes will be kept and interviews will be recorded and transcribed verbatim. A document analysis will also be undertaken at each organisation. This will include documents such as funding agreements, strategic plans, newsletters, and program reports. Analysis will be inductive and thematic.
Discussion group
This phase of the study is informed by reciprocal ethnography [ 25 - 27 ]. This methodology stems from feminist research and enables engagement and empowerment of participants through shared dialogue and the encouragement of shared learning and outcomes [ 25 - 27 ]. It is a socially inclusive methodology, in keeping with the study aims. In each not-for-profit organisation a self-selecting discussion group of staff, volunteers and board members will be established. This group will convene a minimum of four times during data collection and analysis to workshop findings and interpretations.
Case study sampling
Potential organisations were identified from the Australian Council of Social Services (ACOSS) membership list. ACOSS is the peak not-for-profit organisation for the Australian social services sector. Organisations were selected on the basis of the following criteria combined with a willingness to participate:
• Social services organisation
• Has a service delivery role
• Has an advocacy role
• Has an interest in social justice or social inclusion
• Has a social policy unit.
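The selection logic above amounts to a conjunction of criteria. A minimal sketch — the records and field names below are hypothetical, not drawn from the ACOSS membership list:

```python
# Hypothetical organisation records; field names are illustrative only.
organisations = [
    {"name": "Org A", "social_services": True, "service_delivery": True,
     "advocacy": True, "social_inclusion_interest": True,
     "policy_unit": True, "willing": True},
    {"name": "Org B", "social_services": True, "service_delivery": True,
     "advocacy": False, "social_inclusion_interest": True,
     "policy_unit": True, "willing": True},
]

# Every criterion, including willingness to participate, must be met.
CRITERIA = ("social_services", "service_delivery", "advocacy",
            "social_inclusion_interest", "policy_unit", "willing")

def eligible(org):
    """An organisation qualifies only if it meets all criteria."""
    return all(org[c] for c in CRITERIA)

shortlist = [o["name"] for o in organisations if eligible(o)]
```

Here only "Org A" would be shortlisted, since "Org B" lacks an advocacy role.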
Of the two organisations included in the study, one advocates for a social inclusion framework for the delivery of social services. This organisation has substantial networks with government. The other organisation has a more arm's-length relationship with government, and is more reticent about the SIA and the framework of social inclusion for the delivery of services.
Phase 3: Stakeholder & policy maker interviews
The last phase of the study will take what has been learnt through the first two phases and contrast it with the perspectives of experts in the field. Federal policy makers, researchers and other experts involved in not-for-profit sector-related research or policy will be interviewed. Areas for inquiry will be determined by the findings from phases one and two and will seek to contrast the experiences of the selected organisations with the perspectives of policy makers and experts.
Criterion-based, purposive sampling of up to 15 individuals, chosen on the basis of a current or past role in not-for-profit sector-related policy or research, will be conducted [ 29 ]. Snowball sampling will also be carried out: participants will be asked to nominate other stakeholders until saturation is reached. Interviews will be individual and semi-structured [ 29 , 30 ]. Analysis will be thematic; findings will be contrasted with themes identified in the first two phases of the study.

Discussion
The evidence on the social determinants of health has meant that public health practitioners are increasingly expected to operate outside the health sector and in partnership with other sectors. The importance of social policy to health therefore cannot be overstated. However, before the impact of broad-scale social or public health policies such as the SIA can be assessed at the population level, we need to understand how they are produced and experienced in daily life. To do this, studies must investigate the implementation process. This is consistent with methodological advances occurring in political science and social policy, which have emphasised the importance of investigating how, and if, policies are translated into operational realities.

Background
In this paper we present the research design and methods of a study that seeks to capture local level responses to an Australian national social policy initiative, aimed at reducing inequalities in the social determinants of health.
Methods/Design
The study takes a policy-to-practice approach and combines policy and stakeholder interviewing with a comparative case study analysis of two not-for-profit organisations involved in the delivery of federal government policy.
Discussion
Before the health impacts of broad-scale policies, such as the one described in this study, can be assessed at the population level, we need to understand the implementation process. This is consistent with current thinking in political science and social policy, which has emphasised the importance of investigating how, and if, policies are translated into operational realities.

List of abbreviations
SIA: Social Inclusion Agenda. ACOSS: Australian Council of Social Services.
Competing interests
The authors declare that they have no competing interests.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/759/prepub

Acknowledgements
This study is supported by the Australian National Health and Medical Research Council and the Sidney Myer Foundation. The study design was developed with input and guidance from Dr Therese Riley, Prof Paul Smyth and Dr Marion Frere at the University of Melbourne.

Citation: BMC Public Health. 2010 Dec 8; 10:759. License: CC BY.
PMC3016389 (PMID 21162715)

Background
Obesity has been reported to have a negative impact on physical health and psychological well-being [ 1 , 2 ]. There is an association between depression, anxiety and obesity, and several studies have shown that obese women are more vulnerable than obese men to the development of psychiatric and psychological disorders [ 3 - 9 ]. A number of studies have investigated symptoms of depression and anxiety during and after pregnancy [ 10 - 13 ]. Josefsson et al. found in a longitudinal study that the prevalence of depressive symptoms was 17% during late pregnancy and 13% postpartum [ 10 ]. Rubertsson et al. found a prevalence of 14% in early pregnancy and 11-14% during the first year postpartum [ 12 ]. Both anxiety and depression during pregnancy are strong predictors for postpartum depression [ 11 , 13 ].
Obesity during pregnancy is also associated with a high risk for medical complications [ 14 ]. Maternal obesity is, for example, associated with an increased risk for preeclampsia and gestational diabetes mellitus as well as for complications during delivery. In addition, there is an increased risk for antepartum stillbirth and macrosomia [ 14 ]. Psychological health and well being during and after pregnancy among obese women have not been thoroughly investigated. The association between body mass index (BMI) and self-reported postpartum depressive symptoms was investigated in a study in the United States [ 15 ]. In a stratified random sample of approximately 600 obese women, 30.8% reported moderate or greater depressive symptoms compared with 22.8% in a similar sample consisting of about 1800 normal weight women, two to six months postpartum. Krause and co-workers investigated the prevalence of postpartum depression among approximately 500 overweight and obese postpartum women, recruited from a randomized controlled intervention study designed to encourage postnatal weight loss through increased physical activity and decreased caloric intake [ 16 ]. In this study the prevalence of depression was 9.2% and there was no relationship between BMI and postpartum depression.
In a recent prospective intervention study primarily designed to limit obese women's weight gain to less than 7 kg during pregnancy, we showed that women who received a structured motivational and behavioral treatment combined with regular physical exercise had a significantly lower weight gain than a control group of obese pregnant women who received regular antenatal care, with no negative effect on delivery or neonatal outcome [ 17 ]. Since it is known that obesity can have a negative influence on psychological well-being, it is important to investigate whether the psychological well-being of obese pregnant women attending a weight gain restriction program with a focus on behavioral changes differs from that of a control group. To our knowledge, there are no intervention studies designed to accomplish weight gain restriction for obese pregnant women that have also examined the women's psychological health status.
We hypothesized that pregnant obese women attending an intervention program based on motivational and behavioral treatment, with the primary aim of reducing weight gain during pregnancy, would show fewer symptoms of depression and anxiety throughout pregnancy and postnatally than women not attending such a program. Hence, the aim of this study was to investigate psychological well-being, measured as symptoms of depression and/or anxiety, among obese pregnant women attending a weight gain restriction program and to make comparisons with a control group receiving traditional antenatal care.

Methods
The Swedish antenatal health care system reaches almost 100% of all pregnant women, and antenatal and delivery care are free of charge. At the antenatal care clinics (ANC), healthy pregnant women are recommended to attend the regular antenatal program, with seven to nine visits to a midwife, and, if needed, to arrange extra appointments with an obstetrician and/or the midwife.
Subjects
During the period November 2003 to December 2005, a total of 317 obese pregnant women in early pregnancy were consecutively registered at the ANC in the city of Linköping and the surrounding area. The inclusion criteria for the study were BMI ≥30 and being Swedish-speaking. We excluded all women with a pre-pregnancy diagnosis of diabetes, thyroid dysfunction or a psychiatric disease treated with neuroleptic drugs. After excluding women who did not meet the inclusion criteria, had a miscarriage or a legal abortion, or moved out of the catchment area in early pregnancy, 230 women were eligible and invited to participate. A total of 70 women refrained from participation and five women dropped out during the intervention. One hundred fifty-five women (67.4%) completed the intervention. This subsample consisted of 151 obese women with singleton pregnancies (Figure 1 ).
All obese pregnant women (n = 437) consecutively registered during the same period at the ANC in two nearby cities with their surrounding areas formed a control group. The inclusion and exclusion criteria were the same as for the intervention women. Thereafter, women who had a miscarriage or a legal abortion were excluded. Finally, 385 women were eligible and invited to participate. Of this total, 177 women refrained from participation and 15 women dropped out during pregnancy. One hundred ninety-three women (50.1%) completed participation. This subsample consisted of 188 obese women with singleton pregnancies (Figure 1 ).
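The participation figures reported for both groups follow the same arithmetic: eligible women minus refusals minus drop-outs. A small check of the percentages quoted above:

```python
def completion(eligible, refrained, dropped_out):
    """Number completing participation and the completion rate in percent."""
    completed = eligible - refrained - dropped_out
    return completed, round(100 * completed / eligible, 1)

intervention = completion(230, 70, 5)   # intervention group figures
control = completion(385, 177, 15)      # control group figures
```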
In a previous study, no differences were found between the intervention group and the control group regarding age, parity, marital status, smoking, BMI and occupation [ 17 ]. The women in the intervention group reported a higher education level than the women in the control group (p = .044). The women who refrained from participation in both groups were, on average, one year younger than the women who participated in the study (29 vs. 30 years, p = .018). Moreover, those who declined to participate were more often smokers and more often had previous children than those participating in the study. For a detailed description of the study participants, see Claesson et al. [ 17 ].
All data related to pregnancy, delivery and the puerperium were registered in the standardized and identical Swedish antenatal pregnancy, delivery, and neonatal records. The data were manually extracted from the records. This study was approved by the Human Research Ethics Committee, Faculty of Health Sciences, Linköping University.
Intervention
The obesity intervention program for pregnant women was based on extra visits to a specially trained midwife (author IMC). The women in the intervention group made an average of 22 visits during their pregnancy. The motivational interview followed the guidelines set forth by Miller and Rollnick [ 18 ]; its goal was to motivate the obese pregnant woman to change her behavior and to obtain information useful in meeting her needs. The weight gain goal for the study was less than 7 kg, and this target was discussed only once during the intervention, at the first visit to the midwife. Throughout the whole program the midwife assessed the pregnant woman's knowledge of obesity in general and as a risk factor for her pregnancy and delivery outcome as well as for the well-being of her child. If the woman lacked sufficient knowledge, she was offered information and given accurate facts. The woman was also informed about the potential consequences of different behaviors associated with eating and food intake; written information was supplied if needed.
All women were given the opportunity to attend an individual 30-min session every week. The session included weight control and counseling characterized by its collaborative structure, i.e. counseling based on creating a partnership that honors the woman's expertise and perspectives and enables the counselor to provide an atmosphere that is conducive rather than coercive to change. The woman's own judgment of her motivation and the possibility of changing a behavior, the advantages and disadvantages of changing a behavior, and the choice of strategies for adopting and maintaining a new behavior were all topics of the sessions. All women who attended the program were also invited to an aqua aerobics class (once or twice a week), especially designed for obese women. The obese women in the control group attended the routine antenatal care program.
Measures
The Beck Anxiety Inventory (BAI) was used to measure the severity of anxiety [ 19 ]. The BAI consists of a 21-item self-report inventory in which each item describes a common symptom of anxiety. The BAI discriminates anxiety from depression [ 19 ]. The respondent was asked to rate each symptom over the preceding week on a four-point scale (0-3). Scores of 0-7 reflect minimal anxiety, 8-15 mild anxiety, 16-25 moderate anxiety and 26-63 severe anxiety [ 20 ]. The upper interval limit may be adjusted depending on the purpose of use: to minimize the rate of false negative results the upper interval limit may be decreased, and to minimize the rate of false positive results it may be increased. Some studies have investigated and validated the BAI in non-clinical samples composed of women as well as men [ 21 - 23 ]. The results support the use of the BAI in a community sample. In this study we used a cut-off level of ≥10 to define symptoms of anxiety.
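The BAI bands and the study's cut-off can be written as a small scoring helper. A sketch only — the function names are ours, not part of the instrument:

```python
def bai_severity(score):
    """Map a BAI total (0-63) to the severity bands cited in the text."""
    if not 0 <= score <= 63:
        raise ValueError("BAI totals range from 0 to 63")
    if score <= 7:
        return "minimal"
    if score <= 15:
        return "mild"
    if score <= 25:
        return "moderate"
    return "severe"

BAI_CUTOFF = 10  # threshold used in this study to define anxiety symptoms

def has_anxiety_symptoms(score):
    """Dichotomise a BAI total at the study's cut-off."""
    return score >= BAI_CUTOFF
```

Note that the study's cut-off of ≥10 falls inside the "mild" band, so some women classified as mildly anxious count as symptomatic and some do not.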
The Edinburgh Postnatal Depression Scale (EPDS) is a 10-item self-report scale assessing symptoms of depression such as dysphoric mood, anxiety, feelings of guilt, suicidal ideas and "not coping". Each item is scored on a four-point scale (0-3) and rates the intensity of depressive symptoms during the previous 7 days. The scale is specifically designed to screen for postpartum depression but can also be used as a valid measure of dysphoria through the various stages of pregnancy and the puerperium [ 24 ]. The validity of the Swedish version has been tested [ 25 ]. The EPDS cannot by itself confirm a diagnosis of depressive illness, but using a cut-off level of >12, Cox et al. [ 24 ] showed a sensitivity of 86%, a specificity of 78% and a positive predictive value of 73% for major depressive illness. Another validation of the EPDS by Murray & Carothers [ 26 ], also using a cut-off level of >12, showed a sensitivity of 68%, a specificity of 96% and a positive predictive value of 67% for both major and minor depressive illness. To find all actual major depressions, Cox et al. [ 24 ] propose a cut-off level of ≥10 to reduce detection failure in the postnatal period. At this threshold the sensitivity for detection of major depression increased to almost 100% and the specificity to 82% [ 27 ]. In this study we used a cut-off level of ≥10 to define symptoms of depression.
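Note that a positive predictive value quoted for a screening cut-off depends on the prevalence of depression in the validation sample. A minimal sketch of Bayes' rule makes this dependence explicit (the 15% prevalence below is hypothetical, chosen only for illustration):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a screening test via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Cox et al. operating characteristics (sens 86%, spec 78%) applied to a
# hypothetical community prevalence of 15%: the PPV comes out well below
# the 73% reported in their (higher-prevalence) validation sample.
example = round(ppv(0.86, 0.78, 0.15), 2)
```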
The women in the intervention and control groups answered the BAI and the EPDS at around gestational week 15 and 35 and at 11 weeks postpartum.
Statistics
All analyses were performed using SPSS, version 16.0 (SPSS Inc., Chicago, IL). Statistical significance was defined as (two-sided) p ≤ 0.05. Group differences between the intervention and control women were estimated using the χ2 test on categorical variables. Student's t-test was used on continuous, normally distributed variables to measure changes in mean scores on the BAI and EPDS over time. Analyses were performed between as well as within the intervention and control groups. Furthermore, to make a more comprehensive assessment of group differences, both between and within the two groups, logistic regressions were performed with the three BAI and EPDS measurements as dependent variables. The grouping variable was adjusted for socio-demographic variables (age, parity, marital status, socioeconomic factors and occupational status), complications during pregnancy (hyperemesis, gestational diabetes mellitus, preeclampsia, premature contractions, lumbar and pelvic pain), complications during delivery (acute cesarean section, instrumental delivery, induced delivery, bleeding >1000 ml, perineal tears) and neonatal complications (small for gestational age, preterm <37 weeks, Apgar score at 5 min <7 and Apgar score at 10 min <7).

Results
Anxiety symptoms
In analyses of the anxiety symptoms using the BAI, no significant differences in the prevalence of symptoms were found between the groups (Table 1 ).
Bivariate analysis showed that there was a difference within the intervention group with respect to socioeconomic factors at the two measurements during pregnancy (p = .045 and p < .001). Women with a lower level of education had symptoms of anxiety more often than women with a higher level of education. The same results were found within the control group (p = .010 and p = .012). There was also a difference within the control group at the two measurements during pregnancy concerning occupational status (p = .003 and p = .009). Women who were unemployed showed symptoms of anxiety more often than women who were gainfully employed. The multivariate analyses (logistic regression) showed no difference between the intervention group and the control group after adjustment for socio-demographic variables and pregnancy and delivery complications (Table 2 ).
Measurements of fluctuations in symptoms of anxiety between the assessment points in gestational week 15 and 11 weeks postpartum showed no differences either within or between the intervention group and the control group. A total of 5% of women in the intervention group and 4% of women in the control group had symptoms of anxiety at all three assessment points. As shown in Table 3 , there was no difference in symptoms of anxiety in relation to weight gain between the intervention group and the control group at the different assessments. Nor was there any difference in the occurrence of anxiety symptoms within either group (data not shown).
Depressive symptoms
There were no differences in the occurrence of symptoms of depression between the groups (Table 4 ).
A total of 18.7% of women in the intervention group and 18.0% in the control group showed depressive symptoms in early pregnancy (Table 4 ). In late pregnancy, the percentage of women with depressive symptoms was found to be 22.0% in the intervention group and 17.5% in the control group. Postpartum the percentages were 11.2% in the intervention group and 10.5% in the control group.
Bivariate analyses showed that a lower level of education was related to symptoms of depression in the intervention group in gestational week 15 (p = .002) and in gestational week 35 (p = .003). Similar results were found in the control group in gestational week 35 (p = .001). There was also a relationship between occupational status and depressive symptoms: in the intervention group in gestational week 15 (p = .010), in the control group in gestational week 35 (p = .001) and at 11 weeks postpartum in both groups (p = .049 and p = .017, respectively). Women who were unemployed showed more symptoms of depression than women who were gainfully employed. Analyses within the groups showed a significant relation in the control group between complications during pregnancy (i.e. diabetes, preeclampsia, preterm contractions etc.) and symptoms of depression (p = .010). No such relationship could be found in the intervention group. The multivariate analyses (logistic regression) showed that there was no difference between the intervention group and the control group after adjustment for socio-demographic variables and pregnancy and delivery complications (Table 5 ).
Measurements of fluctuations in depressive symptoms at the assessment points in gestational week 15 and 11 weeks postpartum showed no differences either within or between the intervention group and the control group. A total of 6% of women in the intervention group and 4% of women in the control group had symptoms of depression at all three assessments. As shown in Table 6 , there was no difference in symptoms of depression in relation to weight gain in the intervention group and the control group at the different assessments. Nor was there any difference in the occurrence of depressive symptoms within either group (data not shown).
A total of six women in the intervention group (4.0%) and three women in the control group (1.6%) had symptoms of both anxiety and depression at all three assessment points.

Discussion
In this prospective intervention study with a primary aim of minimizing gestational weight gain of obese women, the prevalence of symptoms of anxiety and depression during pregnancy varied between 18% and 25% within the intervention and control groups. The postnatal prevalence of symptoms of anxiety and depression varied between 9% and 11% within these two groups. No differences were found between the groups. Around 4-5% of all women had symptoms of anxiety, 4-6% had symptoms of depression and only a few women presented symptoms of both anxiety and depression at all three assessments. There was no relationship between symptoms of anxiety or depression and weight gain during pregnancy as measured on three different occasions. In both groups, women with a lower level of education and without employment showed symptoms of anxiety and depression more often than women with a higher level of education and gainful employment. After adjustment for socio-demographic variables and pregnancy, delivery and neonatal complications there were no differences in the prevalence of symptoms of anxiety between the groups. However, there was a relation between complications during pregnancy and symptoms of depression in the control group.
We hypothesized that participation in the intervention program would increase psychological well-being due to weekly motivational support and would therefore result in a lower prevalence of anxiety or depressive symptoms than would have resulted without the program. This hypothesis was not confirmed. Nor did we find any relation between weight gain during pregnancy or postnatal weight and the prevalence of symptoms of anxiety or depression among obese women in the intervention group and the control group. One can consider whether socioeconomic factors such as unemployment and/or a low level of education constitute a stressor for the individual, and what effect this has. It is possible, as suggested by Britton [ 28 ], that mothers with a low level of education may have a tendency towards developing high levels of anxiety because they are less able to handle the demands and expectations placed on them during the period when they take on the maternal role. One may also speculate whether worrying about the future, as concerns both employment and household economy, may lead to symptoms of both anxiety and depression.
The prevalence of anxiety in a general population of pregnant women during and after pregnancy has been investigated in earlier studies [ 13 , 29 , 30 ]. In our study the prevalence of symptoms of anxiety among obese pregnant women was around 23% during pregnancy, which is in accordance with findings by other authors [ 13 , 30 ]. Breitkopf et al. assessed anxiety symptoms among pregnant, non-pregnant and postnatal women, and found that the anxiety scores were lower among postnatal women in comparison with pregnant and non-pregnant women even after controlling for depressive symptoms [ 29 ]. This is in line with our results where the symptoms of anxiety were lower postpartum than during pregnancy. Nothing in our study indicates that obese pregnant or postnatal women have more symptoms of anxiety than are exhibited in the general pregnant population.
Several studies have investigated the prevalence of depressive symptoms during pregnancy and postpartum [ 10 , 12 , 31 - 34 ]. The EPDS has been used with different cut-offs for evaluating depressive symptoms [ 10 , 12 , 31 , 33 , 34 ]. The prevalence in these studies varies from 8% to 17% during pregnancy and 9% to 13% postpartum. A study that used the same EPDS cut-off for depressive symptoms as this study found a prevalence of depressive symptoms during late pregnancy and postpartum similar to what we found [ 10 ].
Furthermore, two studies have investigated the impact of obesity on the risk for postpartum depression, with differing results [ 15 , 16 ]. In the study by LaCoursiere, women were asked to assess their depressive symptoms two to six months after delivery [ 15 ]. The five-level scale ranged from "not depressed at all" to "very depressed and had to get help". A response of "moderately" depressed or higher indicated self-reported postnatal depression and was given by 31% of the obese women postpartum [ 15 ]. In the study by Krause et al., in which the women completed the EPDS with a cut-off of ≥13 six weeks postpartum, a prevalence of 9% among overweight and obese women was shown [ 16 ]. Our results at the postnatal measurement were in accordance with the findings in the study by Krause.
The relationship between antenatal and postnatal symptoms of depression has been investigated by Josefsson et al., who showed that 6% of the women had symptoms both during pregnancy and postpartum, which is in line with our findings [ 10 ]. The occurrence of depressive symptoms during and after pregnancy in relation to demographic characteristics in a general pregnant population has also been investigated [ 12 , 16 , 32 , 33 , 35 ]. These studies have shown an association between economic difficulties, low household income, unemployment, lower educational attainment and depressive symptoms [ 12 , 16 , 32 , 33 ]. The connection between anxiety and depressive symptoms and maternal and neonatal outcome has been investigated [ 36 - 38 ]. A review by Alder and colleagues found elevated levels of anxiety and depression to be related to obstetric complications, preterm labor and alleviation of labor pain [ 36 ]. Vollebregt et al. investigated the association of preeclampsia and gestational hypertension with psychosocial stress among nulliparous women in the first half of pregnancy and found that anxiety, pregnancy-related anxiety or depression had no effect on the incidence of preeclampsia and gestational hypertension [ 38 ]. Berle and co-authors reported a relationship between anxiety disorder during pregnancy and low Apgar score at one and five minutes, but no relationship was observed with low birth weight or preterm delivery [ 37 ]. We did not find any differences between the two groups of women in our study regarding the relationship between symptoms of anxiety and/or depression and pregnancy, delivery and neonatal complications. In the control group, however, there was a relationship between pregnancy complications and depressive symptoms at the assessment in late pregnancy.
This study was not randomized, which can be seen as an important limitation. In all scientific research it is important to control external factors; the environment has been found to exert a powerful influence on people's emotions and behavior [ 39 ], and careful consideration must be given to ensuring that the intervention group and the control group receive treatment and care in the same type of setting. We therefore chose to use ANCs in two nearby cities to serve as controls. The antenatal programs in Sweden are standardized and almost identical concerning the management of pregnancy, which ensures similar care at different ANCs. We were also able to control for several background characteristics that otherwise could confound the results. There was also a difference in the completion rates between the intervention and control groups. A total of five women in the intervention group dropped out compared with 15 women in the control group. Another limitation is that the number of questionnaires answered at the three assessments differs to some extent between the intervention and control groups. Therefore some caution is advisable when generalizing these results. Furthermore, there are no data on the prevalence of symptoms of anxiety and depression among women who declined participation. Despite these limitations there are some important findings in this study. To our knowledge this is the first intervention study that has investigated symptoms of anxiety and/or depression in an obese pregnant population. Since few studies have investigated the relation between anxiety and depression among obese pregnant women, especially in relation to a weight gain intervention program, there is a need for further work on this topic.

Conclusions
In conclusion, neither weight gain nor a weight gain restriction program during pregnancy seems to influence the prevalence of symptoms of anxiety or depression. Obese women who participated in this study do not run a higher risk for anxiety and/or depressive symptoms during pregnancy or postpartum, compared with a general pregnant and postnatal population.

Background
Although studies have shown an association between anxiety and depression and obesity, psychological health among obese women during and after pregnancy has not been carefully studied. The aim of this study was to investigate psychological well-being using symptoms of depression and/or anxiety among obese pregnant women attending a weight gain restriction program and to then compare this group with a control group receiving traditional antenatal care.
Methods
151 obese pregnant women in an intervention group and 188 obese pregnant women in a control group answered the Beck Anxiety Inventory (BAI) and the Edinburgh Postnatal Depression Scale (EPDS). Group differences between the two groups were estimated using the χ2 test on categorical variables. Student's t-test was used on continuous, normally distributed variables to measure changes in mean BAI and EPDS scores over time. To make a more comprehensive assessment of group differences, between as well as within the two groups, logistic regressions were performed with the BAI and EPDS as dependent variables, measured at gestational weeks 15 and 35 and 11 weeks postnatal. The grouping variable was adjusted for socio-demographic variables and complications.
Results
The prevalence of symptoms of anxiety during pregnancy varied between 24% and 25% in the intervention group and 22% and 23% in the control group. The prevalence of symptoms of anxiety postnatally was 9% in the intervention group and 11% in the control group. Five percent of the women in the intervention group and 4% of the women in the control group showed symptoms of anxiety during the course of pregnancy and at the postpartum assessment. The prevalence of symptoms of depression during pregnancy varied between 19% and 22% in the intervention group but was constant at 18% in the control group. Postnatal prevalence was 11% in both groups. Six percent of the women in the intervention group and 4% in the control group showed symptoms of depression during the course of pregnancy and at the postpartum assessment. We found no differences between the two groups as concerns demographic characteristics, weight gain in kg, or the distribution of scores on anxiety and depressive symptoms nor did we find differences in the fluctuation of anxiety and depressive symptoms over time between the women in the intervention group and in the control group.
Conclusion
Obese pregnant women attending an intervention program seem to have the same risk of experiencing anxiety and/or depressive symptoms as do obese pregnant and postnatal women in general.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
I-M C contributed to the design of this study, collection and analysis of the data, and preparation of the manuscript.
AJ and GS contributed to research idea, design and preparation of the manuscript.
All authors have read and approved the final version of the manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/766/prepub

Acknowledgements
This study was supported by grants from: The Research Fund of County Council in the South East Sweden and ALF, County Council of Östergötland.

BMC Public Health. 2010 Dec 16; 10:766
Background
Men who have sex with men (MSM) who report using methamphetamine are 1.5 to 2.9 times more likely to acquire HIV than men who do not [ 1 - 6 ]. MSM describe sex on methamphetamine as "automatic" and "compulsive" [ 7 , 8 ], and methamphetamine use has been consistently associated with sexual behaviors and biological factors that may facilitate HIV acquisition and transmission [ 9 ].
Reductions in sexual risk accompany participation in substance abuse treatment and peer-support programs [ 10 , 11 ]. In particular, contingency management (CM) has emerged as a promising intervention to reduce methamphetamine use and HIV-related sexual risk among MSM. CM is the practice of providing incentives for meeting a specified behavioral goal (e.g., abstinence from substance use) and withholding incentives when the goal is not met. Scientists have used CM to successfully promote an array of behavioral goals in diverse populations [ 12 - 14 ]. In a randomized trial that enrolled MSM seeking outpatient drug treatment, CM and cognitive behavioral therapies resulted in similar reductions in methamphetamine use and sexual risk [ 15 ]. The results of that trial, as well as data indicating high levels of interest in CM [ 16 ], prompted the San Francisco Department of Public Health to implement the Positive Reinforcement Opportunities Project (PROP), a stand-alone CM intervention among non-treatment-seeking MSM [ 17 , 18 ].
Despite evidence supporting the potential effectiveness of CM, the intervention's effects on methamphetamine use and sexual risk have not been tested using a minimal intervention control. We conducted a randomized controlled trial of CM to reduce methamphetamine use and sexual risk among MSM outside of drug treatment. The goals of the study were to assess the feasibility of conducting a larger, more definitive trial of CM; to define the frequency of behavioral outcomes to power such a trial; and, to compute preliminary estimates of CM's effectiveness.

Methods
Study population
We enrolled MSM from Seattle, WA. Between June 2007 and August 2008, participants were recruited through referrals from an STI clinic; an HIV clinic; community-based organizations (CBOs); printed media; radio; community outreach; and, referral by peers participating in the study.
This study was initially designed to inform the design of a trial to test whether CM could prevent HIV acquisition among HIV-negative men who use methamphetamine. Therefore, from June 2007 to November 2007, we enrolled only HIV-negative men. Because of slow enrollment of HIV-negative men, in November 2007, we opened the study to HIV-positive men and revised the study objective to test whether a CM intervention could reduce HIV acquisition and transmission risk. Several studies illustrate that a considerable proportion of HIV-positive men use methamphetamine and that methamphetamine facilitates HIV transmission behaviors [ 9 ]. Additional inclusion criteria were: age ≥18 years; willingness to be randomized and provide locator information; and, no plans to move from the study catchment area within 6 months of enrollment. Behavioral eligibility criteria were ≥1 episode of anal sex and ≥2 episodes of methamphetamine use in the month prior to screening. We did not require a methamphetamine-positive urine sample for enrollment as we thought this requirement would limit the utility of CM as a public health intervention in practice. We excluded participants who reported a mutually monogamous relationship with a partner of the same HIV status lasting ≥2 years and men who expressly asked for detoxification, counseling, or drug rehabilitation services. HIV status was the only eligibility criterion modified during the course of the study.
Study intervention and design
The study intervention was a 12-week CM program, adapted from previous studies, in which vouchers of escalating value were offered for consecutive urine samples that did not contain methamphetamine or crack or cocaine (herein referred to as cocaine) metabolites [ 15 ]. Initially, the intervention consisted of thrice-weekly drop-in urine testing visits for a total of 36 visits. Vouchers started at $2.50 for the first stimulant-free sample and increased by $1.25 for every consecutive stimulant-free sample thereafter up to a maximum of $10.00. Participants submitting three stimulant-free urine samples in a row earned a $10.00 bonus. The maximum payout for this program was $453.75, similar to the payout in other programs [ 17 , 18 ]. When participants submitted a stimulant-containing sample, or missed a visit, no vouchers were issued and the value of the voucher for the subsequent stimulant-free sample was reset to $2.50. If a participant submitted a week of stimulant-free samples after submitting a sample containing stimulant metabolites, he returned to the voucher value prior to the stimulant-containing sample ("rapid reset") [ 19 ]. Vouchers were redeemable immediately upon accrual for pre-paid gift cards and goods and services; we never offered cash. All CM participants signed a contract delineating the expectations of the CM program [ 20 ]. Study personnel administering the CM intervention followed a simple, scripted protocol for the reporting of results of urine testing and administering vouchers. Such a protocol was used to avoid the provision of counseling around the results of the urine testing. The protocol, which required no formal training, was developed by one of us (SS) who has extensive expertise in delivering CM interventions.
All seven participants enrolled in the study while the above procedures were in place reported difficulty adhering to the intervention schedule, and only two attended ≥12 of their 36 visits. In response, in September 2007, we reduced the number of weekly urine testing visits to two (24 visits over the 12-week intervention) and increased the value of vouchers for the first stimulant-free urine sample to $7.50; other studies have employed a similar schedule [ 21 ]. As before, these vouchers increased by $1.25 for each consecutive stimulant-free sample to a maximum of $10.00. Additionally, we gave participants a $20.00 bonus for two consecutive stimulant-free samples. The maximum payout for this program was $476.25. We continued to withhold vouchers and reset voucher values to baseline for urines containing stimulants and for missed visits; however, to encourage participants to attend visits, we gave men submitting stimulant-positive samples a voucher worth $2.50.
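As an arithmetic check on the two maximum payouts quoted above ($453.75 and $476.25), the escalation schedules can be simulated for a participant with perfect attendance whose samples are all stimulant-free, so resets never occur. The function below is an illustrative sketch, not the study's software.

```python
def max_payout(n_visits, start, step, cap, bonus_every, bonus):
    """Total value of vouchers plus bonuses when every visit is attended
    and every urine sample is stimulant-free (no resets triggered)."""
    total, value = 0.0, start
    for visit in range(1, n_visits + 1):
        total += value                      # voucher for this sample
        value = min(value + step, cap)      # escalate toward the cap
        if visit % bonus_every == 0:
            total += bonus                  # bonus for the consecutive run
    return total

# Initial schedule: 36 visits, $2.50 start, $10 bonus per 3 consecutive samples
print(max_payout(36, 2.50, 1.25, 10.00, 3, 10.00))   # 453.75
# Revised schedule: 24 visits, $7.50 start, $20 bonus per 2 consecutive samples
print(max_payout(24, 7.50, 1.25, 10.00, 2, 20.00))   # 476.25
```

Both schedules reproduce the stated maxima; the revised schedule pays more despite having fewer visits because of the higher starting voucher and larger bonuses.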
Under the initial CM intervention schedule, drop-in urine-testing visits were available from 10:00 am to 6:00 pm on Tuesdays, Thursdays, and Saturdays; we offered extended hours for working participants. After the enrollment visit, we sent postcards or e-mails to all participants encouraging participation in the intervention. We reminded participants who did not attend urine-testing visits for the first week by phone or e-mail. We sent postcards, phoned, or e-mailed all CM participants again at the midpoint of the intervention period. Under the revised CM intervention schedule, drop-in urine testing visits were available from 10:00 am to 6:00 pm on Tuesdays and Saturdays with flexible hours for working participants. Postcard, phone, and e-mail reminder strategies remained the same.
We tested urine samples with the QuickScreen Pro Multi-Drug Screening Test (Phamatech, Inc., San Diego, CA), a point-of-care test used to qualitatively detect stimulant metabolites. For this assay, the estimated mean detection time in urine ranges from 43.6 to 66.9 hours for methamphetamine [ 22 ] and is 88.4 hours for benzoylecgonine, a cocaine metabolite [ 23 ]. We repeated 10% of all urine tests; none were discordant. Study staff monitored the collection of all urine samples and tested the samples immediately after their provision.
Participants randomized to both control and CM arms received a printed list of local counseling, treatment, and outreach services at baseline and at each study visit. Study staff offered all participants assistance accessing services. Control participants did not submit twice-weekly urine samples and did not receive vouchers during the first 12 weeks of the study.
The study randomized participants using block sizes of two, four, and eight varied randomly with a pseudo-random number generator, a deterministic process used to generate an effectively unpredictable sequence of numbers [ 24 ]. The randomization list was used to assemble sequentially numbered, sealed, opaque envelopes containing intervention arm assignments. The research coordinator and principal investigator were blinded to the randomization code. From June 2007 to April 2008, participants were randomized 1:1 to the intervention and control arms under a protocol funded by the National Institute on Drug Abuse. In April 2008, we received additional funding from Public Health--Seattle & King County (PHSKC) to deliver the CM intervention, but not to enroll additional control participants. Therefore, between April 2008 and August 2008, we randomly assigned participants 3:1 to the intervention and control arms. We conducted analyses with and without the men enrolled under the 3:1 randomization scheme. Results were similar and we present the results of analyses that included all randomized men in this manuscript. While uneven randomization schemes may affect a trial's statistical efficiency, they have no impact on a trial's validity [ 25 ]. Follow-up ended in February 2009.
The same study personnel administered the CM intervention and conducted the study visits. These personnel also performed data entry of the results of the urine testing and HIV/STI testing. The behavioral endpoints were automatically transferred from the ACASI to a database without the potential for modification. It was not possible to blind those administering the CM intervention to a participant's study arm.
The University of Washington institutional review board approved the study protocol. All participants provided written informed consent.
Study procedures
All study procedures took place at a large, public-transport accessible, community-based AIDS service organization. We screened participants for eligibility by phone and in person. All participants attended scheduled study visits every six weeks for six months. At each study visit, participants completed an audio computer-assisted self-interview (ACASI) that used a 6-week recall period and included questions about sexual behavior and substance use at the respondent- and partnership-level. Participants submitted urine samples for methamphetamine and cocaine metabolite testing at each of these visits.
At enrollment, 12 weeks, and 24 weeks, participants were tested for HIV/STI after completing the ACASI [ 26 ]. Participant-centered risk reduction counseling accompanied all testing [ 27 ], and condoms and lubricant were offered to all participants. Given the connections between substance use and sexual risk, HIV risk reduction counseling often included discussions of substance use.
Study endpoints
The primary outcome was report of unprotected anal intercourse with a partner of unknown or discordant HIV status (non-concordant UAI) in the prior six weeks. The study included four secondary endpoints: the number of non-concordant UAI partners; results of methamphetamine urine testing; self-reported weekly or more frequent use of methamphetamine; and, self-reported use of >8 quarters (two grams) of methamphetamine. We chose non-concordant UAI as the primary outcome because this study was designed as a preliminary test of whether CM can be ultimately employed as an HIV prevention tool. We have previously shown this metric to be associated with HIV acquisition [ 28 ]. The secondary substance use outcomes represent intermediate variables in the causal chain in which methamphetamine use may facilitate the high risk sex that in turn leads to HIV/STI transmission.
Reliability of behavioral study endpoints
We conducted a test-retest reliability study of the behavioral endpoints assessed in this trial [ 29 ]. From December 2006 to March 2007, we enrolled 102 MSM recruited from an STI clinic and an HIV clinic to complete the ACASI employed in this study. Of the 102 men who completed an initial ACASI, 98 (96%) returned to complete a second ACASI in a median of 4 days (range: 2-5). Participants took no more than 38 minutes to complete the initial ACASI. We calculated reliabilities using kappa statistics for binary outcomes and intraclass correlation coefficients by repeated measures analysis of variance (ANOVA) for continuous outcomes. The reliabilities of self-reported non-concordant UAI, weekly or more frequent use of methamphetamine, number of non-concordant UAI partners, and number of quarters of methamphetamine were 0.72 (95%CI: 0.47, 0.87), 0.88 (95%CI: 0.62, 0.99), 0.85 (95%CI: 0.78, 0.91), and 0.96 (95%CI: 0.94, 0.99), respectively.
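The kappa statistic used for the binary endpoints corrects the observed agreement between the two interviews for the agreement expected by chance. A minimal sketch, using a hypothetical 2×2 test-retest table rather than the study's data:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table for two binary ratings:
    a = yes/yes, b = yes/no, c = no/yes, d = no/no."""
    n = a + b + c + d
    p_obs = (a + d) / n                                      # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical retest of a binary endpoint in 98 participants
kappa = cohen_kappa(10, 3, 3, 82)   # about 0.73
```

Note how high raw agreement (94 of 98 here) can still yield a kappa well below 1 when the endpoint is rare, since chance agreement on "no" is already large.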
Statistical analyses
The primary purpose of this trial was to gather data for the design of a phase IIb or III clinical trial. The study was originally designed to follow 60 individuals. Assuming that 14% of participants would report non-concordant UAI at baseline, this sample size was chosen to provide estimates of the proportion of men who report non-concordant UAI in the prior six weeks with an expected width of the 95% confidence interval (CI) of 0.085. We expected to lose 25% of our study population to follow-up and aimed to enroll 80 participants. Additional resources, however, allowed us to enroll 127 participants.
We used chi-squared tests and t-tests to compare retention at the 24-week study visit and CM intervention adherence, respectively, between groups of participants defined by selected characteristics. We used generalized estimating equations (GEE) to estimate the proportion of participants reporting non-concordant UAI in the prior six weeks at each study visit. Analyses estimating the effectiveness of CM were intention-to-treat and based on binomial regression models with log links, used to calculate relative risks comparing the proportion of men reporting non-concordant UAI between the study groups at study visits during the 12-week intervention and 12-week follow-up periods [ 30 ]. Our analyses of secondary binary endpoints employed the same analytic methods. Finally, we used a GEE model with a log link and negative binomial errors to evaluate the intervention's effect, expressed as a rate ratio, on the number of non-concordant UAI partners. We defined statistical significance at the P < 0.05 level.
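The relative risks produced by the log-binomial models compare the proportion of men reporting the outcome between arms; in its unadjusted form the quantity reduces to simple arithmetic on a 2×2 table. A sketch with hypothetical counts (not trial results), including the usual normal-approximation 95% CI on the log scale:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Unadjusted relative risk of arm A versus arm B, with a 95% CI
    from the normal approximation on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical visit: 12/70 CM vs 15/50 control report non-concordant UAI
rr, ci = relative_risk(12, 70, 15, 50)   # rr about 0.57
```

The study's reported estimates additionally adjust for baseline covariates via regression, which this unadjusted sketch does not attempt.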
We pre-selected baseline covariates that we thought would most strongly predict our outcomes of interest for adjusted analyses, including predictors of retention at the 24-week study visit. Of these covariates, we adjusted for variables that resulted in at least a 10% change in the adjusted RR (aRR) compared to the unadjusted RR [ 31 ]. We chose such strict criteria because of our relatively small sample size and because several potential confounders differed between our intervention and control groups. All analyses included a covariate indicating the baseline value of the outcome of interest. Estimates of the proportion of men reporting non-concordant UAI at each visit and analyses comparing any non-concordant UAI and number of non-concordant UAI partners between CM and control were also adjusted for HIV status and use of other substances (inhaled nitrites, gamma-hydroxybutyrate, ecstasy, or erectile dysfunction medications) in the prior 6 weeks at baseline. Analysis of detection of methamphetamine use by urinalysis was also adjusted for stage of change for methamphetamine use at baseline. We note that duration of methamphetamine use and injection use of methamphetamine were both evaluated as potential confounders; their inclusion in models that already included the above variables produced similar results to analyses without these two predictors.
We assessed participants' stage of change for methamphetamine use by a single question [ 32 , 33 ]: "Have you been trying to cut back or stop using methamphetamine?" This question was added after the first 19 participants were enrolled, as it was not a variable we had initially planned to collect. The resulting data are, however, missing completely at random (MCAR), which is unlikely to introduce bias into complete case analyses like the one presented in this manuscript [ 34 ].

Results
Enrollment, retention, and participant characteristics
Over our 15-month enrollment period, we screened 222 men for eligibility (14.8/month) and enrolled and randomized 127 men (8.5/month). Of these randomized participants, 107 (84%) attended the 24-week study visit; retention at 6-week assessments was similar for both groups (Figure 1 ). HIV-negative men were less likely to attend the 24-week study visit than HIV-positive men (74% v. 93%, P < 0.01) and men who reported methamphetamine use for ≥10 years were less likely to attend the 24-week study visit than men who reported methamphetamine use for <10 years (81% v. 94%, P = 0.05).
The largest difference in follow-up between the CM and control groups was at the first visit after baseline (69% v. 77%). Of the 22 CM participants lost to follow-up at this visit, one was incarcerated, one was in a drug rehabilitation program, three withdrew from the study, and 17 could not be located for study visit attendance. Among the 13 control group participants who did not complete the first follow-up visit, one had died, two withdrew from the study, and 10 could not be located for study visit attendance.
Table 1 presents characteristics of the study population. The only statistically significant difference between the study populations at baseline was that between the distributions of race/ethnicity ( P = 0.03).
CM intervention adherence
Table 2 provides metrics of intervention adherence among CM participants. Participants who earned <$15,000 per year attended more visits than participants who earned ≥$15,000 per year (41% v. 20%, P = 0.04). Participants who reported using methamphetamine weekly or daily at baseline attended fewer visits than participants who reported less frequent use (31% v. 51%, P = 0.02) and participants whose urine contained methamphetamine at baseline attended fewer urine testing visits than participants whose urine did not contain methamphetamine (26% v. 43%, P = 0.02). Since the mean detection time of the methamphetamine assay is shorter than the interval between urine testing visits, the proportion of metabolite-free urine samples provided by CM participants during the intervention visits may be over-estimated.
Sexual risk
At baseline, 6-weeks, 12-weeks, 18-weeks, and 24-weeks, the adjusted probabilities of reporting non-concordant UAI in the prior six weeks were 31.0% (95%CI: 27.6%, 34.3%), 19.7% (95%CI: 16.3%, 23.1%), 12.1% (95%CI: 9.9%, 14.4%), 9.1% (95%CI: 7.2%, 10.9%) and 10.2% (95%CI: 8.3%, 12.1%), respectively. Figure 2 presents the adjusted proportion of men reporting non-concordant UAI at each study visit by study arm. Comparing participants at study visits six weeks apart, participants at the later study visit were less likely to report non-concordant UAI in both the CM (aRR = 0.80; 95%CI: 0.64, 0.99) and control (aRR = 0.84; 95%CI: 0.71, 1.00) arms than participants at the earlier study visit.
CM and control participants were similarly likely to report non-concordant UAI during the 12-week intervention period. CM participants were less likely than control participants to report non-concordant UAI during the subsequent 12-week follow-up period, and also reported fewer non-concordant UAI partners than control participants during both the intervention and follow-up periods. None of these differences were statistically significant (Table 3 ).
One man in the CM arm and one man in the control arm acquired pharyngeal gonococcal infection (3.4 v. 4.5 per 100 person-years; P = 0.43). There were no incident syphilis, rectal gonococcal, or rectal chlamydial infections. Among the HIV-negative participants, one man in the CM arm and no men in the control arm acquired HIV (8.9 v. 0 per 100 person-years; P = 0.27).
Stimulant use
During the 24 weeks of follow-up at the 6-week study visits attended by both CM and control participants, 28 (49%) control participants and 44 (63%) CM participants submitted ≥1 sample containing methamphetamine metabolites; 26 (46%) control participants and 23 (33%) CM participants submitted ≥1 sample containing cocaine metabolites; and, 43 (75%) control participants and 55 (79%) CM participants submitted ≥1 sample containing methamphetamine or cocaine metabolites.
CM and control participants were comparably likely to submit urine samples positive for methamphetamine at study visits during the intervention period (Table 4 ); however, during the follow-up period, CM participants were somewhat more likely to submit a urine sample containing methamphetamine ( P = 0.11).
Comparing participants at study visits six weeks apart, control participants at the later study visit were less likely to report either weekly or daily methamphetamine use or using >8 quarters of methamphetamine in the prior six weeks than participants at the earlier study visit (aRR = 0.77; 95%CI: 0.68, 0.88 and aRR = 0.69; 95%CI: 0.56, 0.86, respectively); although the frequency and quantity of methamphetamine use declined over time among men in the CM arm, these changes were smaller with respect to frequency (aRR = 0.89; 95%CI: 0.83, 0.96) and not statistically significant with respect to quantity (aRR = 0.93; 95%CI: 0.85, 1.02). CM participants were more likely than control participants to report weekly or daily methamphetamine use and to report using >8 quarters of methamphetamine at each study visit; these differences were statistically significant during the 12-week follow-up period (Table 4 ).
Accounting for baseline stage of change, control group participants were more likely than CM participants to report being in the action or maintenance stage of change for methamphetamine use during the intervention (74% v. 63%, P = 0.57) and follow-up periods (81% v. 66%, P = 0.18); however, these differences were not statistically significant. In the 6 weeks prior to baseline, 19% of control participants and 14% of CM participants participated in a substance abuse treatment program or peer support group ( P = 0.45). After baseline, 26% of control participants and 21% of CM participants reported participating in such interventions ( P = 0.38).

Discussion
In this randomized controlled trial of CM, we enrolled 127 non-treatment-seeking MSM over 15 months and successfully collected follow-up data on over 80% of participants. These findings suggest that a randomized controlled trial that specifically seeks to enroll this population is acceptable and feasible. However, less than half of our participants were HIV-negative and restricting enrollment to HIV-negative MSM who use methamphetamine, the population of interest for a trial using HIV as an outcome, would be challenging. Unlike HIV-negative men, HIV-positive men are often enrolled in ongoing medical care and case management. HIV-positive men may also be more likely to recognize their methamphetamine use as problematic with some attributing their HIV infection to their methamphetamine use [ 35 ]. In addition, the success of peer recruitment (over 50% of HIV-negative participants were recruited through peers) over place-based recruitment strategies that have been successful in other HIV prevention trials [ 36 ] indicates that HIV-negative MSM who use methamphetamine may not share the same social and geographic spaces as HIV-negative men who do not use methamphetamine. HIV-negative men may have also been uncomfortable participating in a trial conducted at an organization well known for its services for HIV-positive individuals.
Unfortunately, our findings related to CM are discouraging. First, a relatively small proportion of men consistently provided urine specimens for the intervention, meaning that actual exposure to the intervention among men assigned to CM was quite limited. Several factors may contribute to CM participants receiving such a low "dose" of the intervention, including the location, the magnitude of the incentives, time limitations, and monitored collection of urine samples. Anecdotally, many men reported not attending urine-testing visits because they knew their urine would test positive. This anecdotal evidence is supported by our data: among the CM visits that participants did attend, a median of 85% of each participant's urine samples were free of stimulant metabolites. Second, and most importantly, our findings suggest that CM is very unlikely to be effective as a stand-alone intervention among MSM. While the 95% CI around the intervention-period aRR was very wide and does not rule out the possibility that CM could reduce methamphetamine use during the period of actual intervention, our follow-up period aRR and 95% CI exclude the possibility that CM would have a large, sustained benefit on methamphetamine use as a stand-alone intervention in a population like the one we studied. In addition, the control group reported larger declines in self-reported use over study follow-up than the CM arm. Although measures of sexual risk among our CM participants were similar to or lower than those among control participants, it is unclear how CM might result in lower or similar levels of sexual risk if it increases methamphetamine use.
We are aware of six studies evaluating CM among individuals who use methamphetamine in outpatient substance abuse treatment [ 15 , 21 , 37 - 40 ]. Five of those trials found that, compared to other psychosocial interventions, CM leads to the submission of more stimulant-free urine samples and an increased duration of abstinence from stimulants during the CM intervention period [ 41 ]. None of the three studies that followed participants beyond the intervention period reported differential effects of CM compared to other treatments, but all three showed that participants experienced sustained reductions in methamphetamine use for up to 8 months after ceasing to receive CM. Among MSM seeking outpatient substance abuse treatment, CM has been shown to be comparable to other psychosocial interventions in decreasing methamphetamine use and receptive UAI [ 15 ]. However, none of the previously published randomized trials included a no-treatment/minimal intervention control arm, making it difficult to determine whether the absence of differences between CM and other interventions reflects comparable levels of efficacy or an absence of effect with all interventions.
Our study differs in four important ways from previous studies of CM. First, while we did not exclude participants who were receiving other substance use interventions, participants were not enrolled because they were seeking drug treatment. Second, we compared CM to a minimal intervention control arm. Third, CM was employed as a stand-alone intervention. Finally, CM participants were expected to submit urine samples only twice per week rather than three times, as has been done in some, but not all, studies of CM [ 21 ]. We cannot say whether CM's failure to reduce methamphetamine use in our study reflects a lack of effect among persons who are not expressly seeking drug treatment, CM's ineffectiveness as a stand-alone intervention, the ineffectiveness of our CM schedule, or a more general lack of efficacy of CM that would have been consistently observed in other trials had those studies included no-treatment/minimal intervention control groups.
We found that self-reported methamphetamine use declined among control participants but remained relatively stable among CM participants. We cannot readily explain this effect. We do not believe that CM participants used vouchers to obtain methamphetamine, since the difference in drug use by study arm was only evident after CM recipients ceased to receive vouchers. Likewise, we do not believe our findings simply reflect more accurate reporting of methamphetamine use by CM participants since we found that CM recipients were also more likely to test positive for methamphetamine. Participants in the two study arms reported similar use of outside treatment and support services, making it unlikely that differential use of other treatments affected our results. It is possible that participation in the CM arm may have put participants in contact with other participants and CBO clients in ways that facilitated access to and use of methamphetamine. Similar phenomena have been reported in other studies [ 42 , 43 ]. Also, discontinuation of twice-weekly CM visits, which may have provided a source of support for CM participants, may have led to heavier methamphetamine use among CM participants compared to controls.
Similar to a study of CM and other psychosocial interventions in the substance abuse treatment setting [ 15 ], we found that sexual risk declined in both of our study arms. However, in the case of the current study, our comparison arm was a minimal intervention control group. People who volunteer for studies may be more motivated to change their behavior than those who do not, participants may enroll during a period of high-risk activity and decrease their risk independent of intervention effects, and simply measuring a behavior may change a participant's behavior or reports of that behavior [ 44 ]. Each of these explanations may contribute to why sexual risk in our study, as in past studies, declined in both control and CM arms. Had we not used a minimal intervention control group and only measured participants in the CM intervention, we might have erroneously concluded that CM reduced sexual risk. This finding emphasizes that no-treatment/minimal intervention control groups are essential to rigorous intervention evaluation.
We note that the characteristics of our study population and the outcomes among CM participants were generally similar to those in a San Francisco public health program designed to reduce methamphetamine use among MSM (PROP) from December 2003 to December 2005. Our screening (14.8/month in the current study and 10.3/month in PROP) and enrollment rates (8.5/month and 7.4/month), attendance at CM visits (37% and 41%), and incentives earned (24% and 31% of the total possible) were similar [ 17 ]. Compared to PROP participants [ 18 ], participants in the present study were more likely to have used methamphetamine for >10 years (50% v. 38%), but were less likely to report weekly or more frequent methamphetamine use (64% v. 86%) and to be HIV-positive (55% v. 78%). More than 80% of men in both populations reported methamphetamine use with sex, and 54% of participants reported injecting methamphetamine at enrollment. These similarities suggest that our trial closely replicated what might occur in a non-research setting.
Our study has several important limitations. First, we did not power the current study to detect differences between study groups in methamphetamine use or sexual risk. Instead, this study was powered to provide precise estimates of the proportion of methamphetamine-using MSM who report non-concordant UAI. Despite this limitation, the precision of our post-intervention period aRR rules out the potential of CM to effect a large, sustained reduction in methamphetamine use as a stand-alone intervention, at least in the population we followed. Second, randomization did not provide us with study arms balanced with respect to several pertinent variables. In particular, persons assigned to the CM arm were more likely to use methamphetamine at baseline, and higher levels of drug use have been associated with inferior responses to CM in prior studies [ 45 ]. It is possible that our results are subject to confounding bias stemming from imbalances in variables that we did not measure. Third, we altered the CM intervention after beginning the study. These modifications may not have provided sufficient incentive for abstinence from methamphetamine use [ 20 ]. Finally, our measures of self-reported sexual behavior and substance use are subject to social desirability bias.

Conclusions
In summary, our small, randomized trial found that a 12-week CM intervention was associated with a potential increase in methamphetamine use but also with decreases in sexual risk that were not statistically significant. While our experience suggests that a larger, more definitive controlled trial of CM to decrease methamphetamine use and sexual risk may be feasible, our findings suggest that CM would be unlikely to effect large, sustained reductions in methamphetamine use among MSM.

Background
Methamphetamine use is associated with HIV acquisition and transmission among men who have sex with men (MSM). Contingency management (CM), providing positive reinforcement for drug abstinence and withholding reinforcement when abstinence is not demonstrated, may facilitate reduced methamphetamine use and sexual risk. We compared CM as a stand-alone intervention to a minimal intervention control to assess the feasibility of conducting a larger, more definitive trial of CM; to define the frequency of behavioral outcomes to power such a trial; and, to compute preliminary estimates of CM's effectiveness.
Methods
We randomly assigned 127 MSM from Seattle, WA who use methamphetamine to receive a 12-week CM intervention (n = 70) or referral to community resources (n = 57).
Results
Retention at 24 weeks was 84%. Comparing consecutive study visits, non-concordant UAI declined significantly in both study arms. During the intervention, CM and control participants were comparably likely to provide urine samples containing methamphetamine (adjusted relative risk [aRR] = 1.09; 95%CI: 0.71, 1.56) and to report non-concordant UAI (aRR = 0.80; 95%CI: 0.47, 1.35). However, during post-intervention follow-up, CM participants were somewhat more likely to provide urine samples containing methamphetamine than control participants (aRR = 1.21; 95%CI: 0.95, 1.54, P = 0.11). Compared to control participants, CM participants were significantly more likely to report weekly or more frequent methamphetamine use and use of more than eight quarters of methamphetamine during the intervention and post-intervention periods.
Conclusions
While it is possible to enroll and retain MSM who use methamphetamine in a trial of CM conducted outside drug treatment, our data suggest that CM is not likely to have a large, sustained effect on methamphetamine use.
Trial Registration
ClinicalTrials.gov Identifier NCT01174654

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
TWM participated in study design and coordination, performed the statistical analyses, and drafted the manuscript. DRJ participated in study coordination. JPH participated in the design of the study and provided statistical expertise. GNC and SS participated in the conception and design of the study. MRG conceived of the study and its design and helped draft the manuscript. All authors contributed to the interpretation of the study data, critically revised the manuscript for important intellectual content, and read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/774/prepub

Acknowledgements
The authors wish to thank the men who participated in this study for their invaluable contribution to HIV prevention research. The authors are indebted to David Richart, Erick Seelbach, and the rest of the staff at Lifelong AIDS Alliance for opening their organization to the study participants and staff. Don Calsyn, PhD provided helpful feedback on the manuscript. This research was supported by the National Institute on Drug Abuse (R21 DA019420-01) and Public Health--Seattle & King County. TWM was supported by the National Institutes of Health (T32 AI07140). While Public Health--Seattle & King County influenced the 3:1 randomization scheme as noted in the Methods section, the funding bodies had no other role in study design; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.

BMC Public Health. 2010 Dec 20; 10:774 (CC BY)
PMC3016391 (PMID 21176146)

Background
In 2008, nearly 2.4 million people were incarcerated in United States (US) jails or prisons [ 1 , 2 ]. Furthermore, about 25% of persons living with the human immunodeficiency virus (HIV) and about 30% of those living with hepatitis C virus (HCV) infection spent time in correctional facilities [ 3 ]. As a result, many have expressed concerns that transmission of blood-borne infections among inmates may be a major source for further spread to the general population [ 4 - 7 ].
Three lines of evidence support this view: (1) the prevalence of HIV infection, viral hepatitis, and sexually-transmitted infections (STI) is typically higher in incarcerated than in non-incarcerated populations; (2) illicit drug injection, unprotected sexual activity, and other risky behaviors are common in prisons and jails; and (3) correlative studies have repeatedly found independent associations between antecedents of incarceration and increased risk for infections such as HIV.
In more detail, despite declines since the 1990s, it has been estimated that, in 2008, 1.5% of the total US custody population in federal and state prisons was infected with HIV [ 8 ]; that 12.0% to 35.0% had chronic HCV infection [ 3 ]; and that 1.0% to 3.7% had serological markers of chronic HBV infection [ 3 ]. Based on these estimates, the prevalence of HIV was about four times higher among prison inmates than in the general population [ 9 ]. Likewise, the prevalence of chronic HBV infection was 2 to 6 times higher, and that of chronic HCV infection was up to 10 times higher, among prison inmates than in the community.
In contrast, more uncertainty remains about the extent of high-risk behavior taking place within US correctional facilities. Due to important differences between institutions in enabling factors such as overcrowding and understaffing, estimates of interest vary widely. Recent studies suggest that 3.0% to 28.0% of adult inmates use intravenous drugs while incarcerated; 4.0% to 65.0% engage in unprotected homosexual activities [ 3 , 10 , 11 ]; and 0.0% to 15.7% report sexual victimization during incarceration [ 12 ]. Prison entrants, incarcerated inmates, and intravenous drug users (IVDU) also tend to share a number of incarceration-related factors that predict HIV infection, including overall length of time spent incarcerated [ 13 ], repeated incarceration [ 14 ], tattooing in prison [ 15 - 17 ], and history of syringe sharing in prison [ 18 - 20 ]. Comparable observations have been made for other blood-borne infections and STI [ 14 , 21 - 26 ].
As just discussed, high HIV prevalence and high risk behavior within correctional systems are well documented. There is much less evidence, however, to support the notion that the correctional setting increases the incidence of HIV, and thus plays a central role in sustaining or increasing community rates when inmates are released. Many investigators, in fact, have suggested that inmates appear more likely to acquire infection outside than inside correctional facilities [ 3 , 10 , 27 ]. Since determining the role of incarceration in the epidemiology of HIV transmission is a crucial step toward formulating cost-effective public health policies and interventions for US HIV/AIDS control, we conducted a systematic and comparative literature review of HIV, HBV, and HCV incidence among residents of correctional facilities, released detainees, community-living IVDU and community-living men who have sex with men (MSM) in the US. Our aims were twofold: to summarize the published literature on HIV incidence rates and other key blood-borne infections, such as HBV and HCV, in US correctional facilities; and to compare these with infection rates among non-incarcerated individuals who bear high burdens of blood-borne infections and STIs. We hypothesized that inmates experience a lower incidence of HIV than community living risk groups that practice the behaviors which place incarcerated populations at increased risk. We further postulated that comparisons between viruses that share the same routes of transmission in prisons, jails, and high risk groups in the community will provide the comparative basis to more clearly elucidate the role the correctional setting may play in increased risk of infection and the likely avenues for further spread to the community.

Methods
Search Strategy and Selection Criteria
The authors searched Medline/PubMed, PsycINFO/EbscoHost and Embase/Scopus (January 1990 to September 2009) for English language studies conducted in the US. Medline searches used permutations of medical subject headings (MeSH) and subheadings for each risk group and infection outcome of interest. In Embase, searches were developed using Emtrees and author defined keywords for relevant articles identified through the Medline searches. The same keyword strategy was used with PsycINFO (Table 1 ). References of all review articles identified in the search [ 3 , 7 , 10 , 27 - 32 ] and of all articles selected for full review were hand-searched for additional studies. All search strings were developed with the assistance of a qualified librarian.
Two investigators (EG and LG) independently assessed titles and abstracts to identify original research studies eligible for review. If eligibility could not be determined from reviewing titles and abstracts, the full article was retrieved. An article was chosen if it reported an incidence density (or cumulative incidence per year at risk) for one or more infections of interest (HIV, HBV, HCV) among inmates (incarcerated, released, reincarcerated), non-incarcerated high-risk individuals (MSM, IVDU) or both. Articles that did not provide original data, case reports, legal cases, case-control studies, and reports of outbreak investigations were excluded. We also excluded estimates of infection incidence measured among individuals entering the prison system for the first time, since these individuals had not been exposed to the correctional environment yet and, therefore, were representative of their community of origin rather than of the inmate population.
Data Abstraction
Once eligibility was determined, two reviewers (EG and MM) independently extracted data from selected articles using a standardized checklist. Discrepancies were corrected by consensus. For each infection and population of interest, the following information was retrieved: sample characteristics (age, sex and race composition, number of subjects at risk, and prevalence of infection at baseline), risk behaviors (same sex risk behaviors, intravenous drug use), number of infections during follow-up, length of follow-up, and attrition rate. Information on study characteristics was also collected (study period, site of data collection, study design, diagnostic methods, and study limitations). Depending on data availability, data on demographic characteristics and risk behavior described either the total sample tested at baseline or the baseline seronegative sample. Corresponding authors were contacted by email for data of interest not published in the articles. Of the authors contacted, 34.0% (12/34) responded with the data requested, 27.0% (9/34) responded saying that data were no longer available, and the remainder did not respond. When multiple publications reported on the same study cohort, we used the most recent and complete data.
Operational definitions
Baseline prevalence of infection was defined as the percentage of subjects who tested positive for infection at baseline; incidence density of infection as the average number of new infections in baseline seronegative subjects per 100 person-years (py) of follow-up; cumulative incidence of infection as the percentage of subjects diagnosed with a new infection among baseline seronegative subjects who had ≥1 follow-up test; attrition rate as the percentage of baseline seronegative subjects who did not undergo ≥1 follow-up; and predominant race ( sex ) as the ethnic or racial group (sex) that comprised the majority of study subjects. Continuously incarcerated inmate populations (CIIP) were defined as cohorts of inmates tested at entry, or post entry, with follow-up after at least 12 months of incarceration. There was one exception in which inmates were tested at entry or post entry, with follow-up testing at exit, regardless of the duration of their incarceration [ 33 ]. Inmates released and reincarcerated (IRAR) were defined as cohorts of inmates with at least two incarcerations during the study period who were tested at each intake, booking, or time of incarceration, with the exception of one study in which the authors described the cohort as "reincarcerated" [ 34 ].
Statistical analysis
Where person-years of follow-up and median follow-up time were reported neither by the article nor by the contacted author, we estimated total person-years accrued from the reported incidence per 100py and the total number of new infections (13 articles). For two studies [ 35 , 36 ], incidence density was calculated from data on cumulative incidence and median follow-up time. To ensure comparability of confidence limits across studies regardless of sample size, 95% confidence intervals (95%CI) were recalculated for all incidence density estimates using the exact Poisson method [ 37 ].
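The exact Poisson (Garwood) interval used here can be computed from chi-squared quantiles. A minimal sketch, applied to one figure reported later in the Results (2 HIV seroconversions over 1,901 person-years among continuously incarcerated inmates):

```python
from scipy.stats import chi2

def exact_poisson_rate_ci(events, person_years, alpha=0.05):
    """Exact (Garwood) confidence interval for an incidence rate,
    expressed per 100 person-years, via chi-squared quantiles."""
    lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return 100 * lo / person_years, 100 * hi / person_years

# 2 HIV seroconversions over 1,901 person-years (continuously incarcerated)
rate = 100 * 2 / 1901
lo, hi = exact_poisson_rate_ci(2, 1901)
print(f"rate = {rate:.2f}/100py (95%CI {lo:.3f}, {hi:.3f})")
```

Because the interval comes from the Poisson distribution itself rather than a normal approximation, its coverage does not degrade when event counts are small, which is why it keeps confidence limits comparable across studies of very different sizes.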
Due to significant heterogeneity among studies demonstrated in a fixed effects model, pooled estimates of incidence density and 95%CI were obtained for each infection and population of interest using the DerSimonian-Laird random effects method [ 38 ]. Where the number of incident cases was zero, a value of 0.5 was assigned in order to estimate standard error for pooled incidence. Heterogeneity among studies was assessed using the I 2 statistic, which estimates the proportion of total variation that is due to heterogeneity beyond chance [ 39 ]. Publication bias was assessed using Egger's test [ 40 ].
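The DerSimonian-Laird pooling and the I^2 statistic can be sketched directly. The three studies below are hypothetical (they are not data from this review), log incidence rates serve as the effect measure, and 1/events is used as an approximate variance for a log rate:

```python
import numpy as np

def dersimonian_laird(log_rates, variances):
    """Pool log incidence rates with DerSimonian-Laird random effects;
    returns the pooled log rate, its standard error, and I^2 (%)."""
    y, v = np.asarray(log_rates, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)                  # fixed-effect pool
    q = np.sum(w * (y - y_fe) ** 2)                   # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                           # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical studies: observed events and person-years of follow-up.
events = np.array([5, 40, 10])
py = np.array([1000.0, 800.0, 5000.0])
log_rate = np.log(events / py)   # log incidence per person-year
var = 1.0 / events               # approximate variance of a log rate

pooled, se, i2 = dersimonian_laird(log_rate, var)
rate_100py = 100 * np.exp(pooled)
print(f"pooled rate = {rate_100py:.2f}/100py, I^2 = {i2:.0f}%")
```

When Q exceeds its degrees of freedom, the between-study variance tau^2 becomes positive and flattens the weights toward equality, so heterogeneous studies are not dominated by the largest one; I^2 expresses the share of total variation attributable to heterogeneity beyond chance.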
For each infection of interest, we calculated separate pooled estimates for HIV, HBV and HCV incidence among continuously incarcerated inmates and inmates released and re-incarcerated. Similarly, we calculated separate pooled incidence estimates for IVDU recruited through street outreach, IVDU recruited from drug treatment programs or clinics, IVDU recruited using either approach, and all categories of IVDU combined. To explore potential sources of heterogeneity, we conducted a random-effects meta-regression analysis of HIV studies only (the number of published estimates of HCV and HBV incidence was too small to warrant separate meta-regression analyses). The potential sources of inter-study variability were defined a priori and included: sample size, risk population, percent IVDU, percent MSM, mean age, predominant race and sex, geographic location, attrition rate, baseline prevalence, person years of follow-up, and study start year. We also defined potential study design characteristics that could be sources of heterogeneity or bias as suggested by the MOOSE statement [ 41 ]. These included publication year and follow-up design (prospective or retrospective). Finally, we included whether authors had been contacted for unpublished data, and whether they responded, as two proxy variables for the completeness and availability of study data.
A sensitivity analysis was conducted to determine the robustness of HIV incidence results to the inclusion/exclusion of studies that provided incomplete or imprecise HIV incidence data. All analyses were conducted using SAS version 9.2 (SAS Institute, Cary, NC) and Microsoft Excel XP (Microsoft Corp., Redmond, Washington, USA).

Results
Literature Review
The electronic search identified 4,272 titles, of which 4,172 were excluded based on review of titles and abstracts (Figure 1 ). Full text was retrieved for the remaining 100. A backward search of references identified 26 additional titles resulting in 126 articles selected for full review. Of these, 72 were excluded based on study eligibility criteria and an additional 18 were excluded because they provided insufficient information to calculate exact 95%CI and standard errors for meta-analysis [ 42 - 47 ]; did not report annualized incidence or median follow-up time [ 48 - 54 ]; provided potentially biased estimates of HIV incidence based on a self-reported date of last seronegative test [ 55 - 58 ]; or calculated incidence estimates using a mathematical model [ 58 ]. Additionally, data from two locations (Los Angeles and San Jose, California) from one multisite study were excluded because insufficient information was provided to calculate standard errors for meta-analysis [ 34 ].
Study Characteristics
In total 36 unique studies were included in the meta-analysis (Additional file 1 ) [ 33 - 36 , 60 - 91 ]. Numerical data of interest were often difficult to locate (median Kappa statistic of agreement among reviewers for identification of six key variables, 0.46). In contrast, inter-rater agreement on abstracted data was high once the information was found (median intra-class coefficient of 1.0 for same six key variables). Of the 36 studies, 10 reported an incidence from more than one geographic location [ 34 , 64 , 69 - 71 , 75 , 77 , 78 , 81 , 82 ]. Four studies reported one or more incidence estimates for continuously incarcerated inmates [ 34 , 35 , 60 , 83 ], four for inmates released and reincarcerated [ 34 , 36 , 61 , 62 ], 23 for IVDU [ 34 , 63 - 76 , 84 - 91 ], and six for MSM [ 77 - 82 ]. Together, these studies yielded 53 estimates of HIV incidence, 10 estimates of HCV incidence, and 6 estimates of HBV incidence (Figure 1 ). For HIV, data were found on 2 seroconversions in 1,901py of follow-up among continuously incarcerated inmates, 101 seroconversions in 5,253py of follow-up among inmates released and reincarcerated, 650 seroconversions in 37,137py among IVDU, and 777 in 33,096py among MSM. HCV studies reported 4 seroconversions in 733py among continuously incarcerated inmates and 305 seroconversions in 2,544py among IVDU. Finally, HBV studies reported 33 seroconversions in 1,970py of follow-up among continuously incarcerated inmates, and 153 seroconversions in 1,193py among IVDU.
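Each study-level estimate underlying these totals is an incidence density (events per person-year). A minimal sketch of the calculation, using the released-and-reincarcerated totals above as input; note that this crude rate across studies differs from the weighted DerSimonian-Laird pooled estimate reported later.

```python
import math

def incidence_density(cases, person_years, z=1.96):
    """Crude incidence density per 100 person-years with a
    normal-approximation Poisson 95% CI."""
    rate = cases / person_years * 100
    se = math.sqrt(cases) / person_years * 100   # Poisson SE of the count
    return rate, (max(0.0, rate - z * se), rate + z * se)

# 101 seroconversions over 5,253 py among inmates released and reincarcerated
rate, ci = incidence_density(101, 5253)
```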
The majority of studies reviewed were prospective cohort studies (86.1%) (Additional file 1 ). Retrospective studies included cohorts of inmates with stored specimens from testing at admission [ 61 , 62 , 83 ], and cohorts of IVDU with stored specimens from repeat testing at drug treatment clinics [ 75 , 76 ]. Inmate and MSM cohorts were predominantly white (40.4%-85.8%), while IVDU cohorts were predominantly African American (41.0%-93.3%) (Additional file 2 ). Cohorts of continuously incarcerated inmates were 94.0%-100.0% male, whereas cohorts of inmates released and reincarcerated were composed exclusively of either men [ 36 ] or women [ 34 , 61 , 62 ]; cohorts of IVDU were 50.0% to 80.0% male.
For all infection outcomes and populations, the modal start year for data collection was 1994 (range 1984 to 2000). Most inmate studies began data collection in 1985 (range 1985 to 2000), while most IVDU and MSM studies began data collection later, i.e., in 1994 (range 1985 to 2000) and 1995 (range 1984 to 1999) respectively (Additional file 2 ).
Mean HIV baseline prevalence was 2.1% in continuously incarcerated inmates (n = 2), 6.4% in inmates released and reincarcerated (n = 4), 18.5% among all IVDU studies (n = 16), and 6.1% among MSM studies (n = 1). HCV baseline prevalence was 30.6% among continuously incarcerated inmates (n = 2) and 52.4% among all IVDU studies (n = 8). Finally, HBV baseline prevalence was 20.3% among continuously incarcerated inmates (n = 2) and 26.7% among all IVDU studies (n = 2) (Additional file 2 ).
The mean attrition rate was calculated to be 26.4% across all infection outcomes and risk populations. In inmate populations, the mean attrition rate was 19.0% (n = 6); among all IVDU studies it was 40.0% (n = 31), and among MSM studies it was 4.5% (n = 17) (Additional file 2 ).
In CIIP cohorts, the period of incarceration was at least 12 months [ 35 , 60 , 83 ], with one study reporting 8.5 years as the median [ 35 ]. Horsburgh et al. reported the incarceration period for the 2 seroconverters identified only (20 and 130 days) [ 33 ]. In IRAR cohorts, the periods of incarceration before release were reported as a mean of 4 days [ 61 ], mean 62 days [ 62 ], as "days following their arrest" [ 34 ], or were not reported [ 36 ]. The periods between incarcerations on the other hand were reported as a median of 316 days [ 36 ], median of 527 days [ 62 ], or were not reported [ 34 , 61 ].
Overall, the most common limitation reported across all risk groups and infections of interest was limited generalizability of study results due to non-random sampling methodology or differential loss to follow-up (Additional file 1 ).
Meta-analysis
The pooled estimate of HIV incidence density was lowest among continuously incarcerated inmates (0.08/100py, 95%CI:0.0,0.24), followed by a more than 10-fold higher incidence for IVDU populations recruited from treatment programs (1.14/100py, 95%CI:0.83,1.45), and highest among MSM (2.12/100py, 95%CI:1.82,2.42), street recruited IVDU (2.78/100py, 95%CI:2.24,3.32), and inmates released and reincarcerated (2.92/100py, 95%CI:2.02,3.82). Comparison of the 95%CIs suggested that HIV incidence rates were significantly lower among continuously incarcerated inmates and treatment recruited IVDU compared to incidence rates in the other three populations (Figure 2 ). Likewise, the pooled estimate of HCV incidence density was lowest among continuously incarcerated inmates (0.75/100py, 95%CI:0.05,1.44) compared to IVDU recruited through a combination of treatment programs and street outreach (13.8/100py, 95%CI:9.48,18.11), and to IVDU recruited exclusively on the street (20.11/100py, 95%CI:13.82,26.41) (Figure 3 ). Finally, the pooled estimate of HBV incidence density was low among continuously incarcerated inmates (1.71/100py, 95%CI:1.62,1.80) and significantly higher in street recruited IVDU (16.06/100py, 95%CI:15.86,16.25) and all categories of IVDU combined (16.54/100py, 95%CI:11.71,21.37) (Figure 3 ).
Inspection of forest plots and I 2 statistics confirmed that there was a high degree of heterogeneity in incidence rates of any given infection across populations; and in incidence of a given infection in a given population across studies (Figure 2 , 3 ). Results of Egger's test suggested that publication bias was present (2-sided P = 0.001).
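Egger's test, cited above, regresses each study's standardized effect on its precision; an intercept significantly different from zero indicates funnel-plot asymmetry consistent with publication bias. A self-contained sketch with hypothetical inputs (the paper's own analysis was run in SAS):

```python
import math

def egger_test(effects, ses):
    """Egger's regression: standardized effect vs. precision.
    Returns the intercept and its standard error; an intercept far
    from zero suggests small-study/publication bias."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precision
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual-based SE of the intercept
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx**2 / sxx))
    return intercept, se_int

# a constant true effect with varying SEs is perfectly symmetric,
# so the intercept is exactly zero (no evidence of bias)
intercept, se_int = egger_test([2.0, 2.0, 2.0, 2.0], [0.5, 1.0, 0.25, 0.8])
```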
Meta-regression analyses
After exclusion of sex composition from the list of covariates (because of marked collinearity with the risk population variable), the HIV sub-group meta-regression model explained 52.0% of the variance in pooled incidence of HIV infection. Differences in pooled incidence of infection were independently associated with risk population (2-sided P = 0.03), with predominant race (2-sided P = 0.03), and with person years of follow-up (2-sided P = 0.03), ( data not shown ).
In the model, pooled incidence density of infection was lowest among continuously incarcerated inmates (0.08/100py) and IVDU populations recruited from treatment programs (0.98/100py), followed by MSM (2.12/100py). Higher pooled incidence densities of infection were observed among street recruited IVDU populations (2.64/100py; based on 17 published estimates), and inmates who were released and reincarcerated (2.95/100py; based on 4 published estimates). Pooled incidence of infection was higher in study samples that predominantly included African Americans (3.05/100py) compared to Whites (1.79/100py) and other races or ethnicities (1.49/100py) ( data not shown ). Study design and data quality variables included in the meta-regression analysis as previously described were not statistically significant.
Sensitivity analyses
The sensitivity analyses indicated that HIV incidence results were generally robust to the exclusion of studies discarded from consideration in the main meta-analysis. When excluded articles were included in the analysis [ 4 - 44 , 46 , 47 , 57 , 58 , 92 - 94 ], all pooled estimates of HIV incidence remained within 8.0% of the main meta-analysis results, with a few exceptions. Pooled HIV incidence rates increased for treatment recruited IVDU (3.31/100py; 95%CI:3.0,3.6) and for all IVDU (3.06/100py; 95%CI:2.7,3.4) when two of the studies that inferred HIV incidence based on self-reported date of last seronegative test were included [ 55 , 56 ]. These estimates were 2.9 and 1.4 times greater than the estimates from the main analysis for these risk groups. Likewise, pooled HIV incidence rates increased for MSM (3.45/100py; 95%CI:3.1,3.8) when two retrospective studies that used stored specimens from routine testing of MSM with primary or secondary syphilis at STD clinics were included [ 95 , 96 ]. This estimate was 1.6 times greater than the estimate from the main analysis for MSM.

Discussion
Fueled by reports of HIV and STI outbreaks in correctional facilities in the US [ 11 , 50 , 97 - 101 ] and in other high-income countries (Scotland [ 102 , 103 ], Australia [ 17 ]), the debate about the magnitude of inmate-to-inmate transmission of HIV in the US has spanned more than two decades. The pattern of results that emerged from our comprehensive review and meta-analysis of HIV, HCV, and HBV incidence studies supports the notion that the transmission of HIV and other blood-borne infections in US correctional populations occurs at alarmingly high rates during the periods that recidivists spend outside prison. In our study, HIV incidence among inmates released and reincarcerated was much greater (2.92/100py) than in the US general population (0.02 per 100 population in 2006) [ 104 ], while HIV incidence among continuously incarcerated inmates (0.08/100py) was closer to that of the general population. Rates of HIV seroconversion among reincarcerated inmates were comparable to those typically observed among non-incarcerated individuals who engage in high-risk injecting and sexual behaviors (street-recruited IVDU, 2.78/100py; MSM, 2.12/100py). HIV incidence among IVDU enrolled in a drug treatment program (1.14/100py) lay between the lower bound observed in the general population and the higher bound observed in the high-risk groups. In contrast, intraprison incidences of HCV (0.75/100py) and HBV (1.71/100py) infection were also higher than in the US general population (0.01 and 0.02 per 100 population, respectively, in 2006) [ 105 ], but several-fold lower than among non-incarcerated IVDU (HCV, 18.53/100py; HBV, 16.54/100py) and MSM (HCV, no data available; HBV, 15.9%) [ 51 ]. The greater incidence of these infections compared with HIV probably reflects the higher prevalence rates of HCV and HBV infection among prison entrants [ 106 ] and the higher infectivity of HCV and HBV compared with HIV [ 107 , 108 ].
Despite significant heterogeneity among included studies, our results were consistent across meta-analyses and multiple meta-regression analyses. Sensitivity analyses indicated that results were only sensitive to the exclusion of four studies, two in which incidence was inferred from self reported data [ 55 , 56 ], and two in which MSM with early syphilis infection were tested retrospectively [ 95 , 96 ]. Furthermore, our summary estimates of HIV incidence among inmates were comparable with estimates from studies that were published before 1990 [ 109 - 112 ], with a study that did not report data in the desired format [ 48 ], and with studies conducted in Europe [ 113 - 115 ].
Low incidence rates of HIV transmission in prison (range, 0.0 to 0.4/100py) have been reported in three US studies published in the late 1980s [ 109 - 111 ]. The extent to which these older studies further our understanding of the current dynamics of HIV transmission in prison is unclear, since HIV prevalence at prison intake was considerably lower in the 1980s compared with the 2000s. Of note, however, study start year and publication year did not significantly contribute to the meta-regression analysis model, suggesting that calendar time was not an important source of heterogeneity among HIV studies. Also, in a US-based study that did not provide annualized incidence density estimates of HIV infection, but retrospectively followed 5,265 male inmates from their entry into custody in 1978 until 2000, 0.63% of the detainees were diagnosed with HIV infection during incarceration and 4.6% after release from prison [ 48 ]. Bias may have inflated the difference in incidence between the incarceration and post-release periods in this study, but no other source of information was found that estimated HIV incidence among inmates released from a US prison as compared to inmates undergoing periods of continuous incarceration.
In three European studies, HIV incidence among detainees ranged between 0 and 1.0/100py [ 113 - 115 ], and was highest among male and female IVDUs recruited in 1987-1988 at a prison remand centre in Sweden (the authors of these studies did not clearly indicate what percentage of inmates had been continuously incarcerated) [ 115 ]. In a small Australian study (n = 90), including inmates of both genders, a higher incidence of HCV seroconversion was found among inmates who underwent a period of release before reincarceration compared with inmates who had been continuously incarcerated (10.8 vs. 4.5/100py; P = 0.07) [ 116 ]. Although the results from these developed nations and from the US seem to be consistent in their documentation of low HIV incidence rates within the prison system and higher rates after release, international comparisons should be made with great caution given the differences in correctional systems and epidemiological contexts across countries [ 117 ]. For example, two studies identified from the developing world reported higher HIV incidence rates - Brazil (2.8/100py) [ 118 ] and Thailand (4.18/100py) [ 16 ]. It is clear that an important gap still exists in our understanding of HIV and blood-borne infection transmission in correctional environments throughout the world.
Among MSM, our pooled estimate of HIV incidence is similar to that calculated in a recent study which used a fixed-effects model to calculate a weighted average (2.39%) [ 119 ]. In another study, estimates calculated for MSM (0.7/100py) and IVDU (1.5/100py) differed in comparison to our pooled estimates [ 120 ], but that study [ 120 ] did not use meta-analysis methods to estimate HIV incidence in these risk groups.
Our findings are consistent with studies of risky behaviors in correctional populations and with hypotheses proposed to explain the apparent paradox of low incidence of HIV, HBV and HCV in the prison system and high incidence of HIV, HBV and HCV during the post-release period. As mentioned earlier, the US correctional system offers conditions seemingly favorable to the transmission of blood-borne viruses. There is a large reservoir of potential transmitters in the prison system at any time, and many inmates engage in sexual and drug-mediated risk behaviors despite the general lack of condoms [ 49 , 50 ] and sterile injection material [ 10 , 48 ]. For instance, although the frequency of drug use in prison is typically lower than in the general community [ 48 , 113 , 121 ], there is clear evidence that incarcerated drug users often continue to inject; that injecting equipment is frequently shared among inmates; and that the risk of equipment contamination by parenterally-transmitted viruses is higher within the prison system than outside of it [ 20 , 113 ].
Thus, a possible explanation for the low transmission of HIV, HCV, and HBV within the prison system is that inmates' risk networks are on average considerably smaller and more closed within correctional facilities than in the community. Given the de facto segregation of detainees by age, sex, race, category of offense and, historically in some states, by HIV status [ 10 ], it is plausible that the lack of bridges between intra-prison networks, and the small size of the networks, lead to the rapid saturation of the susceptible inmates who have effective contacts with a transmitter [ 106 ]. However, the formation of bridges between inmates' risk networks when an adequate proportion of susceptible inmates exists may lead to efficient infection transmission [ 52 , 99 ]. In contrast, studies have shown that many inmates, following re-entry into the community, revert to pre-incarceration habits and engage in high rates of unsafe sexual and intravenous drug use behaviors [ 122 - 126 ], as suggested, for instance, by the high frequency of reported anal sex [ 124 ], the excess occurrence of drug overdose [ 125 - 127 ], and the high risk of mortality [ 80 , 128 - 130 ] after release. Among the four studies on inmates released and reincarcerated, those reporting the highest post-release incidence rates followed recidivist female IVDU [ 34 ] and recidivist MSM [ 36 ]. As such, sub-populations of inmates with risky pre-incarceration behaviors may be at particularly high risk during periods between release and reincarceration.
There was a notable difference in the reported proportion of IVDU and MSM in recidivist inmate populations [ 34 , 36 , 53 , 62 ] as compared to continuous inmates [ 34 , 35 , 60 , 83 ]. Three of the four recidivist studies reported the proportion of IVDU (59.1% and 100%) or MSM (0.0% and 100%); while two continuous inmate studies reported the proportion of IVDU (2.9% and 11%) or MSM (0.0% and 4.5%) (Additional file 2 ). These data suggest that the proportion of IVDU and MSM might be larger in recidivist studies than in continuous inmate studies. If real, such a difference would be another explanation for the observed differences in incidence among the studies in continuous inmates, recidivists and community-living populations.
A meta-analysis of pooled incidence rates stratified by risk behaviors would have further clarified whether the recidivist groups are at increased risk of infection compared to their continuous inmate or community living counterparts. However, available data were not sufficient for this type of analysis. Only one study reported the incidence of HBV (8.2/100py) and HCV (5.5/100py) among continuously incarcerated inmates who reported injection drug use [ 60 ]. For both infections, the incidence in this group of inmates was greater than the pooled estimate for CIIP cohorts, but was lower than the pooled estimate for community IVDU; the study did not report whether injection drug use occurred during or prior to incarceration.
Our study is subject to limitations. Despite the crucial importance of characterizing the relations between incarceration and the HIV epidemic in the US, we found only five incidence studies published between 1990 and 2004, and one published between 2005 and 2009, that reported primary data on the transmission of HIV in US correctional populations. The recent literature on incidence of blood-borne infections among MSM was also sparse. The importance of further studies on these outcomes in these populations cannot be overemphasized. Our observation of high infection rates among inmates who were released and reincarcerated was based on four studies only, and incidence of HIV was relatively low in two of these studies (Figure 2 ). Three studies evaluated predominantly IVDU women only; one evaluated MSM only; none measured HBV and HCV incidence; and none included inmates who were not reincarcerated after their release.
In the random effects analysis, the limited number of studies that assessed incidence of HIV, HBV or HCV infection made it difficult to disaggregate the sources of heterogeneity across studies. Several studies lacked data for inclusion in the meta-regression models and, in general, data were insufficient to properly evaluate the influence of key cofactors, such as sex, age, race, and interaction between population and proportions of subjects engaging in risk behaviors. Other unmeasured factors may also have contributed to the observed heterogeneity.
Egger's test suggested that publication bias might have affected our results. Possible sources of publication bias include citation bias, poor methodological quality of smaller studies, and true heterogeneity. Although study selection criteria were clearly defined and study selection was done by two independent reviewers, we cannot entirely exclude the possibility that some studies were missed due to low citation frequency. Most reviewed studies, in particular small studies, shared one or several important methodological shortcomings, including purposive or convenience samples, inconsistent operational definition of risk populations, short follow-up times, high attrition rates, and inappropriate periods of risk assessment. As already indicated, there was significant heterogeneity across included studies, part of which is likely to have been true heterogeneity. Finally, Egger's test can be sensitive to extreme observations and large sample size, both of which were present in this meta-analysis.

Conclusion
Our findings support the notion that comprehensive strategies are needed to control the spread of parenterally and sexually transmitted viruses in US correctional populations. Examples are prevention programs to reduce transmission within prison systems [ 129 ]; transition programs to better prepare inmates for life after discharge; and interventions to ensure continuity of care in the community. While incarceration does not appear to increase the risk for HIV or other blood-borne disease infection for the average inmate, and while rates of transmission in US correctional settings appear to be lower than would be expected outside prison or jail, the lack of sufficient data for meta-regression and sub-group analyses made it difficult to draw definitive conclusions about the increased risk incarceration poses to high risk groups such as MSM and IVDU, or about the increased risk to these groups and the community during periods of release. In addition to supporting innovative intervention studies, our findings also point to the need for further research to update our understanding of the transmission of blood-borne and sexually-transmitted infections in inmate populations, and, most importantly, to clarify the role of the post-release period in infection risk and further spread to the general community. The consistency of our results across infectious agents confirms that HBV and HCV infections may be used as sentinel indicators of risk for HIV infection in correctional settings.

Background
High Human Immunodeficiency Virus (HIV) prevalence and high risk behaviors have been well documented within United States (US) correctional systems. However, uncertainty remains regarding the extent to which placing people in prison or jail increases their risk of HIV infection, and regarding which inmate populations experience an increased incidence of HIV. Describing these dynamics more clearly is essential to understanding how inmates and former detainees may be a source for further spread of HIV to the general US population.
Methods
The authors conducted a systematic review and meta-analysis of studies describing HIV incidence in US correctional facility residents and, for comparison, in high risk groups for HIV infection, such as non-incarcerated intravenous drug users (IVDU) and men who have sex with men (MSM) in the US. HIV incidence rates were further compared with hepatitis B virus (HBV) and hepatitis C virus (HCV) incidence rates in these same populations.
Results
Thirty-six predominantly prospective cohort studies were included. Across all infection outcomes, continuously incarcerated inmates and treatment recruited IVDU showed the lowest incidence, while MSM and street recruited IVDU showed the highest. HIV incidence was highest among inmates released and re-incarcerated. Possible sources of heterogeneity identified among HIV studies were risk population and race.
Conclusions
Although important literature gaps were found, current evidence suggests that policies and interventions for HIV prevention in correctional populations should prioritize curtailing risk of infection during the post-release period. Future research should evaluate HIV incidence rates in inmate populations, accounting for proportion of high risk sub-groups.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
EG designed the study; coordinated its implementation; performed the literature search, selection of articles, data acquisition, data analysis and interpretation; participated in drafting all sections of the article; and gave final approval of the version to be published. MCK designed the study; helped coordinate its implementation, assisted with the review of relevant literature; helped in data interpretation; participated in drafting all sections of the article; revised the article critically for important intellectual content; and gave final approval of the version to be published. LG performed the literature search and selection of articles; participated in reviewing the manuscript; and gave final approval of the version to be published. MM performed the systematic literature review and data acquisition; participated in reviewing the manuscript; and gave final approval of the version to be published. EWH helped with data interpretation; revised the article critically for important intellectual content; and gave final approval of the version to be published. AB performed data analysis; reviewed the manuscript; and gave final approval of the version to be published. EC designed the study; helped coordinate its implementation; assisted with the review of relevant literature, helped in data interpretation; participated in drafting all sections of the article; revised the article critically for important intellectual content; and gave final approval of the version to be published.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/777/prepub
Supplementary Material

Acknowledgements
This work received no financial support. The authors would like to thank Lee A. Vucovich, librarian at the University of Alabama at Birmingham, Lister Hill Library, for his expert assistance with the electronic literature search.

Citation: BMC Public Health. 2010 Dec 21; 10:777. License: CC BY.
PMC3016392 (PMID 21176191)

Background
Withdrawal and other traditional methods of contraception are still a method of choice for family planning in a number of developing countries such as Iran, although modern methods of family planning are easily available. Withdrawal is known to be associated with high rates of unintended pregnancy, which in turn are associated with adverse effects including delayed prenatal care, prematurity and low birth weight [ 1 , 2 ]. Studies have shown that choosing a method of contraception and using it effectively is a complex issue influenced by more than the simple availability of information [ 3 ].
According to the Iranian Demographic Health Survey (IDHS), 17.8% of couples are using withdrawal, even though the national family planning program does not encourage this method [ 4 ]. In Iran, family planning services and primary health care units provide information on contraceptive methods to individuals or couples, and these units offer contraceptives free of charge. Family planning midwives carry out counseling, intrauterine contraceptive device (IUCD) insertion and distribution of oral contraceptives. Condoms, oral contraceptives, emergency pills, and injections are available all over the country. In addition, private practice gynecologists and general practitioners, as well as midwives at maternity consultations in hospitals, are among those who provide information about contraceptive methods.
Previous studies from Iran mostly focused on demographic characteristics of women who used withdrawal [ 5 , 6 ]. For instance, a study showed that withdrawal users were younger, well educated and from urban areas [ 6 ]. However, despite the importance of these studies, they have been unable to detect the effects of other variables on the choice of withdrawal as a birth control method. It is argued that it is crucial for health care providers to assess women's attitudes about family planning prior to educating them and providing them with contraceptive services [ 7 ]. Thus, the aim of this study was to investigate the reasons for using withdrawal by Iranian women and to gain a better understanding of their experiences of and attitudes to contraceptive methods. In addition, we thought the findings from this study might provide baseline information to help design an educational program for use by the Iranian health care system and perhaps in other developing countries.
Design and participants
This was a cross-sectional study carried out in Tehran, Iran during April to September 2009. Participants were recruited from women attending five family planning clinics of public health services in the Eastern district of Tehran, Iran (the area serves as the training area for the Iran University of Medical Sciences). Interviews were conducted on certain days of the week in each clinic. Weekdays for interviews were selected randomly. Criteria for inclusion were: women aged 18-49 years, current withdrawal users, married, sexually active, and not intending to become pregnant.
Questionnaire
A structured questionnaire was designed in order to collect data. The questionnaire contained 31 items derived from the literature [ 8 , 9 ]. It consisted of three parts: (i) socio-demographic characteristics including age, education, employment status, and women's reproductive health background, (ii) reasons why women were using the withdrawal method and, (iii) questions about experiences of and attitudes to contraceptive methods [Additional file 1 ].
Analysis
Descriptive analyses were carried out to explore the data. Statistical procedures included the chi-square test for categorical data to examine the relationships between dependent and independent variables. Reason for using the withdrawal method was considered the dependent variable, and age, education, employment, time since marriage and number of children were considered independent variables. SPSS version 16.0 was used to analyze the data.
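A minimal sketch of the chi-square test described above, for a hypothetical 2x2 cross-tabulation (e.g., education level vs. whether a given reason was endorsed); the study itself ran these tests in SPSS.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 table
    [[a, b], [c, d]] - e.g. rows: lower/higher education,
    columns: reason endorsed yes/no."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # shortcut form of sum((observed - expected)^2 / expected) for 2x2
    return n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)

# hypothetical counts showing a strong association
stat = chi_square_2x2(30, 10, 10, 30)
# compare against 3.84, the P = 0.05 critical value for 1 df
```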
Ethics
Approval for the study was obtained from the Office for Protection of Research Subjects at Tarbiat Modares University. Oral informed consent was obtained from each participant before the study commenced, and participants were assured that their identities would remain confidential.
Sociodemographic characteristics of the study sample
In all, 307 women were approached and 300 agreed to be interviewed, giving a response rate of 97.7%. The mean age of participants was 31.4 (SD = 6.6) years. The mean duration of marriage was 10.0 (SD = 6.0) years. One out of four women reported that they work outside the home for financial remuneration. Reproductive history showed that nearly one out of four women had three or more children (25.7%). Overall 86 women (28.7%) reported having had at least one unwanted pregnancy. Of these, 32 women (37.2%) said that they were practicing withdrawal during the unwanted pregnancy. One out of five women reported that they had undergone induced abortion, terminating the pregnancy because it was unwanted. More than half of these (52.9%) said that they were practicing withdrawal at the time (Table 1 ).
Attitudes toward contraceptive methods
There were several different attitudes towards contraceptive methods. These are presented in Table 2 . In summary, concerns about oral pills included nervousness (79.3%), weight gain (74.7%), and irregular bleeding (72.7%). Negative attitudes towards the intrauterine device (IUCD) concerned irregular bleeding (57.0%), infection (34.3%), and pain (24.7%). Concerns related to condom and injection use were worries about unwanted pregnancy (37.7%) and severe irregular bleeding (24.0%), respectively.
Previous experiences of using contraceptive methods
Overall, 210 women (70%) indicated that they had previously used modern contraceptives: OCP (n = 111), IUCD (n = 46), condom (n = 44), or injection (n = 9). The most common reasons for stopping each method were as follows: 36.9% stopped taking the pill because of its side effects, 32.6% stopped using the IUCD due to health problems, 36.4% gave up using condoms because their husbands disliked them, and 44.4% stopped injections because they thought they might get pregnant. Table 3 shows the results.
Experiences of and attitudes to contraceptive methods
Seventy-two percent of women (n = 216) indicated that they could talk to their spouses about contraceptive use. Only 34% of the women knew about emergency contraception. Nearly half of the women correctly identified the fertile period of their menstrual cycle. Just over half reported that they were afraid of conceiving while using the withdrawal method (Table 4 ).
Main reasons for using withdrawal method
Women indicated several reasons for using withdrawal. The most important were availability free of charge (89%), followed by fewer side effects (80.7%) and ease of use (70.3%). The findings are summarized in Table 5 .
When the associations between these main reasons and the respondents' age, employment status, time since marriage and number of children were examined, no significant associations were found between these independent variables and the main reasons for using the withdrawal method. In addition, there was no significant association between educational status and two of the reasons stated by women [availability free of charge (P = 0.1) and ease of use (P = 0.1)]. However, there was a significant relationship between educational status and the statement that withdrawal imposes fewer side effects than modern contraceptive methods (P < 0.001). The results are shown in Table 6 . | Discussion
The findings of this descriptive study indicated that there are various obstacles to modern contraceptive use among withdrawal users. The main factors were health concerns and fear of side effects, misinformation related to modern contraception, lack of confidence in modern methods, dissatisfaction with sexual sensation, and unwillingness of husbands. Surprisingly, religious factors were not cited by women in our study as a reason for not choosing effective methods.
We found that the most common reasons for using withdrawal were that women believed this method involved no costs, had no side effects, and was easy to practice. Other reasons in our study that prevented modern contraceptive use were dissatisfaction with sexual sensation and husbands' unwillingness. Similarly, other investigators reported that most couples chose withdrawal because of health problems and side effects associated with modern methods [ 10 , 11 ]. A study from Turkey found that women's reasons for using traditional rather than effective methods were mistaken beliefs and fear of side effects (45.8%), unwillingness of men to use effective methods (37.5%), and cost of the methods (16.7%) [ 12 ]. In addition, the findings from the present study indicated that husbands' preference, as a reason for using withdrawal, was relatively high (54.7%). A study from Turkey also showed that 31.2% of women used withdrawal due to the preference of their husbands [ 8 ]. Yet men are still not an important target group in most programs; inadequate attention is paid to their role and their perspectives on fertility control, and they do not attend health centers for birth control. Moreover, in traditional societies such as Iran, asking husbands these questions is a very difficult task.
The present study showed that the main reasons for giving up oral contraceptives and the intrauterine contraceptive device (IUCD) were side effects and health problems, while condoms stopped being used because of spouses' dislike. This clearly suggests that two major sets of reasons could be identified for withdrawal use: women-related and husband-related factors. With regard to women-related factors, although we believe the choice should be women's right, there seems to be a need for more support to help them make an informed decision. As far as husband-related factors are concerned, however, the issue of power and gender roles may be relevant. This is consistent with the argument that men sometimes use withdrawal as a way to reinforce their decision-making and sexual control [ 13 ]. It has been suggested that gender-based power relations can have a direct effect on the ability of partners to acquire information relevant to their reproductive health, on their ability to make decisions related to their health, and on their ability to take action to protect or improve their health [ 14 ]. A study reported that side effects or health concerns accounted for a large portion of the relatively high first-year discontinuation rates for pills and injections (21% and 29%) [ 10 ]. The IUCD discontinuation rate was the lowest (9%) among all methods, compared with 38% for withdrawal and 56% for the pill [ 15 ]. In another study it was found that discontinuation rates for method-related reasons varied widely by method: the IUCD was associated with the lowest probabilities of discontinuation (11% within 12 months, 30% within 4 years), followed by the pill (22% and 48%, respectively), and discontinuation rates were significantly higher for all other methods (condoms, withdrawal, fertility awareness methods and spermicides) [ 16 ].
Since modern contraceptive use may be associated with transient side effects, women who are not prepared for these effects and do not know where to go for follow-up and advice may be expected to discontinue the method and develop fears about the use of modern methods.
The findings from the current study showed that 72% of women could talk to their spouses about family planning. This suggests that the decision related to family planning is usually a jointly negotiated agreement by the couple, rather than a husband's imposition or a woman's choice alone (see Table 4 ). A study from Turkey reported that in 86 to 88% of cases the couple jointly made their contraceptive choices [ 17 ]. In general, it is argued that reproductive decision-making is typically a joint and co-operative negotiation process between couples [ 18 ].
The scientific assertion that withdrawal has a relatively high failure rate is based on reports from a small number of studies, primarily conducted in North America, with small sample sizes that may not be representative [ 19 , 20 ]. In one study, about 19% of typical withdrawal users experienced failure during the first year [ 21 ]. In another study, it was found that 48.6% of unplanned pregnancies occurred while the couples were practicing withdrawal [ 8 ]. Prevention of unintended pregnancy is a significant public health issue and should be a focus of health policies, as it was in Healthy People 2010 in the USA [ 22 ].
Data from the Iranian Demographic and Health Survey in 2000 indicated that about one-third of pregnancies were unintended [ 5 ]. Results from the present study showed that 37.2% of unwanted pregnancies occurred when the couples were practicing withdrawal and 62.8% were related to other causes. Some of these pregnancies might have led to abortion. There are no reliable data on abortion in Iran, as abortion is illegal except when the mother's life is in danger or in cases of fetal impairment [ 23 ]. A study of withdrawal users revealed that one out of four women reported that they terminated a pregnancy because it was unplanned [ 8 ]. The results of our study showed that 86 women experienced unwanted pregnancy while using different contraceptive methods. Of these, 17 women reported that they terminated a pregnancy because it was unplanned (see Table 1 ). These women usually have to pay a large amount of money for abortions in illegal clinics, and such abortions can have serious consequences for both women and practitioners. It is argued that one reason for taking such a risk is that women do not want to have more children. For instance, a study from Turkey reported that women who had a sufficient number of children preferred induced abortion to using an effective family planning method [ 24 ].
The current study found that 34% of women knew about emergency contraception, while a study from Turkey revealed that only a few women (13.4%) knew about it [ 8 ]. Emergency contraception has been defined as the use of a drug or device to prevent pregnancy after intercourse, and it has been shown to be a safe and effective means of reducing the number of unwanted pregnancies [ 25 ]. Thus, there is a need to improve women's awareness of emergency contraception. Primary health care providers can play a major role in informing their patients about emergency contraception, and it needs to become part of routine reproductive health counseling. Awareness campaigns on emergency contraception should also be designed and implemented.
Limitations
This study has some limitations. First, the study was carried out in one district of Tehran, and thus the findings cannot be generalized to all withdrawal users in Iran. Secondly, the sample size was small. In addition, the study was limited to women; the knowledge and attitudes of men need to be considered as well if we hope to change contraceptive use in this population. However, this is the first study to investigate the experiences of and attitudes toward contraceptive methods among women using the withdrawal method in Tehran, Iran. | Conclusion
The study results suggest that withdrawal users hold misconceptions about modern contraception and exaggerate its related health problems. The results provide insight into the complexity of the underlying decision-making processes and suggest that multidimensional interventions may be needed to reduce the rate of unintended pregnancy. Contraception counseling should provide women with accurate information about contraception and address any misconceptions women may have about the safety of the various methods. It is important to discuss the positive aspects of contraception during counseling so that women feel their health will benefit rather than be adversely affected. These findings might be essential for making evidence-based policy decisions, and for planning, monitoring and evaluating reproductive health programs in Iran and elsewhere. | Background
The aim of this study was to explore withdrawal users' experiences of and attitudes to contraceptive methods in Tehran, Iran.
Methods
This was a cross-sectional study. A sample of women attending a health care center in Tehran, Iran, was entered into the study. Structured interviews were conducted to collect data.
Results
In all, 300 women who were using withdrawal took part in the study. Of these, 210 (70%) indicated that they had used modern contraceptive methods previously. The mean duration of withdrawal use was 6.5 (SD = 4.9) years and of modern contraceptive use 2.3 (SD = 2.9) years. The most common reasons for using withdrawal were that it involved no cost, required no medical advice, had fewer side effects, and was easier to use than other methods. The main obstacles to using modern contraceptives were health concerns, fear of side effects, misinformation, lack of confidence, and sexual dissatisfaction.
Conclusion
The study results suggest that withdrawal users hold misconceptions about modern contraception and exaggerate its related health problems. Indeed, these are important issues for understanding the attitudes and experiences of women in Iran. The findings from this study might be essential for making evidence-based policy decisions, and for planning, monitoring and evaluating reproductive health programs in Iran and elsewhere. | Competing interests
The authors declare that they have no competing interests.
Authors' contributions
PR was the main investigator, designed the study, collected the data, performed the analysis and wrote the first draft. AH supervised the study. FA and AK were the study advisors. ZGh helped the main investigator finalize the research project and analyze the data. AM was an honorary advisor who contributed to the analysis, critically evaluated the paper, responded to reviewers' comments, and provided the final draft. All authors read and approved the final revision of the manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/779/prepub
Supplementary Material | Acknowledgements
This research originated from a PhD thesis in health education. The authors gratefully acknowledge the Institutional Review Board of Tarbiat Modares University, which approved and supported this project. They also wish to thank the participants for their cooperation. | CC BY | no | 2022-01-12 15:21:46 | BMC Public Health. 2010 Dec 22; 10:779 | oa_package/91/30/PMC3016392.tar.gz
PMC3016393 | 21182797 | Background
The repeated findings of low levels of knowledge and behaviour related to preconception care in women of reproductive age [ 1 - 4 ], particularly with regards to adherence to nutrition and lifestyle recommendations for planning a pregnancy [ 4 ], give rise to questions about the efficacy of public health campaigns aimed at influencing these factors [ 5 ].
Women report that they prefer to obtain preconception advice from their primary care physician, yet only 39% of women recall doing so [ 6 ]. This figure suggests that doctors do not routinely address preconception care in practice or implement existing preconception care guidelines, and also indicates that women do not actively seek such care.
While research to date has provided insights into the factors associated with lower rates of folic acid supplement use, such as unintended pregnancy, age, socio-economic status and ethnic group [ 1 , 7 , 8 ], there has been little published regarding women's views or any exploration of barriers and enablers to the delivery and uptake of preconception care and periconceptional folate supplementation. By ignoring these views we may neglect aspects of care provision which are important from the perspective of consumers of health care [ 9 ].
Preconception care guidelines and recommendations have been developed in many countries [ 10 - 12 ]. In order to capitalise on the preventive 'window of opportunity' of pregnancy, further information is needed to understand how to best implement preconception care guidelines [ 5 ]. As part of a larger study to develop and evaluate an intervention to improve the delivery of preconception care, our aim was to determine women's views on the barriers and enablers to preconception care uptake and periconceptional folate supplementation. | Methods
Study participants
Three focus groups were undertaken between October and November 2007 with 17 women of reproductive age (18-45 years). The total number of focus groups was limited, as their primary aim was to inform the next phase of the study and to complement focus groups undertaken with GPs. Focus groups took place in a low and a high socioeconomic area of metropolitan Melbourne and in South Gippsland, a rural area of Victoria, Australia. We used the Index of Relative Socioeconomic Disadvantage [ 13 ] to determine postcodes in the lowest and highest quartiles of the index and recruited women residing at these postcodes through advertisements in local newspapers. Participants in our rural focus group were recruited through a convenience sample at a local playgroup and Division of General Practice due to a lack of response to the newspaper advertisement. Each focus group consisted of 5-7 participants.
Data Collection
The objectives and format of the focus groups were explained to participants before commencement, and their anonymity was assured. Participants gave written consent to participate. To maintain consistency, all focus groups were conducted by the same facilitator (DM) in a conversation-like manner and followed a schedule of guiding questions (Table 1 ). The study was approved by the Monash University Standing Committee on Ethics in Research Involving Humans.
Data analysis
Data from the focus groups were audio-taped, transcribed verbatim and entered into NVIVO 7 software [ 14 ] to organise the data. Initially, transcripts were read and re-read by both authors in order to familiarise themselves with the data. To increase rigour, each transcript was independently coded line by line by both authors. An inductive process of thematic analysis, as described by Braun and Clarke [ 15 ], was employed to identify key issues and themes within the data. For areas where coding differed, agreement of interpretation was reached through meetings between both authors. The thematic results were then presented to the project advisory group (which included content and methodological experts) for discussion and further interpretation.
Quotes representing typical views expressed by the women were extracted from the transcripts and are presented in the results to illustrate the themes identified. | Results
Barriers
Four major barriers to the uptake of preconception care were found and one barrier to the uptake of periconceptional folate supplementation was consistently identified. With the exception of service provider issues, all barriers identified were consistent across groups.
Degree of receptivity
There was a strong sense that women had to be receptive to the information in order to act upon it. This receptivity was dependent on their life stage and whether they were thinking about getting pregnant soon.
"You have to be in the zone." (Rural, 29 years)
Women who had already had a child felt they already knew about preconception care issues and didn't need to access preconception care.
"Because you think you know it. I don't have a child with spina bifida, so why should I take the folate? And I don't need to go the doctor because I've done it all before." (Low SES, 34 years)
Conception is a normal event
Because conception is perceived as an event that most women go through, women felt that it was normal and that there was no need for medical attention or intervention prior to it occurring.
"Some women also think it's a natural thing; that we should just go ahead and get pregnant and all have healthy babies because that is the norm and is what's expected." (Low SES, 34 years)
Interestingly this contrasted with women's sense that it was important to present to a general practitioner (GP) as soon as they knew they were pregnant as they felt medical attention then became necessary.
"I thought the importance [of presenting to a GP] was more on when you were pregnant. When you think about going [to a GP], it's once you have fallen pregnant." (Rural, 29 years)
Perceived risk and lack of awareness of the need for preconception care
Most women were unaware of the need for preconception care in general and said that their GPs did not offer preconception care or inform them of its availability. They were surprised at the breadth of issues that could comprise a preconception care consultation.
"It's a bit daunting when you get all this [preconception care information]. It's a lot to take in, but this is what I would like to have gotten from my GP. The thing is - I never had the concept of preconception care in my mind." (Rural, 26 years)
"Unless you actually go and pursue it [preconception care consultation] nothing will happen. (Low SES, 21 years)
Service Provider Issues
Service provider issues were a major theme elicited in the focus groups and the theme most related to socioeconomic status and rurality. Women living in high socioeconomic areas described attending alternative health practitioners such as naturopaths and Chinese medical practitioners for preconception advice. This was particularly the case for women who had experienced trouble conceiving or who were currently accessing assisted reproductive technologies such as in-vitro fertilisation. Many women had been given specially prepared supplements by their naturopath but were unaware of the contents of these supplements. They nevertheless had implicit trust that they contained 'all that was needed' to optimise conception and pregnancy outcomes.
"I don't have a GP; I have a naturopath, and have a really close relationship. She's been my guiding force behind all the information." (High SES, 33 years)
"If I was looking at information from a dietary perspective I would go to a naturopath. I would ask a GP but that's not their area of specialty and they're not so keen on supplements and vitamins." (High SES, 24 years)
"I am trying lots of things. I'm seeing a Chinese medicine specialist and he knows all the medications - more than my GP. I'm impressed about what he knows."(High SES, 40 years)
For a number of women the GP was not initially thought of as a major provider of preconception care. The reasons given for this were many. In rural areas there was difficulty accessing GPs, cost was a barrier for some women and women across all groups felt that as they were young and healthy, and in some cases yet to have children, they had not established an ongoing relationship with any particular GP and did not identify anyone as 'their GP'. Women also perceived the role of the GP to be one of 'acute care' and did not consider that GPs could or should deliver preventive care. Many felt that other patients who were unwell should have priority in a stretched system.
"You go to a GP because you're sick, not to discuss family planning." (High SES, 40 years)
"If I went in there to have a chat about planning pregnancy, would I be wasting their time?"(Rural, 26 years)
Women also felt that because there wasn't a procedure to carry out (in contrast to having a pap smear), they didn't feel they had to attend their GP to obtain preconception care advice.
"Because it's not active, with your pap smear and breast check you get something done, but for preconception care it's an appointment to go and talk, it's not actually a procedure." (Rural, 35 years)
Women's main source of information regarding preconception care was from friends and family and increasingly from the internet. Broadcast and printed media were not raised by participants.
"If I wanted information like that [preconception care] I would probably just go to an internet source...I wouldn't think of necessarily going to the GP if I can read it elsewhere." (Rural 26 years)
The most widely named internet sites being accessed for preconception related information were sites run by commercial for profit companies (media companies, companies producing nappies and those producing preconception supplements).
Nature and symbolism of folate supplements
Most women expressed confusion about periconceptional folate supplementation: the dose, timing and benefits. There was particular confusion about the nature and efficacy of branded 'preconception' products and how these compared to supermarket brand products. They also voiced concern about the cost of vitamins. The purchase of folate supplements was for many women a clear marker of intent to conceive and they felt that keeping them on a shelf at home was an indication or sign to their partners that they were intending to conceive.
"That was one of the things; I just bought them [supplements] from the supermarket and they all had different milligrams. Do you have a multivitamin or just straight folate? I didn't know which one was better." (Rural, 29 years)
"I've always bought them [folate supplements] and had them ready to go but never really knew why." (Low SES, 36 years)
Enablers
Two major thematic areas were identified in relation to enablers to the uptake of preconception care and periconceptional folate supplementation.
High motivation to optimise pregnancy outcome
In relation to enablers of delivery and uptake of preconception care and periconceptional folate supplementation, there was universal agreement that women had a strong desire to achieve the best outcome possible for their baby and as such were very motivated to access information and engage in preventive care.
"I was really concerned; I wanted to do everything that I could for its [the baby's] welfare." (Rural, 32 years)
Proactive promotion by GPs
While there was general agreement that public health campaigns and the media had a large role to play in raising awareness of folate supplementation and other preconception issues, there was strong support for GPs to be more proactive in making known the availability and need for preconception care.
"If they said to me if you think you might ever want to start planning a family these are some of the things we can discuss with you...I would file that away and when I'd made my decision I would come to talk to them about it." (Rural, 35 years)
Suggestions made were that during consultations for other matters GPs should discuss the availability and need for preconception care and suggest that women make another appointment to return. There were other suggestions made such as sending women of reproductive age a letter from the GP inviting attendance for preconception care, running preconception classes similar to 'antenatal classes', having preconception appointments available with a nurse, having posters in the waiting room to advertise preconception care and having information and patient brochures in waiting rooms. | Discussion
This qualitative study provides new insights into the views of women of reproductive age regarding the barriers and enablers to the delivery and uptake of preconception care and periconceptional folate supplementation, taking into account the significant variables of socioeconomic status and rurality. Although a relatively small sample of women was recruited to this study, the findings may give future direction to strategies for the implementation of preconception care guidelines in general practice.
Our findings, consistent with other studies [ 16 ], indicate a great willingness on the part of women to optimise their health in preparation for pregnancy; however, several factors act as barriers. Major amongst these is the lack of GP 'push' for preconception care, with women reporting that GPs do not inform them of either the need for preconception care or its availability. Additionally, there is a lack of patient 'pull' for these services because of unfamiliarity with the concept of preconception care, the breadth of issues it covers, and its availability through general practice. Furthermore, whilst most women felt obligated to present to a GP once pregnant, a number of women stated that the concept of preconception care over-medicalised a natural human event. This dissonance of resisting the medicalisation of planning the occurrence of pregnancy whilst supporting it once pregnancy occurs may comprise a barrier that, if not more fully understood, will continue to impede the success of efforts to promote family planning and preconception care.
In contrast to our results, a small study exploring why women did not respond to an invitation to attend for preconception care found that despite generally subscribing to the value of preventive behaviours and a healthy lifestyle, women perceived themselves as having sufficient knowledge of preconception care issues and/or not being at risk. Some also misunderstood the aim of preconception care as being advice about infertility [ 17 ]. This perception of sufficient knowledge was only consistent for women in our study who had previously had children. For these reasons, asking women to complete existing screening tools for preconception risk factors may assist in overcoming these barriers and facilitate the provision of preconception care delivery and uptake [ 18 ].
An important finding in our study is the way women perceived the role of the GP with regards to prevention. Lower SES and rural women were reluctant to use GPs' time for preventive activity that they perceived as only involving counselling, feeling that they would be depriving those in need of more urgent medical care. Higher SES women, on the other hand, felt that preconception care was not within the remit of practice or skills of a GP and turned to alternative health professionals such as naturopaths. We suggest that these findings might be relevant to the implementation of other forms of prevention, and that patient perceptions of the role of GPs in prevention require further exploration.
Bille and Anderson (2009) suggest that preconception counselling should be part of a scheduled preventive health program, acknowledging that couples would need to seek out and attend such programs [ 5 ]. In contrast, our study suggests women need to be receptive to preconception care in order to access it and that they would like to have it promoted to them by their GP and by other means such as letters.
Whilst this study highlights the perspectives of women of reproductive age, several limitations exist. Firstly, the findings of this study are limited in generalisability, given the small sample size and the specific geographic locations of the focus groups. Women who participated in this study may not be representative of the population sampled, and different perspectives may exist for groups of women not specifically targeted in this study, i.e. teenagers, single women, middle-income women, and women with co-morbidities or substance addiction. Furthermore, the perspectives of women may also differ in areas where active policy and public health campaigns specifically address preconception care and periconceptional folate supplementation. Secondly, as periconceptional folate supplementation is only one aspect of preconception care, the findings in relation to it are somewhat weakened, as women did not fully appreciate preconception care in general. | Conclusion
Having identified the views of women and the fact that multiple barriers and enablers to the uptake of preconception care and periconceptional folate supplementation exist, we believe that further research is needed to identify which of these are the most important and amenable to change. Prior to developing an intervention to improve preconception care, it will also be important to determine the views of GPs and other health professionals about the barriers they experience to preconception care delivery and how these relate to what women describe, and to understand the theoretical basis involved in changing health professionals' behaviour. For the delivery and uptake of preconception care to improve, it is essential that there is demand from both parties. This more rigorous approach to designing interventions that better target the barriers to practice change may result in more success in implementing preconception care [ 19 - 21 ]. | Background
Despite strong evidence of the benefits of preconception interventions to improve pregnancy outcomes, the delivery and uptake of preconception care in general and periconceptional folate supplementation in particular remains low. The aim of this study was to determine women's views of the barriers and enablers to the uptake of preconception care and periconceptional folate supplementation.
Methods
Focus groups were undertaken in 2007 with 17 women of reproductive age (18-45 years). To identify key issues and themes within the data, focus groups were analysed using an inductive process of thematic analysis.
Results
Most women were unaware of the need to attend for preconception care and were surprised at the breadth of issues involved. Women also felt general practitioners (GPs) should be more proactive in promoting preconception care availability but acknowledged that they themselves had to be thinking about pregnancy or becoming pregnant to be receptive to it. Barriers to periconceptional folate supplementation included confusion about reasons for use, dose, duration, timing and efficacy of folate use. Enablers included the desire to do anything they could to ensure optimum pregnancy outcomes, and promotional material and letters of invitation from their GP to advise them of the availability and the need for preconception care.
Conclusion
A number of important barriers and enablers exist for women regarding the delivery and uptake of preconception care and periconceptional folate supplementation. It is essential that these patient perspectives are addressed in both the implementation of evidence based clinical practice guidelines and in the systematic design of an intervention to improve preconception care delivery. | Competing interests
The authors declare that they have no competing interests.
Authors' contributions
DM developed the concept and design of the study, and was responsible for data acquisition. DM and AC analysed the data, interpreted the results, drafted the manuscript, and approved the final manuscript. DM is the guarantor.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/10/786/prepub | Acknowledgements
We thank members of the advisory committee for this project for their input: Professors Shane Thomas & Sally Green, Associate Professor Jane Halliday, Dr Bronwyn Harvey, and Ms Elizabeth Logan and Ms Danielle Rigoni for their advice. | CC BY | no | 2022-01-12 15:21:46 | BMC Public Health. 2010 Dec 23; 10:786 | oa_package/05/cc/PMC3016393.tar.gz |
PMC3016394 | 21144017 | Background
The location and pattern of inflammation in inflammatory bowel disease (IBD) are variable. Whereas ulcerative colitis (UC) is limited to the colon with a sharp delineation between the involved and non-involved mucosa, Crohn's disease (CD) can affect any part of the gastrointestinal tract and is associated with patchy distribution of mucosal lesions. Ileal localization occurs in about 80% of CD patients, and about 30% of CD patients have isolated ileal disease. Although it is generally accepted that IBD develops as a result of an altered immune response to luminal content in a genetically susceptible host, the mechanism by which the site of disease is selected remains unknown. Important differences in function, architecture and bacterial distribution between the ileum and colon have been described. Peyer's patches, which consist of aggregated lymphoid tissue and play a central role in the induction of mucosal immune responses, are a hallmark of the terminal ileum. Increased numbers of mucosa-associated E. coli are observed in IBD, and adherent invasive E. coli (AIEC) strains were highly associated with the ileal mucosa in CD patients. Moreover, the reduced number of goblet cells in the ileum results in decreased mucus secretion and increased contact between the mucosa and luminal content [ 1 - 3 ].
Several subsets of T helper (Th) cells contribute to defensive responses at inflammatory sites [ 4 ]. Dendritic cell-derived cytokines skew the differentiation of naïve CD4 + T cells into Th1, Th2, Th17 or regulatory T cell (Treg) subsets. For many years, CD was believed to be mediated by Th1 cytokines, while UC was believed to be mediated by Th2 cytokines; however, recent data have implicated Th17 cells in the pathogenesis of IBD [ 5 - 9 ]. The invasion of extracellular bacteria into the intestinal mucosa triggers the expression of IL-23A, driving Th17 cells to release IL-17A, IL-17F, IL-21, IL-22 and IL-26, which in turn exert a number of proinflammatory effects on intestinal epithelial cells, endothelial cells, macrophages and fibroblasts [ 10 ]. In addition to their proinflammatory functions, IL-17A, IL-17F and IL-22 have been reported to induce increased expression of epithelial barrier protective genes such as defensins, mucins, tight junction proteins and lipopolysaccharide-binding proteins [ 11 - 14 ].
The differentiation of Th17 cells depends on the activation of janus kinase 2 (JAK2), signal transducer and activator of transcription 3 (STAT3) and the transcription factor RAR-related orphan receptor C2 (RORC2) and is regulated by a combination of cytokines, including IL-6, IL-1B (IL-1β), TGFB1 (TGFβ), IL-23A, and the autocrine activity of IL-21 [ 4 , 15 - 20 ]. Chemokine (C-C motif) receptor 6 (CCR6), which is expressed on the surface of Th17 cells, contributes to their recruitment to chemokine (C-C motif) ligand 20 (CCL20) produced at the inflamed mucosa [ 21 ].
The important role of Th17 cells in the pathogenesis of IBD is also supported by genome-wide association studies, which have demonstrated that CCR6 , STAT3 , JAK2 , IL23R and IL12B are CD susceptibility genes [ 22 - 24 ]. Interestingly, single nucleotide polymorphisms (SNPs) within IL23R , IL12B , STAT3 , JAK2 , the IL22/IL26 and the IL2/IL21 gene cluster have also been found to be associated with UC [ 22 , 24 - 27 ].
Although the expression of Th17-related genes has been studied previously, most studies included only colonic samples and were focused on a limited number of genes. Increased expression of IL17A , IL17F , IL22 , IL26 , IL21 , CCL20 and CCR6 has been found in inflamed colonic tissues of IBD patients [ 4 , 14 , 28 - 33 ]. In only one study, IL17A and IL23 were mildly increased in active ileal CD samples [ 34 ].
To examine the possible differences in the expression levels of genes involved in the Th17 pathway, we assessed the mRNA levels of the Th17 effector cytokines and genes involved in the differentiation and recruitment of Th17 cells in both colonic and ileal biopsies of healthy controls, UC patients and CD patients. | Methods
Patients and samples
A total of 72 macrodissected intestinal tissue samples from 22 healthy controls, 12 UC patients and 26 CD patients were obtained during colonoscopy with a Single-Use Biopsy Forceps Radial Jaw3 (Boston Scientific, El Coyol, Costa Rica) (Table 1 ). The size of a biopsy specimen was usually between 2-4 mm² with an estimated average weight of 6.4 mg. UC and CD patients were diagnosed based on clinical, endoscopic and histological criteria. Patient characteristics, medication intake and the Montreal classification, which summarizes the subclassification of IBD patients, are shown in Table 1 [ 48 ]. Mucosal inflammation was defined as the presence of endoscopic signs of disease activity. Samples from healthy controls were taken from the ileum and sigmoid of patients who underwent colonoscopy to screen for cancer or polyps. All biopsies collected during colonoscopy were immediately stored in RNALater (Ambion, Cambridgeshire, UK) at -80°C. The study was in accordance with the guidelines of the Helsinki Declaration (1964, as amended in 1975, 1983, 1989, 1996 and 2000) of the World Medical Association. Informed consent was obtained from all patients, and the protocol was approved by the local Ethics Committee of Ghent University Hospital (EC UZG 2004/242).
RNA extraction, cDNA synthesis and amplification
Total RNA was extracted from 2-3 pooled mucosal samples using an RNeasy Mini Kit (Qiagen, Westburg BV, The Netherlands) with on-column DNAse treatment (Qiagen). Needle homogenization was performed. The total RNA was quantified using spectrophotometry (Nanodrop; Thermo Scientific, Wilmington, USA) and ranged from 150 ng to 16.8 μg with an average of 5.9 μg total RNA. The quality of the RNA, expressed as RNA quality indicator (RQI), was checked by automated electrophoresis (Experion, Bio-Rad, Hercules, California) and ranged from 7.4 to 10 with an average of 8.6. Starting from 20 ng of total RNA, the WT-Ovation RNA Amplification System (Nugen Technologies Inc., San Carlos, USA) was used strictly according to the manufacturer's instructions, generating approximately 6 μg of cDNA. First-strand cDNA was prepared from total RNA using both oligo-dT and random hexamer primers and reverse transcriptase. After the generation of double-stranded cDNA, a DNA amplification step developed by NuGEN was performed. The cDNA was diluted to 50 μl.
Quantitative real-time PCR
PCR amplification reactions were carried out in a total volume of 8 μl containing 2× SYBR Green I Master Mix (Eurogentec, Seraing, Belgium), 3 μl 1/100 cDNA (~3.75 ng) and 250 nM forward and reverse primers (BioLegio, Nijmegen, The Netherlands). All reactions were performed in 384-well plates (LightCycler 480 Multiwell Plates 384, white, and LightCycler 480 Sealing Foils from Roche) on the CFX384 real-time PCR detection system (Bio-Rad, Hercules, California), followed by a regression Cq value determination method. Cycling conditions were as follows: 95°C for 10 min followed by 45 cycles of 95°C for 10 s and 60°C for 30 s, followed by a dissociation curve analysis from 60 to 95°C. Because instrument and liquid handling variations were shown to be minimal using the Tecan Freedom Evo robot for pipetting (<6% CV in 0.5 μl and <3% CV in 2 μl), and a large number of biological replicates were used, no PCR replicates were carried out. Primers containing neither SNPs nor secondary structures were designed for Glyceraldehyde 3-phosphate dehydrogenase ( GAPDH ), Succinate dehydrogenase complex subunit A ( SDHA ), Hypoxanthine-guanine phosphoribosyltransferase ( HPRT ), IL8 , TNF , TGFB1 , IL1B , IL6 , IL23A , CCR6 , STAT3 , IL17A , IL17F , IL21 , IL22 , IL26 , CCL20 and RORC (Table 2 ). For RORC , two sets of primers were designed: RORC , which detects the mRNA levels of both isoforms, and RORC1 , which detects only the full-length transcript. BLAST searches confirmed that only the target genes were 100% covered. A 6-point 4-fold standard dilution series (highest concentration: 32 ng/μl) of a cDNA mixture of all samples included in the study, diluted in 5 ng/μl tRNA (Roche), was used to test the PCR efficiency of the primers. The dynamic range had to cover at least 3 orders of dilution. Only primers with an efficiency between 88% and 112% were retained (Table 2 ). Correlation coefficients (R²) of the targets ranged from 0.9843 to 1, with a mean of 0.9942 (Table 2 ).
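The efficiency screen described above (a 6-point 4-fold dilution series, retaining only primers in the 88-112% window) amounts to a least-squares fit of Cq against log10 input. A minimal sketch follows; the Cq values below are invented to illustrate an ideal 100%-efficient assay, not data from this study.

```python
import math

def standard_curve(concs_ng, cqs):
    """Least-squares fit of Cq = slope * log10(conc) + intercept.

    Returns (slope, efficiency_percent, r_squared). The amplification
    factor per cycle is 10**(-1/slope) (2.0 for perfect doubling), so
    efficiency in percent is (10**(-1/slope) - 1) * 100.
    """
    xs = [math.log10(c) for c in concs_ng]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cqs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cqs))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, cqs))
    ss_tot = sum((y - my) ** 2 for y in cqs)
    r2 = 1.0 - ss_res / ss_tot
    efficiency = (10 ** (-1.0 / slope) - 1.0) * 100.0
    return slope, efficiency, r2

# Hypothetical 6-point 4-fold dilution series (highest concentration 32 ng/ul)
concs = [32.0, 8.0, 2.0, 0.5, 0.125, 0.03125]
# Invented Cq values for an ideal assay (slope -3.3219, i.e. 100% efficiency)
cqs = [25.0 - 3.321928 * math.log10(c) for c in concs]
slope, efficiency, r2 = standard_curve(concs, cqs)
```

With these invented inputs the fit recovers a slope near -3.32 and an efficiency of 100%, comfortably inside the 88-112% acceptance window used in the study.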
The PCR efficiency for each gene was calculated according to the equation E = 10^(-1/slope). Each sample's melting curve was reviewed for a single sharp peak with a high correlation between the observed and the expected Tm (mean deviation of 0.9°C). Samples with patterns other than a single sharp peak at the expected Tm, defined as multiple peaks, a single broader peak or a shoulder peak, were omitted. Cq values of samples with flattened melting curves were set to 45. An amplification signal in the no-template control (NTC) was ignored as long as the difference in Cq value between the NTC and the highest sample Cq was >5. Although the pre-amplification method of NuGEN does not amplify genomic DNA, possible gDNA contamination was assessed using intronic primers. We confirmed that gDNA was undetectable in a dilution of up to 32 ng/μl cDNA [ 49 ]. The mRNA expression level of each gene was determined in Excel using the comparative 2^(-ΔΔCq) method and normalized to the geometric mean of the stably expressed reference genes GAPDH , SDHA and HPRT as determined by geNorm [ 50 ].
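The 2^(−ΔΔCq) normalization step can be sketched as below. All Cq values are invented for illustration; the three reference Cqs stand in for a panel such as the GAPDH/SDHA/HPRT trio used in the study.

```python
def delta_delta_cq(cq_target, cq_refs, cq_target_cal, cq_refs_cal):
    """Relative expression by 2**(-ddCq), normalized to reference genes.

    Taking the geometric mean of reference-gene quantities (geNorm-style)
    is equivalent to taking the arithmetic mean of their Cq values,
    because geomean(2**-Cq_i) == 2**-mean(Cq_i).
    """
    d_sample = cq_target - sum(cq_refs) / len(cq_refs)
    d_calibrator = cq_target_cal - sum(cq_refs_cal) / len(cq_refs_cal)
    return 2.0 ** (-(d_sample - d_calibrator))

# Invented example: target gene in an inflamed sample vs a control
# (calibrator), each normalized to three reference genes.
fold = delta_delta_cq(24.0, [20.0, 21.0, 19.0], 27.0, [20.0, 21.0, 19.0])
```

Here the target sits 4 cycles above the reference mean in the sample but 7 cycles above it in the calibrator, giving an 8-fold (2³) relative increase. This assumes ~100% PCR efficiency for all assays, which the 88-112% primer screen is designed to justify.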
According to the MIQE guidelines, the minimum information for publication of quantitative real-time PCR experiments was provided [ 51 ].
Statistical analysis
Statistical differences were assessed using a non-parametric, two-tailed Mann-Whitney U test. Correlations were analyzed with Spearman's correlation coefficient. P -values less than 0.05 were considered significant. Statistical analysis was performed using SPSS software 11.5 (SPSS, Chicago, USA).
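As a sketch of the primary test, the Mann-Whitney U statistic can be computed from pooled rank sums, with ties given average ranks. The group values in the checks are hypothetical; in the study itself SPSS supplied the statistic and two-tailed p-value.

```python
def _ranks(values):
    """1-based ranks, with tied values assigned the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic (the smaller of U1 and U2)."""
    pooled = list(a) + list(b)
    r = _ranks(pooled)
    rank_sum_a = sum(r[:len(a)])
    u1 = rank_sum_a - len(a) * (len(a) + 1) / 2.0
    u2 = len(a) * len(b) - u1
    return min(u1, u2)
```

Completely separated groups (e.g. [1, 2, 3] vs [4, 5, 6]) give U = 0, the most extreme value; converting U to a p-value then requires exact tables or a normal approximation, which a statistics package handles.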
The mRNA expression levels of inflammatory cytokines and Th17-related genes in colonic and ileal samples of healthy controls
The expression level of the proinflammatory cytokine IL8 was equal in colonic and ileal controls, while the expression level of TNF ( TNFα ) was slightly higher in ileal control samples than in colonic control samples (P = 0.037) (Figure 1A ).
The expression levels of the Th17 effector cytokines IL17A , IL17F , IL21 , IL22 and IL26 were comparable between colonic and ileal control samples (Figure 1B ); however, the expression of these cytokines in each individual sample was not consistent. In one ileal sample and none of the colonic samples, all five cytokines were expressed. In one colonic sample and four ileal samples, three to four cytokines were expressed, while in eight colonic samples and seven ileal samples, one to two cytokines were expressed. In four colonic samples and three ileal samples, none of the five cytokines were detected.
We next examined the expression of genes involved in the differentiation of Th17 cells. TGFβ (P = 0.0005) and STAT3 (P = 0.007) were expressed at higher levels in ileal control samples than in colonic control samples, while IL6 , IL1β and IL23A expression levels were similar in colonic and ileal control samples (Figure 1C ).
Expression of genes involved in the recruitment of Th17 cells was also assessed. CCR6 expression was higher in ileal control samples than in colonic control samples (P = 0.0008), while CCL20 expression was similar between colonic and ileal controls (Figure 1D ).
The mRNA expression levels of inflammatory mediators in inflamed colonic and ileal samples of IBD patients
Although all samples were taken from endoscopically inflamed mucosa, we quantified inflammation by measuring the expression of the proinflammatory cytokines IL8 and TNFα . The expression level of IL8 has been shown to be associated with the grade of inflammation [ 35 , 36 ]. IL8 was strongly induced in inflamed colonic samples from UC (P < 0.0001) and CD patients (P = 0.0004) and in inflamed ileal samples from CD patients (P < 0.0001) (Figure 2A ). The expression level of IL8 was significantly higher in UC samples than in colonic CD samples (P = 0.017). Expression of TNFα was only significantly increased in UC samples (P = 0.0002); however, a tendency for increase was observed in ileal CD samples (P = 0.052) (Figure 2A ). Expression levels of TNF α were significantly higher in UC samples than in colonic CD samples (P = 0.028).
The mRNA expression levels of Th17 effector cytokines in inflamed colonic and ileal samples of IBD patients
In UC samples, IL17A (P = 0.0002), IL21 (P = 0.0021), IL22 (P = 0.0003) and IL26 (P = 0.0017) were strongly induced, and IL17F (P = 0.046) was weakly induced, while in colonic CD samples, IL17A (P = 0.0084) and IL22 (P = 0.001) were strongly induced, and IL21 (P = 0.047), IL26 (P = 0.048) and IL17F (P = 0.011) were weakly induced (Figure 2B ). In ileal CD samples, IL22 (P = 0.0015) was strongly induced, while IL17A (P = 0.013), IL26 (P = 0.029) and IL17F (P = 0.019) were weakly induced, and expression levels of IL21 were similar to those in ileal controls. Furthermore, in the colon, expression levels of IL17A (P = 0.005) and IL21 (P = 0.036) were significantly higher in UC samples than in CD samples. In CD samples, expression of IL21 (P = 0.0097) was significantly higher in the colon than in the ileum.
The mRNA expression levels of genes involved in the differentiation of Th17 cells in inflamed colonic and ileal samples of IBD patients
IL1β (UC, P≤0.0001; CD, P = 0.0009), IL6 (UC, P≤0.0001; CD, P = 0.0007), TGFβ (UC, P = 0.0002; CD, P = 0.0001) and IL23A (UC, P = 0.0006; CD, P = 0.003) were strongly induced in both UC and colonic CD (Figure 2C ). STAT3 was only slightly induced in colonic CD (P = 0.015) and UC (P = 0.018). Although a strong induction of the Th17 differentiation factor IL1β (P = 0.0007) and a weak induction of IL6 (P = 0.015) were observed in ileal samples, TGFβ , IL23A and STAT3 were not induced. In the colon, IL23A was expressed at higher levels in UC samples than in CD samples (P = 0.027). In CD samples, TGFβ (P = 0.0008) and IL23A (P = 0.010) were expressed at higher levels in the colon than in the ileum.
The mRNA expression levels of genes involved in the recruitment of Th17 cells in inflamed colonic and ileal samples of IBD patients
Expression levels of both CCR6 (UC, P = 0.0004; CD, P = 0.011) and CCL20 (UC; P = 0.0003, CD; P = 0.0009) were significantly increased in colonic CD and UC samples but not in ileal CD samples (Figure 2D ).
The mRNA expression levels of the master transcription factor RORC in inflamed colonic and ileal samples of IBD patients
Two sets of primers for RORC were developed. The RORC primers detect both the full-length transcript and the shorter T cell-specific isoform, while the RORC1 primers detect only the full-length transcript. A strong correlation was found between RORC1 and RORC expression levels (R = 0.776, P < 0.001). When comparing the RORC1 and RORC levels between healthy controls and UC and CD patients, higher significance levels were achieved using the primer detecting both transcripts. This suggests a stronger role for the T cell-specific RORC2 isoform.
In colonic samples from CD and UC patients, RORC mRNA expression levels were comparable to those in healthy controls, while in ileal CD samples, expression levels of RORC were lower than those in control samples (P = 0.0019) (Figure 3 ). | Discussion
A growing body of human studies and studies in mouse models has shown that Th17 effector cytokines promote chronic intestinal inflammation through the induction of multiple proinflammatory mediators. The role of Th17 cells has predominantly been studied in CD; these cells have rarely been studied in UC and have never been studied in healthy controls. Although it is generally accepted that IBD develops as a result of an altered immune response to luminal content in a genetically susceptible host, the factors influencing the selection of the disease site remain unknown. At present, even the mechanisms controlling development of ileitis and/or colitis in transgenic or gene-targeted mouse models are unclear.
In the present screening study the mRNA expression signatures of Th17 pathway-associated genes were evaluated in colonic and ileal samples of healthy controls, UC and CD patients. We first compared the expression levels in healthy colons to those in healthy ilea. Evaluation of the Th17 effector cytokines revealed no significant differences in expression levels between the colon and the ileum. Notably, these cytokines were not detectable in the majority of control samples, although expression was more frequently observed in ileal samples than in colonic samples. In addition, the expression of one Th17 effector cytokine was not necessarily linked to the expression of the other effector cytokines in a single mucosal specimen. The relative stasis of luminal content in the terminal ileum supports our observation that Th17 effector cytokines were more often present in the ileum than in the colon because IL-17A, IL-17F and IL-22 enhance the production of antimicrobial peptides, which protect the intestinal mucosa against bacterial invasion [ 11 - 14 ]. The diversity of bacteria present among individuals might explain why not all individuals express Th17 effector cytokines.
Recently, a subset of CD4 + T cells that provide help to B cells for antibody production in germinal centers (GC), termed follicular helper T cells (Tfh), were identified [ 37 , 38 ]. Tfh cells produce IL-21, which is necessary for GC formation. The increased number of ileal controls expressing IL21 could reflect the presence of Peyer's patches in the terminal ileum, where B lymphocytes predominate in the GC. The associated increased ileal expression of the downstream signaling molecule STAT3 supports this assumption.
CCR6, which is present not only on Th17 cells but also on Treg cells, B cells, neutrophils and immature dendritic cells, plays a critical role in the migration of these cells to its ligand CCL20, which is produced at inflammatory sites [ 14 , 20 , 33 ]. CCR6-positive cells have been detected in lymphoid organs like Peyer's patches and isolated lymphoid follicles and seem to be more common in the ileum than in the colon. TGFβ was shown to be the main factor for induction of CCR6 mRNA expression in Th17 cells and dendritic cells [ 39 , 40 ]. In our study, the increased expression of CCR6 in ileal controls was correlated with increased ileal TGFβ (r = 0.785, P = 0.0003).
The increased expression of the proinflammatory cytokine IL8 in the inflamed colonic and ileal IBD samples confirmed the endoscopic inflammatory state of our samples. The relationship between the expression of proinflammatory cytokines and the grade of inflammation or disease activity index has been described before and is supported by our observation that IL8 expression levels in samples from patients in remission are similar to those of healthy controls (data not shown) [ 35 , 36 ]. TNFα is less useful as a parameter of inflammation because reports about its expression and secretion in inflamed samples are contradictory [ 41 - 43 ]. In support of this view, expression of TNFα was increased in UC patients but not in CD patients.
Except for ileal IL21 , significant induction of all Th17 effector cytokines was observed in inflamed colonic and ileal IBD samples. Moreover, induction of IL17A and IL21 was significantly more pronounced in UC than in colonic CD, and this induction was associated with more intense inflammation as defined by increased induction of IL8 . In contrast, this association was not found in ileal CD where only marginal induction of Th17 cytokines was detected in ileal CD samples with IL8 expression levels similar to those observed in UC samples. Marked induction of IL1β , IL6 , TGFβ and IL23A , genes involved in the differentiation of Th17 cells, was observed in colonic inflammation. The downstream signaling molecule STAT3 was only moderately increased in colonic samples. In ileal CD samples, except for a strong induction of IL1β , only a weak induction of IL6 and no increase in TGFβ , IL23A and STAT3 was detected. In parallel, significant increases in CCR6 and CCL20 , genes involved in the recruitment of Th17 cells, were only observed in colonic samples, supporting a less pronounced infiltration of Th17 cells in ileal inflammation.
The observed increase in expression levels of IL17A , IL17F , IL22 and IL26 and the downstream proinflammatory cytokines IL6 and IL1β in the inflamed ileum could originate from cells other than Th17 cells. Lymphoid tissue inducer-like cells, which are important in the development of lymphoid organs, are an innate source of IL-17A and IL-22 [ 44 ]. Paneth cells, which are common in the ileum, also express IL-17A [ 45 ]. Natural killer cells, natural killer T cells and a newly identified T helper cell, Th22 cells, which are involved in inflammatory skin disorders, are sources of IL-22 [ 46 ]. Importantly, IL-17A, IL-17F and IL-22 mediate protective effects through the induction of defensins [ 11 - 14 ].
Although mRNA expression levels are not the optimal way to study the activity of transcription factors, the significantly reduced expression of ileal RORC supports a defect in the Th17 pathway in ileal disease.
We should consider the fact that gene expression could be affected by the use of anti-inflammatory drugs. Although 42% of the included patients were on medication, statistical analysis did not show an effect.
Given that SNPs within CCR6 , STAT3 , JAK2 , IL23R , IL12B , the IL22/IL26 and the IL2/IL21 gene cluster have been found to be associated with CD and/or UC, we should bear in mind the influence of these SNPs on Th17 cytokine profiles. Recently, response to anti-TNF therapy was demonstrated to be modulated by IL23R variants, linking Th17 function to biologicals [ 47 ]. Twenty-two percent of the samples in this study were included in the GWAS of Barrett and thus genotyped for CCR6 , STAT3 , JAK2 , IL23R and IL12B [ 23 ]. Unfortunately, due to a low frequency of patients heterozygous or homozygous for the different risk alleles, such comparisons were not conclusive [ 47 ].
Many factors inherent to the heterogeneous nature of biopsies may influence RNA levels. Although we must be cautious in extrapolating mRNA expression data to functional immunological conclusions, the use of biological replicates and the previously demonstrated association between RNA and protein levels for most genes included in this study support reliable conclusions.
In conclusion, this study demonstrated important differences in the expression of Th17-associated genes in colonic and ileal disease. Inflammation of the colon of both CD and UC patients is clearly associated with increased expression levels of Th17 effector cytokines and genes involved in the differentiation, amplification and recruitment of Th17 cells, whereas in the inflamed ileum, essential factors for differentiation and recruitment of Th17 cells are missing. The differential expression of Th17-related genes between the colon and ileum could reflect different immune regulation in the colon and ileum, suggesting different therapeutic approaches for CD patients with colonic versus ileal disease. Future clinical trials of agents blocking Th17-related genes should take into account the disease location in CD patients. In addition, the difference in expression profiles between the colon and ileum also provide the potential of identifying diagnostic biomarkers. | Background
Immunological and genetic findings implicate Th17 effector cytokines in the pathogenesis of inflammatory bowel disease (IBD). Expression of Th17 pathway-associated genes is mainly studied in colonic disease. The present study assessed the mRNA expression levels of Th17 effector cytokines ( IL17A , IL17F , IL21 , IL22 and IL26 ) and genes involved in differentiation ( IL6 , IL1B , TGFB1 , IL23A and STAT3 ) and recruitment of Th17 cells ( CCR6 and CCL20 ) by quantitative real-time PCR analysis of colonic and ileal biopsies from 22 healthy control subjects, 26 patients with Crohn's disease (CD) and 12 patients with ulcerative colitis (UC). Inflammation was quantified by measuring expression of the inflammatory mediators IL8 and TNF .
Results
Evaluation of mRNA expression levels in colonic and ileal control samples revealed that TNF , TGFB1 , STAT3 and CCR6 were expressed at higher levels in the ileum than in the colon. Expression of all the Th17 pathway-associated genes was increased in inflamed colonic samples. The increased expression of these genes was predominantly observed in samples from UC patients and was associated with more intense inflammation. Although increased expression of IL17A , IL17F , IL21 and IL26 was detected in inflamed ileal samples, expression of the indispensable Th17 cell differentiation factors TGFB1 and IL23A , the signaling molecule STAT3 and the Th17 recruitment factors CCR6 and CCL20 were unchanged.
Conclusions
Our findings suggest that immune regulation is different in colonic and ileal disease, which might have important consequences for therapeutic intervention. | Authors' contributions
SB had substantial contributions to the conception, design, execution and analysis of the study, and drafted the manuscript; DL participated in the design and interpretation of the data; HP and MDV carried out the sampling of gut specimens and contributed to the interpretation of the data, LM and KO contributed to the RNA extraction, RNA quality determination, cDNA synthesis and qPCR analysis; JV participated in designing and organizing the qPCR analysis; JV, NB, GV and DE participated in critically revising the manuscript; MDV carefully revised and edited the manuscript with important intellectual contributions and coordinated the research group. All authors read and approved the final version of the manuscript. | Acknowledgements
This study was supported by a concerted action grant GOA2001/12051501 of Ghent University, Belgium. | CC BY | no | 2022-01-12 15:21:46 | BMC Immunol. 2010 Dec 13; 11:61 | oa_package/28/1f/PMC3016394.tar.gz |
PMC3016395 | 21176205 | Background
It is becoming increasingly clear that the microbiota condition and prime immunological function with an unexpected level of interdependence between bacteria and the immune system [ 1 ]. Accumulating evidence suggests that certain bacterial strains provide protective signals while other bacterial strains stimulate aggressive and damaging immune responses [ 2 - 5 ]. In other words, the activity of the mammalian immune system seems to be governed by the balance between symbiotic and potentially pathogenic factors derived from our microbial inhabitants. This raises the possibility that dysbiosis can lead to inappropriate inflammatory responses while on the other hand certain well selected anti-inflammatory microbes may protect against aberrant inflammatory activity.
The most important aspect of immunological function is the ability to protect against infectious microbes. The host response to infection requires innate and acquired cellular and humoral immune reactions, designed to limit spread of the offending organism and to restore organ homeostasis [ 6 ]. However, to limit the aggressiveness of collateral damage to host tissues, a range of regulatory constraints may be activated, such as induction of T regulatory cells [ 7 ]. A successful immune response is characterized by the efficient elimination of the pathogenic organism with minimal inflammatory damage to the host and the associated inflammatory cascades which may promote inflammatory disease. Innate pro-inflammatory signaling in response to microbial exposure is mediated by the activation of transcription factors, such as NF-κB, resulting in expression of a battery of effector molecules contributing to host defense and inflammation [ 8 ]. A number of bacterial products have been identified which directly block activation of the NF-κB pathway in epithelial cells via a range of novel mechanisms including the blockade of Iκ-B poly-ubiquination by non-pathogenic Salmonella strains or the enhancement of NF-κB export from the nucleus by Bacteroides thetaiotaomicron [ 9 , 10 ]. In addition, non-pathogenic microbes such as Bifidobacterium infantis 35624 have been demonstrated to limit excessive NF-κB activation via the induction of T regulatory cells [ 11 ].
Interest in the deliberate administration of microbes, or microbial metabolites, for the treatment of aberrant inflammatory activity associated with an exuberant immune response to pathogens is gaining momentum. The typical microbes which are currently being examined include Bifidobacteria , Lactobacilli , non-pathogenic E. coli and Bacteroides strains [ 12 - 17 ]. The protective effects associated with these microbes are probably mediated by multiple mechanisms involving epithelial cells, dendritic cells and T cells. One such organism, Bifidobacterium animalis AHC7 ( B. animalis AHC7), has been previously shown to protect against Salmonella typhimurium infection in murine models and helps resolve acute diarrhea in dogs [ 18 , 19 ]. However, the host immunological molecular events contributing to this protective mechanism have not been described. Therefore, we have examined activation of the pro-inflammatory transcription factor NF-κB in animals pretreated with B. animalis AHC7 or E. faecium SF68. E. faecium SF68 has been previously described as a probiotic organism with immunomodulatory activity and we used this bacterial strain as a comparator microbe for B. animalis AHC7 [ 20 , 21 ]. In addition, we have examined the influence of B. animalis AHC7 on dendritic cell activation and T cell polarization. Our data suggests that the B. animalis AHC7 protective effect is associated with modulation of NF-κB activity in vivo while in vitro studies demonstrate that dendritic cell recognition of this bacterium is associated with induction of CD25+Foxp3+ T cells. | Methods
Bacterial strains
B. animalis AHC7 was routinely cultured anaerobically for 48 hours in de Man, Rogosa and Sharpe medium, MRS (Oxoid, Basingstoke, UK), supplemented with 0.05% cysteine (Sigma, Dublin, Ireland). Enterococcus faecium SF68 was routinely cultured aerobically at 37°C for 24 hours in tryptic soya broth, TSB (Oxoid). Freeze-dried powders of these strains were generated for use in subsequent animal studies. Salmonella typhimurium UK1 was generously provided by Prof. Roy Curtiss III (Washington University, USA) and was routinely cultured aerobically at 37°C for 24 hours in TSB (Oxoid).
Murine studies
NF-κB lux transgenic mice on a Balb/c background were obtained from Charles River Laboratories (MA, USA) and bred in-house for Salmonella infection studies. Mice were housed under barrier-maintained conditions within the biological services unit, University College Cork (UCC). All animal experiments were approved by the UCC animal ethics committee and experimental procedures were conducted under appropriate license from the Irish government. Female NF-κB lux mice were administered freeze-dried B. animalis AHC7 or E. faecium SF68 in their water supply at approximately 5 × 10 8 CFU/day, or cryoprotectant carrier, for 3 weeks (n = 10 per group) prior to a single oral challenge with 1 × 10 7 S. typhimurium. NF-κB activation was visualized at time 0 and 4 hours following infection using the protocol as previously described [11,27]. Briefly, D-luciferin (120 mg/kg; Biothema AB, Handen, Sweden) was injected i.p., and anaesthetized mice were immediately placed in a ventral recumbent position in an In Vivo Imaging System (IVIS) chamber (Xenogen, Alameda, USA) and imaged continuously for 5 minutes at a medium sensitivity setting, starting 2 minutes after the injection of D-luciferin. Photons were quantified using Living Image software (Xenogen) and luciferase activity was expressed as the amount of light emitted per second per cm 2 from the animal. Following imaging, all mice were humanely euthanized.
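The flux readout described above reduces the imaged photon counts to light emitted per second per cm² over a region of interest. As an illustration of that normalization (not the Living Image internals), a minimal sketch in which the photon count, acquisition time and region-of-interest area are all hypothetical, and a 20 g mouse weight is assumed for the D-luciferin dose:

```python
def luciferase_flux(total_photons, exposure_s, roi_area_cm2):
    """Normalize total photon counts to photons emitted per second per cm^2."""
    return total_photons / (exposure_s * roi_area_cm2)

# D-luciferin dose at 120 mg/kg for an assumed 20 g mouse:
dose_mg = 120 * 0.020  # -> 2.4 mg per injection

# Hypothetical acquisition: 3.6e7 photons collected over the 5-minute (300 s)
# scan from a 12 cm^2 whole-body region of interest.
flux = luciferase_flux(3.6e7, 300.0, 12.0)
print(flux)  # 10000.0 photons/s/cm^2
```

Normalizing to exposure time and area is what makes whole-body activity comparable between animals of different sizes and between imaging sessions.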
Wild-type Balb/c mice were obtained from Harlan (Oxon, UK) and bred in-house for bifidobacterial feeding. Female Balb/c mice (n = 8 per group) received either freeze-dried B. animalis AHC7 or placebo (cryoprotectant carrier) administered on a daily basis in their drinking water. Each animal received an approximate dose of 5 × 10 8 CFU/day. Drinking water bottles were changed every day with fresh probiotic freeze-dried powders as previously described [ 19 , 28 ]. After 3 weeks of treatment the mice were humanely euthanized followed by isolation of Peyer's patches and mesenteric lymph nodes.
In vitro Culture of Mucosal Cells
Single cell suspensions from Peyer's patches or mesenteric lymph nodes from Balb/c mice were cultured in vitro for 48 hours with anti-CD3/anti-CD28 antibodies (BD Biosciences, Oxford, UK) or remained non-stimulated. Supernatants were harvested for cytokine analysis and stored at -80°C until quantification by multiplex profiling. IL-6, TNF-α, IL-10, IL-12p70, IFN-γ and MCP-1 cytokine levels were quantified using cytometric bead arrays (BD).
Dendritic Cell Isolation and Culture
Human CD14+ monocytes were isolated from peripheral blood using antibody labeling and magnetic separation (Miltenyi, Gladbach, Germany) and cultured in the presence of IL-4 (200 ng/ml, gift from Novartis, Basel, Switzerland) and GM-CSF (1,000 Units/ml, Peprotech, Hamburg, Germany) for 5 days to generate CD11c+ monocyte derived dendritic cells (MDDCs). 5 × 10 5 MDDCs were incubated with 1 × 10 7 B. animalis AHC7 labeled with carboxyfluorescein succinimidyl ester (CFSE, Invitrogen, Carlsbad, USA). After two hours incubation, MDDCs were stained with anti-human CD11c-PE-Cy5 (BD) and cells were visualized using the Image Stream X system (Amnis Corporation, Seattle, USA) and analyzed with IDEAS software (Amnis Corporation). In addition, non-CFSE stained B. animalis AHC7, or LPS (Sigma), were used to stimulate MDDCs for 24 hours and cytokine levels in the culture supernatants were measured by the multiplex Luminex platform.
MDDC-T Cell Co-Culture
MDDCs were isolated as above and autologous CD4+ T cells were isolated using antibody binding and magnetic separation (Miltenyi). MDDCs were stimulated with B. animalis AHC7, LPS or remained non-stimulated for four hours and washed four times to remove un-bound bacteria. MDDCs were co-incubated with autologous CD4+ T cells for five days (1:20 ratio) in FCS-free medium followed by re-stimulation with anti-CD3/CD28 antibodies for two days. Flow cytometric quantification of CD4+CD25+Foxp3+ T cells (all antibodies from eBioscience, Frankfurt, Germany) was performed on a Gallios flow cytometer (Beckman Coulter) and the data was analysed using Kaluza software (Beckman Coulter).
Statistics
Two-way ANOVA with Bonferroni's post-test was used to determine statistical significance between treatment groups for NF-κB activation. Differences in cytokine levels between groups were evaluated using the Mann-Whitney U test. All data were expressed as arithmetic mean ± SEM. The level of statistical significance was set at p < 0.05. All statistical evaluations were performed using the statistical software package GraphPad Prism 4.03 (GraphPad Software, Inc., San Diego, CA, USA).

Results
B. animalis AHC7 suppresses NF-κB activation in vivo
Infection of mice with S. typhimurium results in potent systemic activation of NF-κB after 4 hours (Figure 1a). Representative animals were dissected and individual organs re-imaged for localisation of in vivo NF-κB activity (Figure 1b). Isolated ileum displayed specific areas of activation which macroscopically co-localized with Peyer's patches. Within the colon, a different pattern was observed, with a single focus of high activity accompanied by a lower level of activation in the proximal and distal segments. Isolated spleen and liver also displayed elevated NF-κB activity following 4 hours of S. typhimurium infection. Placebo-fed animals displayed a significant up-regulation of NF-κB whole body activity after 4 hours of infection (Figure 2). Pre-feeding with B. animalis AHC7 significantly attenuated the activation of this pro-inflammatory transcription factor. In order to determine if the attenuation of NF-κB activity was bacterium-strain specific, additional mice were pre-fed E. faecium prior to S. typhimurium infection. Up-regulation of NF-κB activity was similar in the placebo-fed and E. faecium-fed animals (Figure 2).
B. animalis AHC7 mucosal immunoregulatory activity
In order to determine if lymphocytes within the mucosa could be influenced by B. animalis AHC7 consumption, healthy mice were administered the bacterial strain or placebo for 3 weeks. Peyer's patch lymphocytes displayed an altered cytokine secretion profile following in vitro stimulation. Upon anti-CD3/CD28 antibody stimulation, Peyer's patch cells from B. animalis AHC7-fed animals secreted significantly lower levels of IFN-γ and TNF-α (Figure 3a). In contrast, secretion of IL-10 was unaltered while IL-12p70 levels were increased (p = 0.07) compared to placebo-fed animals (Figure 3b). No differences were noted for IL-6 or MCP-1 levels (data not shown) or for the un-stimulated cultures. In addition, no alteration in anti-CD3/CD28-stimulated mesenteric lymph node secretion of IL-6, IL-10, IL-12p70, TNF-α, IFN-γ or MCP-1 was observed for B. animalis AHC7-fed animals compared to placebo-fed controls (data not shown).
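Group differences like these were tested with the Mann-Whitney procedure named under Statistics. A self-contained sketch of that test using the normal approximation, with invented cytokine values (n = 8 per group, matching the study design); a stats package would add tie and continuity corrections omitted here:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney test via the normal approximation.

    A sketch suitable for group sizes of ~8; assumes few or no ties.
    """
    n1, n2 = len(x), len(y)
    # U statistic: number of (x, y) pairs with x > y, counting ties as 0.5.
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = abs(u - mu) / sigma
    p = math.erfc(z / math.sqrt(2.0))  # two-sided p under the normal approximation
    return u, p

# Invented IFN-gamma levels (pg/ml): placebo- vs B. animalis AHC7-fed mice.
placebo = [410, 385, 522, 467, 398, 510, 441, 476]
ahc7 = [212, 188, 265, 240, 199, 251, 228, 205]
u, p = mann_whitney_u(placebo, ahc7)
print(u, p < 0.05)  # complete separation of the groups: U = 64.0, p < 0.05
```

With complete separation between the groups, U takes its maximum value (n1 × n2 = 64) and the test rejects at p < 0.05, mirroring the significant IFN-γ and TNF-α reductions reported above.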
B. animalis AHC7 interaction with dendritic cells
Human monocyte derived dendritic cells (MDDCs) were co-incubated with CFSE-labelled B. animalis AHC7 in vitro and imaged for evidence of bacterial binding. CFSE labeled B. animalis AHC7 was easily visualized on the cell surface of CD11c+ MDDCs indicating efficient binding of this bacterium by human myeloid dendritic cells (Figure 4 ). In addition, MDDCs secreted IL-10 and IL-12p70 in response to B. animalis AHC7 (Figure 5 ). Indeed, IL-10 secretion in response to this microbe was significantly greater than the MDDC response to LPS while the level of IL-12p70 secretion was equivalent to that induced by LPS.
B. animalis AHC7 conditioned dendritic cells influence T cell polarisation
Following co-incubation with B. animalis AHC7 for 4 hours, MDDCs were extensively washed and co-incubated with autologous CD4+ T cells for 5 days. T cells were re-stimulated with anti-CD3/CD28 antibodies for 2 days and transcription factor levels quantified by flow cytometry. B. animalis AHC7-stimulated MDDCs induced significantly more CD25+Foxp3+ T cells compared to non-stimulated MDDCs or LPS-stimulated MDDCs (Figure 6).

Discussion
This report illustrates, at a cellular and molecular level, the impact of the commensal microbiota on host immune defense and immune homeostasis. The deliberate consumption of one commensal organism, B. animalis AHC7, resulted in the attenuation of NF-κB activation within mice infected with a pro-inflammatory translocating microbe, S. typhimurium. In addition, B. animalis AHC7 consumption was associated with modulation of cytokine signaling within the mucosa of healthy animals. In vitro, dendritic cells bound B. animalis AHC7 and secreted IL-10 and IL-12p70 in response, and B. animalis AHC7-stimulated dendritic cells enhanced Foxp3 expression in naïve T cells.
Infection by microbes such as Salmonella species results in severe immunopathology characterized by loss of intestinal barrier function, tissue cell death and loss of organ function, and fever, and can ultimately lead to death of the host [22]. The pathology is not only a direct effect of the microbe itself but is a consequence of the inflammatory response induced by the presence of the organism. Therefore, Salmonella infection of mice is a good model system in which to examine the regulatory mechanisms that protect against excessive responses to a range of pro-inflammatory stimuli, and not only Salmonella itself. The resident microbiota can aid in the protection against aberrant inflammatory activity, as suggested by studies in germ-free mice in which infection with S. typhimurium results in colitis that is not observed in conventionally colonized animals [23]. In addition, specific commensal organisms such as B. animalis AHC7 protect against S. typhimurium infection and disease severity [19]. While B. animalis AHC7 may directly antagonize Salmonella within the gastrointestinal tract, this is unlikely to be the primary mechanism, as other commensal organisms that were significantly more effective than B. animalis AHC7 at killing S. typhimurium in vitro did not protect against infection in vivo [19]. These studies suggest that the biological activity of certain commensal microbes is exerted well beyond a direct influence on the microbiota within the gastrointestinal tract. Therefore, we investigated whether B. animalis AHC7 could exert an effect on the host via regulation of the pro-inflammatory response as a potential mechanism underpinning its protective effect.
S. typhimurium infects the host via intestinal epithelial cells and dendritic cells. In addition, Salmonella infects via M cells which transport the bacterium to underlying Peyer's patches for immunological processing [ 24 ]. Infection of the Peyer's patches leads to recruitment of a large number of pro-inflammatory infiltrating leukocytes which further aggravate intestinal inflammation and promote systemic dissemination of the pathogen. B. animalis AHC7 consumption leads to an altered cytokine profile within Peyer's patches which may protect against excessive inflammation. Release of TNF-α and IFN-γ by stimulated Peyer's patch lymphocytes in vitro is significantly reduced. In contrast, release of the Th1 cytokine IL-12 is enhanced with no alteration in IL-6, IL-10 or MCP-1 levels. Interpretation of these results is complex as IFN-γ and IL-12 are both considered to be Th1 cytokines. IL-12p70 is released by cells of the innate immune system, such as dendritic cells, while IFN-γ is secreted by T cells suggesting that B. animalis AHC7 consumption may differentially regulate dendritic cell and T cell cytokine production. In vitro , B. animalis stimulated dendritic cells secrete IL-12 and therefore the increased IL-12 release by Peyer's patch cells may be dendritic cell derived. Regardless of the mechanism, it is clear that in vitro stimulated cytokine responses from Peyer's patch cells are modulated by B. animalis AHC7 consumption and it is likely that these altered responses contribute to the anti-inflammatory effect observed in the Salmonella model. In particular, reduced secretion of TNF-α, which is a key pro-inflammatory cytokine, would reduce the inflammatory burden following S. typhimurium infection.
Innate immune activation by Salmonella is mediated via pattern recognition receptors, such as TLR-5, which rapidly up-regulate NF-κB activity [25]. Both mucosal and systemic NF-κB activation in response to Salmonella infection was noted, suggesting that widespread activation of the innate immune system occurs rapidly, and this response is modulated by B. animalis AHC7 consumption. The molecular basis for this inhibitory activity is not known and may involve induction of suppressor molecules, induction of regulatory cells (such as Tregs), down-regulation of TLR expression and/or activity, and enhancement of the mucosal immunological barrier. Previously we have shown that increased numbers of CD25+Foxp3+ T cells can reduce NF-κB activation in vivo [11]. We did not assess CD25+Foxp3+ T cell polarization in B. animalis AHC7-fed animals, but the in vitro co-culture model using B. animalis-conditioned dendritic cells clearly demonstrated that this bacterium can induce a dendritic cell response which induces CD25+Foxp3+ T cells. However, it remains to be determined if this mechanism is responsible for the B. animalis AHC7 anti-inflammatory effect in vivo.

Conclusion
NF-κB is a key pro-inflammatory transcription factor and improved regulation of NF-κB is an important therapeutic target in a wide range of pro-inflammatory states, including sepsis [26]. This report supports the clinical evaluation of appropriately selected probiotic/commensal micro-organisms, such as B. animalis AHC7, for the promotion of regulatory mechanisms in vivo which limit aberrant pro-inflammatory activity. However, it is clear from this study and others that not all commensal microbes modulate host immunological activity in the same way, and the defining features of potent anti-inflammatory microbes remain to be described.

Background
Bifidobacteria and lactobacilli are among the early and important colonizers of the gastrointestinal tract and are generally considered to be part of a normal, healthy microbiota. It is believed that specific strains within the microbiota can influence host immune-reactivity and may play a role in protection from infection and aberrant inflammatory activity. One such strain, Bifidobacterium animalis AHC7, has been previously shown to protect against Salmonella typhimurium infection in mice and helps resolve acute idiopathic diarrhea in dogs. The aim of this study was to investigate the potential molecular and cellular mechanisms underpinning the Bifidobacterium animalis AHC7 protective effect.
Results
Following 4 hours of infection with Salmonella typhimurium, NF-κB activation was significantly elevated in vivo in placebo- and Enterococcus faecium-fed animals, while Bifidobacterium animalis AHC7 consumption significantly attenuated the NF-κB response. In vitro, anti-CD3/CD28-stimulated Peyer's patch cells secreted significantly less TNF-α and IFN-γ following Bifidobacterium animalis AHC7 consumption. Stimulated cells released more IL-12p70, but this difference did not reach statistical significance. No alteration in mucosal IL-6, IL-10 or MCP-1 levels was observed. No statistically significant change in the cytokine profile of mesenteric lymph node cells was noted. In vitro, Bifidobacterium animalis AHC7 was bound by dendritic cells and induced secretion of both IL-10 and IL-12p70. In addition, co-culture of CD4+ T cells with Bifidobacterium animalis AHC7-stimulated dendritic cells resulted in a significant increase in CD25+Foxp3+ T cell numbers.
Conclusion
Bifidobacterium animalis AHC7 exerts an anti-inflammatory effect via the attenuation of pro-inflammatory transcription factor activation in response to an infectious insult, associated with modulation of pro-inflammatory cytokine production within the mucosa. The cellular mechanism underpinning Bifidobacterium animalis AHC7-mediated attenuation of NF-κB activation may include recognition of the bacterium by dendritic cells and induction of CD25+Foxp3+ T cells.

Authors' contributions
DOM, FOB, DG and PS performed the Salmonella infection and non-infected murine experiments and the associated cytokine analysis. SM performed the in vivo NF-κB measurements. TB, FS and BK contributed to the design and analysis of all experimental data. PK performed the dendritic cell imaging studies while MZ performed the dendritic cell-T cell co-cultures. JSP contributed to the data analysis and preparation of the manuscript while LOM conceived the studies, contributed to the study design, data interpretation and manuscript preparation. All authors read and approved the final version of this manuscript.

Acknowledgements
Funding Source. The authors are supported in part by Science Foundation Ireland, the Health Research Board (HRB) of Ireland, the Higher Education Authority (HEA) of Ireland, the Swiss National Foundation, Alimentary Health Ltd and the Procter & Gamble Company.

BMC Immunol. 2010 Dec 22; 11:63 (License: CC BY)
PMC3016396 (PMID: 21246041)

Introduction
Hepatitis B virus (HBV) infects the liver as its primary target and may elicit progressive liver injury, leading to increased risk of developing liver cirrhosis, liver failure and liver cancer [1]. Chronic infection with HBV is the result of an ineffective anti-viral immune response towards the virus [1]-[3]. The exact mechanism by which HBV escapes immunity is still not known.
In general, the immune system is alerted and evokes a number of mechanisms that are aimed at eradicating the viral attack immediately following viral infection. The initial response to viral infection is the rapid release of type I interferons (IFN), IFNα and IFNβ, which is observed for most viruses studied [4] . These IFN enhance the first defense against viral infections and modulate both innate and adaptive immune cells. Indications of the role of type I IFN during HBV infection are mostly based on studies in chimpanzees, since this is the only animal that can be infected with HBV. In sharp contrast to other viruses including hepatitis C virus, chimpanzees infected with HBV showed a complete lack in the induction of type I IFN and in IFN-response genes during the early stages of infection [5] . It is difficult to study the early events of acute HBV infection in humans. Nevertheless, it was recently shown that type I IFN responses are also lacking in acute HBV patients [6] .
Plasmacytoid dendritic cells (pDC) are the principal producers of type I IFN and play a central role in immune responses against viral infections [7], [8]. pDC respond to viruses and other pathogens primarily through the recognition of pathogen-associated molecular patterns by two intracellular Toll-like receptors (TLR), TLR7 and TLR9, which recognize single-stranded RNA and unmethylated DNA motifs, respectively [9], [10]. TLR-triggering activates pDC to rapidly produce high levels of type I interferons, but also other cytokines, including TNF-α and IL-6, and cell surface co-stimulatory molecules. In this way pDC exert a direct anti-viral effect by producing factors that inhibit viral replication, but they also activate natural killer (NK) cells and T lymphocytes, allowing further priming and regulation of anti-viral immunity [7], [11], [12].
Circulating blood pDC numbers seem to be unaffected by HBV, but functional deficits in pDC from chronic HBV patients, including impaired IFN-α production, have been reported [13]. Recently, it was reported that patient-derived HBsAg binds to human pDC in vitro and impairs TLR9-induced IFNα production by pDC [14]. The presence of HBV-DNA in or on pDC in vivo in chronic HBV patients [15], [16] indicates that at least the whole virus, and not only HBsAg, interacts with pDC. Whether HBV particles and/or other HBV-derived proteins present in the patient's circulation also interfere with pDC function is not known. Given the central role that pDC play in antiviral immune responses, understanding the mechanisms whereby pDC interact with and respond to HBV may provide fundamental insights into the regulation of HBV-specific immunity and the development of HBV chronicity. Therefore, the present study investigated the effect of HBV as whole particles and HBV-derived proteins, i.e. HBcAg, HBeAg and HBsAg, on direct and indirect anti-viral functions of pDC.

Materials and Methods
HBV, proteins and TLR ligands
HepG2.215-derived HBV particles were purified and quantified as described before [38] and, unless indicated otherwise, used at 100 or 2000 geq/cell for PBMC and pDC cultures, respectively. As a negative control, the same procedure was followed with supernatant from untransfected HepG2 cells. Additionally, patient-derived HBV was purified from serum of a chronic HBV patient (genotype B, viral load 3.28×10 9 IU/ml) using the same procedure. Serum of a healthy control was treated similarly and used as a control. Recombinant Chinese hamster ovary (CHO) cell-derived HBsAg and HBeAg, and HBcAg derived from E. coli (Prospec, Rehovot, Israel), were used at 5 μg/ml and added, like the HBV particles, either in the absence or presence of Polymyxin B (50 μg/ml; Sigma-Aldrich, St. Louis, MO). G.M.G.M. Verjans and G.F. Rimmelzwaan (both Dept of Virology, Erasmus MC) provided HSV-1 (MOI 10) and Influenza virus (H1a, MOI 0.2), respectively. Synthetic TLR ligands included CpG-2336 (10 μg/ml, Coley Pharma, Düsseldorf, Germany) and Loxoribine (Lox, 0.4 mM, Invivogen, San Diego, CA). In all experiments, HBV or viral proteins were added right before the addition of synthetic TLR ligands or other pDC-stimulating agents.
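The geq/cell doses above translate into inoculum volumes via the viral stock titer. A minimal sketch of that calculation; the 1×10^10 geq/ml stock concentration is an assumption for illustration, not a value taken from the paper:

```python
def stock_volume_ul(n_cells, geq_per_cell, stock_geq_per_ml):
    """Volume of viral stock (in microliters) needed for a target geq/cell dose."""
    total_geq = n_cells * geq_per_cell
    return total_geq / stock_geq_per_ml * 1000.0

# Purified pDC culture: 2e4 cells dosed at 2000 geq/cell from an assumed
# 1e10 geq/ml stock -> 4e7 geq in total -> 4 microliters of stock.
vol = stock_volume_ul(2e4, 2000, 1e10)
print(vol)  # 4.0
```

The same function covers the PBMC condition (100 geq/cell) by changing the arguments.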
Patients and healthy subjects
Peripheral heparinized blood samples were obtained from 25 patients with chronic hepatitis B ( Table 2 ). All patients were negative for antibodies against hepatitis C, hepatitis D and human immunodeficiency virus, and did not receive treatment at time of blood donation. A matched control group comprised 21 healthy subjects. The study was approved by the local ethics committee, and all patients and controls in the study gave informed consent before blood donation.
Cell purification and culture
PBMC and pDC were isolated from peripheral heparinized blood samples or buffy coats from healthy blood donors using Ficoll density gradient centrifugation. All healthy controls gave written informed consent before blood donation and the institutional medical ethical committee gave a declaration of no objection for this study. pDC were isolated by CD19+ cell depletion, anti-Blood Dendritic Cell Antigen (BDCA-4)-PE and anti-PE MACS microbeads (Miltenyi Biotec, Bergisch Gladbach, Germany) or FACSorting (FACS Aria, Becton Dickinson, Alphen a/d Rijn, The Netherlands). Purity and viability (both >95%) were checked using anti-BDCA2-FITC (Miltenyi) and 7-AAD (eBioscience, San Diego, USA) by flow cytometry. Monocytes were purified with anti-CD14 MACS microbeads and MS columns (Miltenyi). 1×10 5 pDC, isolated by FACSorting, were co-cultured with NK cells, isolated from the same donor with an NK cell isolation kit (Miltenyi Biotec, Germany), at a 1:5 ratio in RPMI 1640 containing 10% FCS, penicillin/streptomycin, Hepes, IL-3 and CpG, either with or without HBV (200 geq/pDC), for 48h.
Flow cytometric analysis: Surface markers and signalling molecules
pDC were stained with anti-BDCA4-PE, anti-BDCA-2-FITC, anti-CD80-FITC (MAB104; Immunotech, Marseilles, France), anti-CD123-biotin (BD Pharmingen, USA), anti-HBsAg-FITC (Acris Antibodies GmbH, Hiddenhausen, Germany), anti-CD86-APC (2331), anti-HLA-DR-PerCP (243), anti-CD40-APC (5C3) and/or streptavidin-PerCP (all BD Biosciences) in PBS/1%FCS/0.02%NaN 3 . HBsAg binding experiments were performed in the presence or absence of 5μg/ml anti-BDCA2-biotin or anti-BDCA4-PE. NK cells were stained with antibodies directed against CD56 (MY31) and CD25 (2A3, all BD Bioscience). Corresponding isotype-matched control antibodies were used to determine background staining.
To determine intracellular signalling, PBMC (1×10 6 cells/250 μl) or pDC (5×10 3 cells/250 μl) were stimulated at 37°C. After 5–180 min, cells were fixed with 2% formaldehyde, washed with PBS/1%FCS/0.02%NaN 3 and incubated with 0.5% saponin. Antibodies against Phospho-S6 (pSer235/236, Bioké, Leiden, The Netherlands) and phospho-Interferon regulatory factor-7 (P-IRF7; pS477/pS479; K47-671, BD Biosciences), followed by goat-anti-rabbit-biotin (Dako, Glostrup, Denmark), were diluted in 0.5% saponin and added for 15 minutes. Finally, cells were stained with streptavidin-PerCP (BD Biosciences).
Cells were analysed by flow cytometry (FACS CantoII or FACScalibur) and FACS Diva or CellQuest Pro software (all Beckton Dickinson, Alphen a/d Rijn, The Netherlands).
Cytokine production
PBMC (1×10 6 cells/ml) or pDC (2×10 4 cells/ml) were resuspended in RPMI 1640 (Lonza, Basel, Switzerland) containing 10% heat-inactivated FCS (Hyclone, Logan UT), 100U/ml Penicillin, 100μg/ml Streptomycin (Breda, The Netherlands) and IL-3 (20ng/ml, Miltenyi). Neutralizing antibodies (5μg/ml) to IL-10 or IL-10 receptor (IL-10R; both from BD Pharmingen) were added where indicated. 24h culture supernatants were examined for TNF-α (eBioscience), IL-6 (Biosource International, Nivelles, Belgium), IL-8 (Biosource), IP-10 (Invitrogen) and IFNα (Bender Medsystems, Vienna, Austria) by ELISA. IFNγ production by NK cells was also determined by ELISA (eBioScience). The frequencies of IFNα and TNFα producing pDC were quantified by incubating cells during the last 3h of 5h cultures with 10μg/ml Brefeldin A (Sigma-Aldrich). Cells were fixed and permeabilized with Intraprep (Beckman Coulter, Miami, Florida, USA) and stained with anti-IFNα-FITC (Kordia, Leiden, The Netherlands), anti-TNFα-APC (Caltag-Medsystems, Buckingham, UK) and pDC-specific antibodies as described above.
Quantitative RT-PCR
Cells were lysed in Trizol (Sigma-Aldrich) and stored at −80°C until further use. RNA was precipitated with 75% ethanol and isolated using RNeasy mini columns (Qiagen, Venlo, The Netherlands). cDNA was synthesized with the iScript cDNA synthesis kit (BioRad Laboratories BV). Real-time PCR was performed with the following primer pairs: GAPDH_F 5′-AGG TCG GTG TGA ACG GAT TTG-3′ and GAPDH_R 5′-TGT AGA CCA TGT AGT TGA GGT CA-3′, IFNα2_F 5′-AAT GGC CTT GAC CTT TGC TT-3′ and IFNα2_R 5′-CAG CTT GAC TTG CAG CTG AG-3′, IFNα8_F 5′-TGG TGC TCA GCT ACA AGT CAT T-3′ and IFNα8_R 5′-TAC CCA GGC TGT GAG TCT GA-3′ under standard conditions (annealing temperature 63°C; 40 cycles; MyIQ iCycler, Biorad). IFNα gene expression was normalized to GAPDH and calculated using the ΔΔCT method [39].

Results
HBV does not activate pDC
HBV is a DNA virus that replicates via an RNA intermediate. In theory, HBV may thus be able to activate pDC via TLR7, TLR9 and/or cytosolic pattern recognition receptors. However, no evidence exists that HBV replicates in pDC, which makes the direct activation of pDC by HBV via TLR7 maybe not very likely. Known synthetic and viral TLR7 and TLR9 ligands including Influenza virus, HSV-1, CpG and to a lesser extent Lox induced pDC to produce IFNα ( Fig. 1A ). In contrast, HBV did not give rise to IFNα producing pDC ( Fig. 1 ). Similar data were observed for TNFα.
Though HBV did not induce cytokines, the virus might induce pDC maturation. Whereas HSV-1 and Influenza virus as well as the synthetic TLR7 and TLR9 ligands upregulated the expression of CD40, CD80, CD86, and to a minor extent HLA-DR, HBV only marginally increased the expression of HLA-DR without affecting the expression of CD40, CD80 and CD86 ( Table 1 ).
HBV impairs CpG-induced pDC maturation and function
To determine whether HBV mainly behaves as a stealth virus for pDC or has an active role in the regulation of pDC function, pDC were activated with HSV-1, Lox and Influenza virus either with or without HBV. HBV significantly inhibited HSV-induced CD40 expression. The most pronounced inhibitory effects of HBV were observed for CpG-induced pDC maturation, as demonstrated by diminished CD40, CD80 and CD86 upregulation (Table 1). As was found for the minor increase in HLA-DR expression upon HBV exposure in non-stimulated cultures, HBV slightly upregulated HLA-DR expression in stimulated pDC cultures; this was not significant for the individual stimuli, but HLA-DR was significantly elevated by HBV when combining all different stimuli.
HBV did not affect the IFNα production induced by these stimuli ( Fig. 2A ), except for CpG-induced IFNα, which was dose-dependently inhibited by HBV as assessed by intracellular flow cytometry after 5h ( Fig. 2A ) and ELISA after 24h ( Fig. 2AB ) and 48h (data not shown) of culture. Since HSV-1 can trigger TLR9, but also other innate immune receptors, and Lox and Influenza are known to trigger TLR7, the inhibitory effect of HBV seemed to be restricted to TLR9 triggering.
To assess whether HBV interferes with IFNα production at the transcriptional level, pDC were stimulated with CpG in the presence or absence of HBV, and analysed by intracellular flow cytometry for the presence of phosphorylated IRF7, the transcription factor involved in the transcription of IFNα genes. HBV reduced CpG-induced IRF7 phosphorylation ( Fig. 2C ) and in line also reduced IFNα2 and IFNα8 mRNA levels as detected by quantitative RT-PCR analysis ( Fig. 2DE ).
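The mRNA reductions in Fig. 2D–E follow from the GAPDH-normalized ΔΔCT calculation cited in the Methods. A minimal sketch of the 2^−ΔΔCT computation with invented Ct values (the actual Ct data are not given in the text):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method: target gene normalized to
    a reference gene (GAPDH here), relative to a control condition."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Invented Ct values: IFNa2 vs GAPDH, CpG + HBV sample against CpG-only control.
fold = fold_change(ct_target=28.0, ct_ref=18.0,
                   ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fold)  # 0.25 -> four-fold lower IFNa2 mRNA in the HBV-exposed sample
```

Because each Ct unit represents one PCR doubling, a ΔΔCT of +2 against the control corresponds to a four-fold reduction in transcript level.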
HBV inhibits cytokine production and NK cell activation by pDC
Next to IFNα, pDC produce more cytokines involved in anti-viral immunity. Consistent with previous studies, CpG stimulation resulted in the induction of TNFα, IP-10 and IL-6 (Fig. 3A–C), but not IL-8 (Fig. 3D). HBV inhibited TNFα, IP-10 and IL-6 production, whereas IL-8 production was not significantly affected (Fig. 3A–D). In line with cell culture-derived HBV, patient-derived HBV particles also inhibited CpG-induced production of IFNα (Fig. 3E), TNFα (Fig. 3F) and IL-6 (Fig. 3G) by pDC.
Besides the direct antiviral effects of the cytokines, pDC also have indirect anti-viral activities via the activation of NK cells [12]. Addition of CpG-activated pDC to NK cells resulted in strong NK cell activation, as demonstrated by the upregulation of CD69 and CD25 and the induction of IFNγ production (Fig. 3HI; data not shown). Although HBV did not affect pDC-induced upregulation of CD25 on NK cells (Fig. 3H), NK cell-derived IFNγ production induced by pDC was significantly decreased (Fig. 3I). Supernatants derived from non-transfected HepG2 cells served as a control and did not influence pDC function (data not shown).
Monocytes support immune regulatory effect of HBV on pDC
Although, on a per-cell basis, more HBV particles seemed to be required for significant immune regulatory effects on purified pDC than on pDC present in PBMC cultures, the number of HBV genome equivalents available per pDC in pure pDC cultures was in fact 10–50 times lower than per pDC within total PBMC cultures, in which pDC comprise only ±0.3% of cells (Fig. 2B, 4A). Nevertheless, the relative inhibition of CpG-induced IFNα-producing pDC was more pronounced when whole PBMC cultures were exposed to HBV (Fig. 4B). This was not simply due to positive selection on BDCA4 as suggested before [17], since negative pDC selection revealed similar results (data not shown). A role for IL-10 produced by other cell types as an explanation for the stronger inhibition found in total PBMC cultures could also be excluded, since neutralizing IL-10 or its receptor changed neither IFNα production nor TNFα production by pDC (Fig. 4C; data not shown).
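The 10–50-fold estimate can be checked from the stated doses (100 geq/cell for PBMC and 2000 geq/cell for purified pDC cultures, per the Methods) and the ±0.3% pDC frequency, under the simplifying assumption that all virus in a PBMC culture is equally available to every cell:

```python
# geq nominally available per pDC when whole PBMC are dosed at 100 geq/cell
# and pDC make up ~0.3% of the culture:
geq_per_pdc_in_pbmc = 100 / 0.003   # ~33,333 geq per pDC
geq_per_pdc_pure = 2000             # direct dose in purified pDC cultures

ratio = geq_per_pdc_in_pbmc / geq_per_pdc_pure
print(round(ratio, 1))  # ~16.7, within the stated 10-50x range
```

So despite the higher nominal geq/cell dose, purified pDC actually see roughly an order of magnitude less virus per pDC than pDC embedded in PBMC cultures.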
To identify the cell type responsible for the enhanced IFNα production by pDC in total PBMC cultures, purified pDC were compared with purified pDC supplemented with CD3 + , CD14 + , CD19 + , or CD56 + cells. Only the addition of monocytes to pDC enhanced the frequency of IFNα-producing pDC ( Fig. 4D ; data not shown). This additional monocyte-mediated IFNα production by pDC was completely abolished by HBV ( Fig. 4D ). Monocytes also enhanced the frequency of TNFα-producing pDC upon stimulation with CpG, but to a lesser extent than for IFNα ( Fig. 4E ). This resulted in a significantly enhanced inhibitory effect of HBV on IFNα-producing pDC in the presence of monocytes (33.4±4.8% inhibition by HBV for pDC vs 64.6±12.2% for pDC+mono), but a relatively unaltered rate of inhibition of TNFα-producing pDC in the presence of monocytes (58.2±12.9% HBV-induced inhibition for pDC, 63.1±11.0% for pDC+mono). These data indicate that the supporting function of monocytes in TLR9-induced cytokine production, especially of IFNα, is abrogated by HBV.
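The inhibition percentages quoted above follow from a simple percent-reduction of the frequency of cytokine-producing pDC relative to the untreated control. A minimal sketch of that arithmetic (the example frequencies are hypothetical, not the measured data):

```python
def percent_inhibition(control_freq: float, treated_freq: float) -> float:
    """Relative inhibition (%) of cytokine-producing pDC by HBV:
    the percent reduction of the treated frequency versus control."""
    if control_freq <= 0:
        raise ValueError("control frequency must be positive")
    return 100.0 * (control_freq - treated_freq) / control_freq

# Hypothetical example: 40% IFNa+ pDC without HBV vs 14% with HBV
print(round(percent_inhibition(40.0, 14.0), 1))  # 65.0
```

The same calculation is applied per donor, after which the mean ± SD across donors yields summary values such as 64.6±12.2%.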
HBeAg and HBsAg, but not HBcAg, diminish pDC function
In addition to whole virus, HBV-derived proteins present in patients' circulation, including HBeAg and HBsAg, may also affect pDC function [18] . Therefore HBsAg, HBcAg, and HBeAg were investigated for their effect on IFNα production. Exposure of pDC to the viral proteins alone did not induce IFNα production (data not shown). Interestingly, dose-response studies revealed that HBeAg and especially HBsAg, but not HBcAg, dose-dependently reduced CpG-induced IFNα production by pDC ( Fig. 5A ).
Similar to the effect of whole virus, HBsAg inhibited CpG-induced TNFα, IP-10 and IL-6 production without significantly altering the production of IL-8 ( Fig. 5B–E ). HBeAg also reduced TNFα production, but did not significantly influence the secretion of the other cytokines. HBcAg did not modulate pDC function ( Fig. 5 ). Of note, neither HBV nor its viral proteins reduced pDC viability, as determined by flow cytometric analysis of intracellular active caspase-3 and of Annexin-V and 7AAD binding (data not shown). In addition, performing the experiments in the presence of polymyxin B, to neutralize possible non-detectable levels of contaminating endotoxins, did not affect the regulatory effects of HBV or the viral proteins (data not shown).
HBeAg-positive chronic HBV is associated with impaired CpG-induced pDC function
To investigate whether pDC circulating in chronic HBV patients present a functional profile similar to that observed after in vitro exposure to HBV or its viral proteins, CpG- and Lox-induced IFNα production by pDC of chronic HBV patients was compared to that by pDC of age- and gender-matched healthy controls ( Table 2 ). Impaired IFNα production by pDC from chronic HBV patients has been described before (reviewed in Woltman et al. [13] ), but a comparison between TLR7 and TLR9 stimulation is lacking. Both CpG and Lox stimulation significantly induced IFNα production as measured by ELISA and intracellular flow cytometry ( Fig. 6A–D ), albeit for Lox to a much lesser extent. Interestingly, only the CpG-induced IFNα production was significantly reduced in HBV patients compared to healthy controls, as was found for the in vitro effects of HBV. Additionally, when comparing HBeAg-positive with HBeAg-negative patients, we found that especially HBeAg-positive patients displayed a defect in CpG-induced IFNα production ( Fig. 6B, D ). This more pronounced impairment of the IFNα-producing capacity of pDC from HBeAg-positive patients may reflect the immunosuppressive effect of HBeAg, as demonstrated in the in vitro experiments, but may also be a more indirect effect of the significantly higher viral load and/or ALT levels in HBeAg-positive patients ( Table 2 ). When investigating a possible relation between the IFNα-producing capacity and viral load or ALT, we found that serum HBV-DNA levels did not correlate with IFNα production (data not shown), whereas serum ALT levels correlated significantly with IFNα ( Fig. 6E ).
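The reported relationship between serum ALT and the IFNα-producing capacity can be quantified with a standard correlation coefficient. A pure-Python Pearson sketch on hypothetical paired values (the study does not state which coefficient was used, so this is illustrative only and the numbers are not the patient data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ALT (U/L) and IFNa (pg/ml) pairs illustrating an
# inverse relation, as described for the HBeAg-positive patients.
alt = [20, 35, 50, 80, 120]
ifna = [900, 700, 600, 350, 150]
print(round(pearson_r(alt, ifna), 2))
```

A strongly negative coefficient (close to -1) would correspond to the inverse correlation between ALT and pDC IFNα production reported in the discussion.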
HBV/HBsAg diminishes S6 phosphorylation
To investigate the specific interference of HBV with CpG-induced pDC function in more detail, intracellular TLR7 and TLR9 signalling was examined. It was recently found that TLR/Myeloid differentiation primary response protein 88 (MyD88)-induced IFNα production by pDC requires activation of the mammalian target of rapamycin (mTOR) [19] , resulting in phosphorylation of IRF7 and subsequent IFNα gene transcription. To assess whether HBV interferes with mTOR-induced IRF7 phosphorylation, pDC were stimulated with or without CpG or Lox in the presence or absence of HBV, and analysed for the expression of phosphorylated S6, a downstream target of mTOR [20] . Baseline S6 phosphorylation was low and hardly affected by HBV ( Fig. 7A ), HBsAg or HBcAg. Only HBeAg slightly increased S6 phosphorylation, to an almost negligible extent (data not shown). As expected, CpG strongly increased S6 phosphorylation ( Fig. 7A ), which was significantly inhibited by HBV and HBsAg, but not by HBcAg or HBeAg ( Fig. 7A, B ). TLR7 triggering also induced phosphorylation of S6, but in contrast to TLR9 stimulation, neither HBV nor HBsAg significantly affected Lox-induced S6 phosphorylation ( Fig. 7C ).
TLR9 triggering increases pDC-HBsAg interaction
The lack of inhibitory effects of HBV and HBsAg on Lox-induced S6 phosphorylation is fully in line with the absence of immune regulatory effects of HBV and HBsAg on Lox-induced pDC function. However, if HBV actively induced a regulatory signalling molecule capable of inhibiting TLR9 signalling, e.g. a SOCS protein, such inhibitory proteins would be expected to block both TLR7 and TLR9 signalling, since these signalling pathways are quite similar.
Therefore, the specific inhibition of TLR9-induced function may be related to a specific interaction of HBV and HBsAg with TLR9-stimulated pDC rather than to the induction of a specific immune regulatory protein. The interaction between HBsAg, either as part of the viral envelope or as a free circulating protein, and pDC was therefore examined by culturing pDC with or without CpG or Lox, either in the presence or absence of HBsAg, for 4 h. In addition, cells were stimulated with CpG or Lox for 2 h at 37°C, then put on ice, after which HBsAg was added. Subsequently, pDC surface binding of HBsAg was determined. In both types of experiments, no or only very low binding of HBsAg was observed in cultures with medium alone or Lox, whereas pDC stimulated with CpG strongly bound the HBV envelope protein ( Fig. 8A ). Hence, it is tempting to speculate that the preferential binding of HBsAg to CpG-stimulated pDC is responsible for the ability of the virus to interfere especially with CpG-induced pDC function.
Recently, BDCA-2 was proposed as a receptor involved in pDC-HBsAg interaction, leading to active inhibition of pDC function [14] . However, in contrast to HBV ( Fig. 2A ) and HBsAg (data not shown), BDCA-2 crosslinking also inhibited HSV-induced pDC function ( Fig. 8B ), as described previously [21] . In addition, the negligible interaction between HBsAg and non-stimulated pDC and the increased binding of HBsAg to CpG-activated pDC contrast with the high expression of BDCA-2 on non-stimulated cells, which is even downregulated to a limited extent upon activation ( Fig. 8C ) [21] . In this experimental setup, HBsAg binding was not significantly inhibited by the addition of anti-BDCA-2 antibodies ( Fig. 8D ). These data suggest that a CpG-induced co-factor, but not BDCA-2, is involved in the increased binding of HBsAg to pDC.
pDC play a central role in anti-viral immunity due to their rapid and profound release of type I IFN upon viral recognition. We demonstrated that HBV does not activate pDC, as assessed by cytokine production and co-stimulatory molecule expression. Moreover, HBV interfered with TLR9-induced pDC function, resulting in dose-dependent inhibition of cytokine production and pDC maturation. HBsAg, and to a limited extent HBeAg, used at concentrations found in the circulation of hepatitis B patients [18] , showed immune regulatory effects similar to those of HBV, demonstrating that the immune regulatory effects of HBV do not require active infection.
The lack of a detectable initial type I IFN response in patients upon HBV infection is so far not understood [5] , [6] . With the exception of a marginally elevated expression of HLA-DR in the presence of HBV, which may lead to enhanced DC-T cell interactions and viral immunity, the overall inability of HBV to activate pDC may underlie this defective innate immune response. Since pDC possess several pattern recognition receptors that are mainly located intracellularly, the lack of pDC activation by HBV could be due to HBV not being taken up by pDC. This does not rule out the possibility that HBV indirectly activates immune receptors in/on pDC, e.g. through uptake of infected hepatocytes or crosstalk with other immune cells.
The finding that HBV-DNA could be detected in peripheral blood pDC of a subset of chronic HBV patients [15] , [16] indicates a direct interaction between pDC and HBV in vivo . The lack of pDC activation upon exposure to HBV supports the hypothesis that HBV behaves as a “stealth” virus and does not induce IFN-related genes during acute infection [5] . However, our data also support the idea that HBV possesses immunosuppressive strategies to evade the initial response that could be elicited by the innate immune system of the host. The decreased CpG-induced IFNα production by pDC exposed to HBsAg is in line with a recent study [14] . Here we showed that although HBsAg does not diminish all pDC functions, HBsAg and HBV inhibit more than IFNα production alone. Not only the direct anti-viral activities, i.e. the production of cytokines, but also other important, more indirect anti-viral immune parameters were influenced by HBV. pDC maturation, important for pDC-T cell crosstalk, was strongly impaired, and pDC-induced NK cell function was also significantly diminished. The impaired crosstalk between different immune cells may partially explain the failing induction of effective anti-viral immunity. Whether the impaired DC-NK cell crosstalk could explain the defective IFNγ production by NK cells circulating in patients with chronic hepatitis B [22] , [23] remains unknown.
Of relevance, besides HBsAg and HBV, HBeAg also showed immune regulatory effects on pDC. Immunosuppressive effects of the whole virion, as well as of HBsAg and HBeAg, have also been reported for TLR3- and TLR4-mediated innate immunity of murine hepatocytes and non-parenchymal liver cells [24] , [25] . The lack of direct pDC activation, combined with the inhibitory effect of HBV on the innate immune function of pDC as well as of other intrahepatic cells, may contribute to the enhanced risk of viral persistence upon infection with HBV compared to other viruses such as influenza. Nevertheless, infection with HBV in adult life most often leads to self-limiting acute hepatitis B. This could be explained by direct and/or indirect immune stimulatory effects of HBV on other cells of the immune system, as recently demonstrated for Kupffer cells [26] . It is likely that in most HBV-infected adults these immune stimulatory processes overrule the inhibitory effects of the virus, leading to adequate anti-HBV specific immunity.
The inhibitory effect of HBsAg completely overlapped with that of HBV, as demonstrated by inhibition of S6 and IRF7 phosphorylation and of the production of IFNα, TNFα, IL-6 and IP-10. HBeAg impaired pDC function to a much lesser extent and seemingly via another, yet unknown, intracellular signalling mechanism. At what level HBV and HBsAg interfere with mTOR-induced S6 phosphorylation remains to be elucidated. Several putative binding factors have been described for HBsAg, but their exact role in HBV attachment remains unclear [27] – [29] . The increased binding of HBsAg to CpG-stimulated pDC suggests the involvement of a TLR9-induced co-factor, either membrane-bound or soluble, that scavenges HBV/HBsAg. Whether TLR9 triggering occurs in HBV-infected individuals is not known. It is tempting to speculate that pDC could be indirectly activated by HBV via interaction with viable or dead HBV-infected cells that contain viral DNA able to trigger TLR9, but other HBV-related or even unrelated activation signals may also induce the expression of this scavenging co-factor. Differential expression of this factor, e.g. related to different disease states or host polymorphisms, may influence pDC function and the regulation of virus-specific immunity.
Since the HBV surface antigens are glycoproteins, the involvement of a C-type lectin seems plausible. Crosslinking of the pDC-specific C-type lectin DC immunoreceptor (DCIR) resulted in a specific TLR9-, but not TLR7-mediated inhibition of TNFα and IFNα [30] , which is compatible with the immune modulatory effects observed for HBV. Nevertheless, these C-type lectins are often highly expressed on non-stimulated pDC and downregulated by TLR7 and TLR9 triggering to a similar extent as observed for BDCA-2 [21] , which is in contrast to the observed increased interaction between HBV and pDC upon CpG stimulation. Altogether, we cannot confirm that binding of HBsAg to BDCA-2 explains the inhibitory effect of HBsAg on pDC function, as suggested before [14] , which indicates that at least one other receptor is also involved. Several other viruses, including HCV and HIV, have been shown to block TLR9-induced pDC function, but not TLR7-mediated activation [31] . These viruses may crosslink cellular/soluble receptors similar to those used by HBV, but whether they also preferentially bind to TLR9/CpG-activated pDC has not been documented.
In line with our in vitro observations that HBV significantly interferes with CpG-induced but hardly affects Lox-induced pDC function, chronic HBV patients also displayed a significant defect in CpG-induced, but not Lox-induced, IFNα production by pDC compared to healthy controls. Impaired pDC function, mainly investigated upon exposure to TLR9 ligands, has been reported in several other studies (reviewed in Woltman et al. [13] ) and has recently been associated with decreased TLR9 expression levels in chronic HBV [32] . Whether decreased TLR9 expression levels explain the specific impairment in CpG-induced IFNα production demonstrated in the present study remains to be elucidated. Previously, the loss of serum HBeAg during anti-viral treatment of patients has been suggested to be responsible for partial restoration of IFNα production by pDC [33] , [34] . In line with this, we observed that especially HBeAg-positive chronic HBV patients displayed impaired pDC function compared to HBeAg-negative patients and healthy controls. These findings may be ascribed to the inhibitory effects of HBeAg on IFNα and TNFα production by pDC and at least fit with the idea that HBeAg possesses immune regulatory properties [35] . However, the HBeAg-positive patients showed higher serum ALT levels than the HBeAg-negative patients, and serum ALT levels inversely correlated with pDC function, which indicates that regulatory mechanisms other than the virus itself may also influence pDC function. Despite a significant inhibitory effect of HBV on pDC, both in vitro and in vivo , their function is only partially diminished, which prevents generalized immune suppression in HBV-infected individuals.
As also discussed elsewhere [13] , the interaction between HBV and a subset of the total DC pool may have profound effects on the induction of specific anti-HBV immunity, but the functionality of the total DC pool and the immunocompetence of the patient seem to be retained, albeit to a somewhat lower extent than in healthy individuals.
In addition to the direct effect of HBV on pDC, HBV also seems able to influence pDC function indirectly by interfering with monocyte-pDC interaction. Whereas monocytes enhanced cytokine production by pDC under control conditions, they were unable to do so in the presence of HBV. Since, in contrast to HCV, we neither detected HBV-induced TNFα or IL-10 in monocytes (data not shown) nor restored HBV-induced regulation by IL-10 neutralization, it is tempting to speculate that HBV abrogates this monocyte-supported pDC function by interfering with monocyte-pDC interaction rather than by active induction of an immune regulatory factor in monocytes. Of note, interactions between HBV/HBsAg and monocytes have been reported [36] , but the mechanism underlying monocyte-mediated regulation of pDC function is not known.
Since a considerable number of patients chronically infected with HBV do not mount an adequate anti-HBV response upon treatment with standard immune modulatory therapy, novel anti-viral strategies are needed. For the treatment of chronic HCV, promising results were obtained with the TLR9 agonist CpG10101 [37] . The finding that HBV strongly interacts with CpG-activated cells suggests that the virus may blunt the anti-viral effects of such immune stimulating agents. Therefore, the data presented here not only provide new insight into the mechanism by which HBV evades anti-viral immunity, but may also aid the development of effective immunomodulatory therapies for the treatment of chronic HBV.
Chronic hepatitis B virus (HBV) infection is caused by inadequate anti-viral immunity. Activation of plasmacytoid dendritic cells (pDC) leading to IFNα production is important for effective anti-viral immunity. HBV infection fails to induce IFNα in animal models and patients, and chronic HBV patients display impaired IFNα production by pDC. Therefore, HBV and HBV-derived proteins were examined for their effect on human pDC in vitro . In addition, the in vitro findings were compared to the function of pDC derived from chronic HBV patients ex vivo . In contrast to other viruses, HBV did not activate pDC. Moreover, HBV and HBsAg abrogated CpG-A/TLR9-induced, but not Loxoribine/TLR7-induced, mTOR-mediated S6 phosphorylation, subsequent IRF7 phosphorylation and IFNα gene transcription. HBV/HBsAg also diminished the upregulation of co-stimulatory molecules, the production of TNFα, IP-10 and IL-6, and pDC-induced NK cell function, whereas TLR7-induced pDC function was hardly affected. In line with this, HBsAg preferentially bound to TLR9-triggered pDC, demonstrating that once pDC are able to bind HBV/HBsAg, the virus exerts its immune regulatory effect. HBV not only interfered with pDC function directly, but also indirectly by interfering with monocyte-pDC interaction. HBeAg also diminished pDC function to a certain extent, but via another, unknown mechanism. Interestingly, patients with HBeAg-positive chronic hepatitis B displayed impaired CpG-induced IFNα production by pDC without significant alterations in Loxoribine-induced pDC function compared to HBeAg-negative patients and healthy controls. The lack of activation and the active inhibition of pDC by HBV may both contribute to HBV persistence. The finding that the interaction between pDC and HBV may change upon activation may aid the identification of a scavenging receptor supporting the immunosuppressive effects of HBV and also the design of novel treatment strategies for chronic HBV.
| The authors would like to acknowledge G.M.G.M. Verjans and G.F. Rimmelzwaan (both Dept. of Virology, Erasmus MC) for providing HSV-1 and Influenza virus, respectively. The authors also thank D. Turgut and E.T.T.L. Tjwa for technical assistance and A. Boonstra (all from Dept. of Gastroenterology and Hepatology, Erasmus MC) for useful discussions and critically reading the manuscript. | CC BY | no | 2022-01-13 08:14:22 | PLoS One. 2011 Jan 5; 6(1):e15324 | oa_package/a3/28/PMC3016396.tar.gz |
PMC3016397 | 21246042 | Introduction
Candida albicans is the most important human fungal pathogen, causing various diseases from superficial mucosal infections to life-threatening systemic disorders [1] – [3] . The number of clinical C. albicans infections worldwide has risen considerably in recent years, and the incidence of resistance to traditional antifungal therapies is also rising. Many existing antifungal therapies have unfortunate clinical side effects; therefore, strategies are needed to identify new targets for antifungal therapy.
In the past few years, it has become evident that apoptosis may occur not only in multicellular but also in unicellular organisms, such as fungi. The induction of cell apoptosis is considered a new and promising strategy for antifungal therapy. It has been reported that Saccharomyces cerevisiae dies in an apoptotic manner in response to weak acid stress, oxidative stress, salt stress, and UV irradiation [4] – [7] . Ultrastructural and biochemical changes that are characteristic of apoptosis have also been reported in pathogenic fungi. C. albicans can be triggered to undergo an apoptotic cell death response when exposed to environmental stress such as H 2 O 2 , amphotericin B (AmB) or intracellular acidification. However, the mechanism of C. albicans apoptosis has not been fully elucidated. The Ras–cAMP–PKA pathway was found to be involved in the apoptosis of C. albicans . Mutations that blocked Ras–cAMP–PKA signaling ( ras1Δ, cdc35Δ, tpk1Δ, and tpk2Δ ) suppressed or delayed the apoptotic response, whereas mutations that stimulated signaling ( RAS1 val13 and pde2Δ ) accelerated the rate of entry into apoptosis [8] – [10] . We recently found that CaMCA1 , a homologue of the Saccharomyces cerevisiae metacaspase gene YCA1 , is involved in oxidative stress-induced apoptosis in C. albicans [11] .
Trehalose, a non-reducing disaccharide, plays diverse roles, from energy source to stress protectant, and this sugar is found in bacteria, fungi, plants, and invertebrates but not in mammals [12] . In yeast, trehalose acts both as a main reserve of carbohydrates and as a cellular protector against a variety of nutritional and/or environmental stress challenges (oxidative, heat shock, osmotic and/or saline stress, xenobiotics etc.), increasing cell resistance to such insults [13] . The mechanism of trehalose protection is an active area of research that includes studies of the interaction of sugars with plasma membranes, the effects on cell osmotic responses, and the unique physicochemical properties of trehalose [14] . In yeast, trehalose is synthesized by a large enzyme complex comprising the two catalytic activities of trehalose biosynthesis. Trehalose-6-phosphate (Tre6P) synthase, encoded by TPS1 , synthesizes Tre6P from glucose-6-phosphate and UDP-glucose. Tre6P is then hydrolyzed into trehalose by Tre6P phosphatase, encoded by TPS2 [15] , [16] . In C. albicans, tps1/tps1 mutants are defective not only in Tre6P synthesis but also in growth on glucose and related rapidly fermented sugars, and show attenuated virulence [17] , [18] . Previous work on C. albicans pointed to a specific role of trehalose in cellular protection against oxidative stress. A tps1/tps1 mutant was shown to be deficient in trehalose synthesis and was extremely sensitive to H 2 O 2 exposure [19] . However, the underlying mechanism by which trehalose protects C. albicans from these injuries remains undefined.
Ca 2+ is an important second messenger in developmental and stress signaling pathways. In fungi, Ca 2+ is responsible for the regulation of several processes, including cation homeostasis, morphogenesis, virulence traits, and antifungal drug resistance [20] – [23] . A rise in cytoplasmic Ca 2+ has been found to be responsible for pheromone-induced S. cerevisiae apoptosis [24] . Fungicidal activity of amiodarone is also tightly coupled to calcium influx [25] . A rise in cytosolic calcium activates the calcium-dependent signaling pathway via the phosphatase, calcineurin (consisting of a catalytic subunit A encoded by CMP1 and a regulatory subunit B encoded by CNB1 ) and the calcineurin-dependent transcription factor, Crz1p. In C. albicans , Ca 2+ and its downstream calcineurin/Crz1p pathway are involved in azole resistance, cell morphogenesis and virulence [26] – [29] .
In this study, we show that lack of trehalose accelerates H 2 O 2 -induced C. albicans apoptosis. Furthermore, this is linked to an increase in Ca 2+ concentration and caspase activity. Addition or depletion of Ca 2+ affected cell death and caspase activity. Moreover, we investigated the role of Ca 2+ signaling in C. albicans apoptosis, and found that both calcineurin-deleted and Crz1p-deleted cells showed decreased cell death and caspase activity compared to wild-type cells. Expression of CaMCA1 in calcineurin-deleted or Crz1p-deleted cells restored the sensitivity to H 2 O 2 .
Media and Compounds
Yeast media used were YPD (1% yeast extract, 2% peptone, and 2% glucose) and SD [0.67% (w/v) Difco yeast nitrogen base without amino acids]. SD medium was supplemented with a complete synthetic mix containing all the amino acids and bases. For prototrophic selection of yeast, the relevant drop-out mixes were used. Because the capacity of the trehalose-deficient mutant tps1 / tps1 to grow on exogenous glucose and fructose as carbon source is seriously compromised, some experiments were carried out in YPgal medium (1% yeast extract, 2% peptone, and 2% galactose) or SDgal [0.67% (w/v) Difco yeast nitrogen base without amino acids, 2% galactose]. Escherichia coli strain DH5α and LB (0.5% yeast extract, 1% peptone, and 1% NaCl) medium were used for transformation and plasmid DNA preparation. Fluo-3/AM, CaCl 2 , A23187, BAPTA, EGTA, cyclosporin A (Sigma, U.S.A.) were dissolved in either medium or dimethyl sulfoxide (DMSO) and then diluted to the appropriate working concentration.
Plasmids and Strain Construction
The strains ( Table 2 ) were cultivated at 30°C under constant shaking (200 rpm) or incubation. To reintroduce TPS1 into the tps1Δ mutant, the ORF of TPS1 was amplified (using upstream primer 5′ ggatccatggttcaaggaaaagtc 3′ and downstream primer 5′ ctgcagctagtccctcaaactcttttg 3′ ) with Pyrobest DNA polymerase (TaKaRa Biotechnology, Dalian, P.R. China). After purification, the BamHI-PstI-digested PCR fragment was cloned into the integrative expression vector pCaEXP ( Table 3 ) to generate the recombinant plasmid pCaEXP-TPS1 [41] . After sequencing, pCaEXP-TPS1 was linearized and used to transform tps1Δ cells, and transformants were selected on SD medium lacking uridine, methionine and cysteine. As controls, the empty plasmid pCaEXP was transformed into CAI4 and tps1Δ cells to produce CAI4-EXP and tps1Δ -EXP, respectively. The same expression vector and transformation method were used for reintroducing the CMP1 (using upstream primer 5′ ggatccatgtcaggaaatactgttcaa 3′ and downstream primer 5′ ctgcagttaactttgagataatcttct 3′ ) and CRZ1 (using upstream primer 5′ ggatccatgtctaacaatcctcatccc 3′ and downstream primer 5′ ctgcagctaagtaatttcaacaccact 3′ ) genes into their corresponding mutants, and for introducing CaMCA1 (using upstream primer 5′ ggatccatgtttccaggacaaggtag 3′ and downstream primer 5′ ctgcagttaaaaaataaattgcaagtt 3′ ) into the cmp1Δ and crz1Δ mutants and CAI4. The expression of TPS1 , CMP1 , CRZ1 and CaMCA1 in their host cells was confirmed by real-time RT-PCR (data not shown).
Cell Treatment and Apoptosis Measurement
Yeast cells grown to early exponential phase at 30°C were exposed to different concentrations of H 2 O 2 for the required time (range 0–3 hours) and then harvested for apoptosis measurement. A terminal deoxynucleotidyltransferase-mediated dUTP-biotin nick end labeling (TUNEL) assay was performed to confirm the occurrence of apoptosis [4] . C. albicans cells were washed twice with PBS and fixed with a solution of 3.6% paraformaldehyde in PBS for 1 hour at 20°C. Cells were rinsed twice with PBS and then incubated with permeabilization solution for 2 minutes on ice. The cells were rinsed in PBS and labeled, using the label and enzyme solutions from an in situ cell death detection kit, fluorescein (Roche Applied Sciences, Mannheim, Germany), with appropriate controls labeled only with the label solution. The cells were incubated for 1 hour at 37°C in a humidified atmosphere in the dark and rinsed in PBS. The staining of the cells was observed by fluorescence microscopy. Alternatively, the number of TUNEL-positive cells was quantified using a BD FACSCalibur flow cytometer with excitation and emission wavelength settings at 488 and 520 nm, respectively.
Assay of the Intracellular Content of Trehalose
For analysis of intracellular trehalose, cells grown to early exponential phase at 30°C were exposed to 1 mM H 2 O 2 for 3 hours. At the indicated times, aliquots of cells (about 5×10 8 ) were taken, immediately centrifuged and washed with cold distilled water. Samples were microwaved (700 W) for 3×60 seconds with 30-second intervals between each round, after which 1 ml of distilled water was used to extract the trehalose for 1 hour. After centrifugation at 15,000×g for 10 minutes, the trehalose in the supernatants was analyzed by HPLC-MS with a detection limit of 1 ng. An HPLC system (Agilent 1100, Wilmington, Germany) equipped with a G1946 mass spectrometer was used in the analysis. The operating conditions were as follows: extracts were analyzed after separation on an Agilent Zorbax NH2 column (4.6 mm×250 mm, 5 μm) at a flow rate of 1.0 ml/min. The mobile phase consisted of methanol∶water 85∶15 (v/v). The HPLC eluant from the DAD detector was introduced into the mass spectrometer via a 1∶3 split. The column temperature was 25°C. A quadrupole mass spectrometer equipped with an ESI interface was used to obtain mass spectra, which were then examined by SIM in negative mode. The nebulizing gas was at 40 psi, and the drying gas temperature was 350°C. The fragmentor was set to 70 V, and the capillary voltage was 3.5 kV. The cell dry weight was determined as follows: another sample of the same volume of the corresponding cell suspension was filtered through pre-weighed filters (0.22 μm pore size). After washing with PBS, the filters were dried at 37°C for 48 h and then weighed. The trehalose content was expressed as nmol/mg dry weight.
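Converting the HPLC-MS readout (ng trehalose quantified in the analysed fraction) into the reported nmol/mg dry weight only involves scaling up to the whole 1 ml extract and dividing by the molar mass of trehalose (342.3 g/mol, anhydrous). A minimal sketch of that conversion; the injection/split volumes in the example are hypothetical, not taken from the protocol:

```python
TREHALOSE_MW = 342.3  # g/mol, anhydrous trehalose

def trehalose_nmol_per_mg(ng_in_fraction: float, fraction_ml: float,
                          extract_ml: float, dry_weight_mg: float) -> float:
    """nmol trehalose per mg cell dry weight from an HPLC-MS measurement.

    ng_in_fraction: ng quantified in the analysed fraction
    fraction_ml / extract_ml: scale the fraction up to the whole extract
    dry_weight_mg: dry weight of the cells in the extracted sample
    """
    total_ng = ng_in_fraction * (extract_ml / fraction_ml)
    total_nmol = total_ng / TREHALOSE_MW  # ng divided by g/mol gives nmol
    return total_nmol / dry_weight_mg

# Hypothetical numbers: 500 ng in a 20 ul fraction of the 1 ml extract,
# from 5 mg dry weight of cells
print(round(trehalose_nmol_per_mg(500.0, 0.02, 1.0, 5.0), 2))  # 14.61
```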
Measurement of ROS Levels
Intracellular levels of ROS were measured with DCFH-DA (Molecular Probes, U.S.A.). Briefly, cultured cells were collected by centrifugation and washed three times with PBS. Subsequently, the cells were adjusted to 2×10 7 cells/ml. After being incubated with 20 μg/ml DCFH-DA for 30 minutes at 30°C, the cells were exposed to H 2 O 2 and incubated at 30°C with constant shaking (200 rpm). At specified intervals, cell suspensions were harvested and examined by fluorescence microscopy or transferred to the wells of a flat-bottom microplate (BMG Microplate, 96 well, blank) to measure fluorescence intensity on the POLARstar Galaxy (BMG Labtech, Offenburg, Germany) with excitation at 485 nm and emission at 520 nm.
Ca 2+ Detection
Cells were loaded with 5 μM Fluo-3/AM for 30 minutes at 37°C. Ca 2+ levels were determined by fluorescence microscopy. Alternatively, fluorescence intensity values were determined on the POLARstar Galaxy (BMG Labtech, Offenburg, Germany) with excitation at 488 nm and emission at 525 nm.
Assessment of Caspase Activity
Caspase activity was detected by staining with D 2 R (CaspSCREEN Flow Cytometric Apoptosis Detection Kit, BioVision, U.S.A.) [10] , [11] , [41] . According to the manufacturer's instructions, cells were incubated in D 2 R incubation buffer at 30°C for 45 minutes before being viewed and counted under a fluorescence microscope with excitation at 488 nm and emission at 530 nm.
Real-time RT-PCR
RNA isolation and real-time RT-PCR were performed as described previously [42] . The isolated RNA was resuspended in diethyl pyrocarbonate-treated water. The OD 260 and OD 280 were measured, and the integrity of the RNA was visualized by subjecting 2 to 5 μl of the samples to electrophoresis through a 1% agarose-MOPS gel. First-strand cDNAs were synthesized from 3 μg of total RNA in a 60 μl reaction volume using the cDNA synthesis kit for RT-PCR (TaKaRa Biotechnology, Dalian, P.R. China) in accordance with the manufacturer's instructions. Triplicate independent quantitative real-time PCRs were performed using the LightCycler System (Roche Diagnostics GmbH, Mannheim, Germany). SYBR Green I (TaKaRa) was used to visualize and monitor the amplified product in real time according to the manufacturer's protocol. CaMCA1 was amplified with the forward primer 5′-TATAATAGACCTTCTGGAC-3′ and the reverse primer 5′- TTGGTGGACGAGAATAATG-3′ .
The PCR protocol consisted of a denaturation program (95°C for 10 seconds), 40 cycles of an amplification and quantification program (95°C for 10 seconds, 60°C for 20 seconds, 72°C for 15 seconds with a single fluorescence measurement), a melting curve program (60–95°C with a heating rate of 0.1°C per second and continuous fluorescence measurement) and finally a cooling step to 40°C. A standard curve for each primer set was generated with 1∶10, 1∶25, 1∶50, 1∶100, 1∶250 and 1∶500 dilutions of the cDNAs. The slopes of the standard curves were within 10% of 100% efficiency. The change in fluorescence of SYBR Green I dye in every cycle was monitored by the LightCycler system software, and the threshold cycle (C T ) above background for each reaction was calculated. The C T value of ACT1 (amplified with the forward primer 5′-CAACAAGGACAATACAATAG-3′ and the reverse primer 5′-GTTGGTGGACGAGAATAATG-3′ ) was subtracted from that of the tested genes to obtain a ΔC T value. The ΔC T value of an arbitrary calibrator was subtracted from the ΔC T value of each sample to obtain a ΔΔC T value. The gene expression level relative to the calibrator was expressed as 2 −ΔΔCT .
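The relative-quantification arithmetic described above (ΔC T , ΔΔC T and 2 −ΔΔCT ) can be sketched in a few lines; the C T values below and the standard-curve efficiency formula E = 10 −1/slope − 1 are illustrative only, not data from this study:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Relative gene expression by the 2^-ddCT method: the reference-gene
    CT (e.g. ACT1) is subtracted from the target-gene CT (e.g. CaMCA1)
    for both sample and calibrator, then the calibrator dCT is
    subtracted from the sample dCT."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

def pcr_efficiency(slope):
    """Amplification efficiency from a standard-curve slope
    (an ideal slope of about -3.32 gives ~100% efficiency)."""
    return 10.0 ** (-1.0 / slope) - 1.0

# Hypothetical CT values: the target amplifies two cycles earlier
# (relative to the reference gene) in the sample than in the calibrator.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # → 4.0
```

A fold change above 1 indicates up-regulation relative to the calibrator, as in the 4-fold CaMCA1 increase reported in the Results.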
Lack of Trehalose Accelerates H 2 O 2 -induced Apoptosis
In C. albicans , TPS1 encodes trehalose-6-phosphate (Tre6P) synthase, which is required for trehalose synthesis; a tps1Δ mutant is therefore deficient in trehalose accumulation. The impact of the TPS1 mutation on trehalose accumulation is shown in Fig. 1A . Trehalose accumulation increased in wild-type cells after 1 to 3 hours of exposure to 1 mM H 2 O 2 . This increase did not appear in the tps1Δ mutant.
Since it has been reported that H 2 O 2 can induce apoptosis in C. albicans and reactive oxygen species (ROS) are an indicator of apoptosis [9] , [22] , we examined ROS generation in the cells with the fluorescent dye DCFH-DA. An increase in intracellular ROS level was observed in both the tps1Δ mutant and wild-type cells upon H 2 O 2 treatment. However, this increase was even stronger in the tps1Δ mutant ( Fig. 1B ). Consistent with this, the tps1Δ mutant showed a higher percentage of cells demonstrating ROS accumulation than the wild-type cells ( Table 1 ).
To ascertain the role of trehalose in C. albicans apoptosis, we compared the apoptosis rate between the wild-type cells and the tps1Δ mutant when exposed to different concentrations of H 2 O 2 . As shown in Fig. 1C , upon H 2 O 2 treatment, the apoptosis rate of the tps1Δ mutant was higher than that of wild-type cells. After 3 hours of treatment with 2 mM H 2 O 2 , 78% of the tps1Δ mutant cells were apoptotic, while the apoptosis rate of the wild-type cells was 47%.
Lack of Trehalose Enhances Ca 2+ Elevation and Caspase Activity
In S. cerevisiae , elevation of intracellular Ca 2+ can lead to cell death [25] . We determined the intracellular Ca 2+ level upon H 2 O 2 treatment using the fluorescent calcium indicator Fluo-3/AM. In the absence of H 2 O 2 , the intracellular levels of Ca 2+ in both the tps1Δ mutant and wild-type cells were rather low and almost undetectable. After treatment with 1 mM H 2 O 2 for 3 hours, both strains showed an obvious elevation of intracellular Ca 2+ , with the tps1Δ mutant cells showing a higher level of Ca 2+ than the wild-type cells ( Fig. 2A, 2B ).
Since we previously found that caspase activity was increased in C. albicans apoptosis [11] , here we investigated caspase activity by staining the cells with D 2 R, a nonfluorescent substrate, which is cleaved to green fluorescent monosubstituted rhodamine 110 and free rhodamine [10] , [11] , [30] . As shown in Fig. 2C and 2D , after treatment with 1 mM H 2 O 2 for 3 hours, the percentage of cells stainable by D 2 R was 26% among the wild-type cells, while it was 51% in the tps1Δ mutant. Furthermore, the transcript levels of CaMCA1 , which is responsible for caspase activity in C. albicans , were investigated by real-time RT-PCR. As shown in Fig. 2E , in the absence of H 2 O 2 , there was no significant difference in the transcript level of CaMCA1 between the tps1Δ mutant and wild-type cells. However, a 4-fold increase in the CaMCA1 transcript level was recorded in the tps1Δ mutant compared to that in the wild-type cells when exposed to 1 mM H 2 O 2 for 3 hours.
Adding or Depleting Ca 2+ Affects Apoptosis and Caspase Activity
Since the intracellular Ca 2+ level could be increased by H 2 O 2 , especially in the tps1Δ mutant, we hypothesized that Ca 2+ signaling might regulate C. albicans apoptosis, and that the higher sensitivity of the tps1Δ mutant to H 2 O 2 might be due to its higher intracellular Ca 2+ level. As shown in Fig. 3A , when we stimulated the intracellular Ca 2+ level by adding CaCl 2 (0.5 mM), the apoptosis rate increased in both the tps1Δ mutant and wild-type cells. Similar effects were observed when A23187 (0.5 μM), a calcium ionophore, was added. At the concentrations tested, CaCl 2 and A23187 themselves had no effect on C. albicans growth. In addition, the presence of either CaCl 2 or A23187 resulted in increased caspase activity in both the tps1Δ mutant and wild-type cells ( Fig. 3C ).
Furthermore, we tested the effect of depleting Ca 2+ . As shown in Fig. 3B , the presence of EGTA (1 mM), an extracellular calcium chelator, attenuated the H 2 O 2 -induced apoptosis in both the tps1Δ mutant and wild-type cells, accompanied by a decrease in caspase activity ( Fig. 3D ). Similarly, when BAPTA (1 μM), an intracellular calcium chelator, was added, both the apoptosis rate and caspase activity in the two strains were decreased.
Deletion of Calcineurin or Crz1p Leads to a Decrease in Apoptosis and Caspase Activity
In C. albicans , calcineurin and Crz1p are two major proteins involved in Ca 2+ signaling and play important roles in antifungal tolerance, cell morphogenesis and virulence [20] , [21] , [26] . It is therefore possible that the effects of Ca 2+ on cell death are mediated by calcineurin and its downstream target Crz1p. To test this hypothesis, we examined the viability of calcineurin and Crz1p mutants [27] upon H 2 O 2 treatment. After 3 hours of treatment with 2 mM H 2 O 2 , 52% of wild-type cells were apoptotic, while the apoptosis rates of the cmp1Δ and crz1Δ mutants were 19% and 25%, respectively. In the cmp1Δ- CMP1 and crz1Δ- CRZ1 cells, which contain the reintroduced CMP1 and CRZ1 genes, the apoptosis rate was similar to that of the wild-type cells ( Fig. 4A ). As expected, the caspase activities in both the cmp1Δ and crz1Δ mutants were lower than that in wild-type cells ( Fig. 4B ). Consistent with this, the transcription levels of CaMCA1 in the cmp1Δ and crz1Δ mutants were much lower than that in the wild-type cells ( Fig. 4C ). The potential role of calcineurin in H 2 O 2 -induced apoptosis was further examined using the calcineurin inhibitor cyclosporin A. Upon H 2 O 2 treatment, the wild-type cells showed lower apoptosis rates and caspase activity in the presence of 0.08 μM cyclosporin A than in its absence ( Fig. 4A, 4B ).
Expression of CaMCA1 in Calcineurin-deleted and Crz1p-deleted Cells Restored the Sensitivities to H 2 O 2
Since caspase activity was decreased in the cmp1Δ and crz1Δ mutants upon H 2 O 2 exposure, we introduced CaMCA1 into the cmp1Δ and crz1Δ mutants and assessed the phenotype. Upon H 2 O 2 treatment, the apoptosis rates ( Fig. 4A ) and caspase activities ( Fig. 4B ) of the CaMCA1 -introduced cells were much higher than those of the cmp1Δ and crz1Δ mutants. Consistent with this, the transcription levels of CaMCA1 in the cmp1Δ and crz1Δ mutants were lower than that in the wild-type cells, while the transcription levels of CaMCA1 in the CaMCA1 -introduced cells were similar to that in the wild-type cells ( Fig. 4C ). In addition, the apoptosis rates and caspase activities of the camca1Δ mutant were lower than those of the wild-type cells. These data indicate that CaMCA1 can restore the decreased apoptosis and caspase activities of calcineurin-deleted and Crz1p-deleted cells.
In yeasts, trehalose acts both as a main reserve carbohydrate and as a cellular protector against a variety of nutritional and/or environmental stress challenges, increasing cell resistance to such injuries. Trehalose accumulation in C. albicans has been described as a defense mechanism against oxidative stress. A trehalose-deficient tps1Δ mutant is highly sensitive to H 2 O 2 and prone to undergo phagocytic digestion [31] . However, the mechanism by which trehalose protects C. albicans from injuries remains unclear. Since apoptosis is now considered one of the important modes of C. albicans cell death, we assessed the role of trehalose in H 2 O 2 -induced apoptosis using a tps1Δ mutant. Our results show that lack of trehalose accelerates H 2 O 2 -induced apoptosis, accompanied by an increase of ROS, an apoptosis indicator. This result reveals a mechanism for the protective role of trehalose in C. albicans . Similar results have been reported by other researchers: Liu et al. found that trehalose could inhibit the phagocytosis of refrigerated platelets in vitro by preventing apoptosis [32] , and trehalose has been found to protect against ocular surface disorders in experimental murine dry eye through suppression of apoptosis [33] .
Our detailed studies on the protective effect of trehalose revealed a role of Ca 2+ signaling in C. albicans apoptosis. We observed an increase in the intracellular Ca 2+ level in both the tps1Δ mutant and wild-type cells upon H 2 O 2 treatment. However, this increase was much stronger in the tps1Δ mutant, consistent with the higher apoptosis rate induced in this strain. When we stimulated the intracellular Ca 2+ level by adding CaCl 2 or A23187, the apoptosis rates in both the tps1Δ mutant and wild-type cells were increased. In contrast, when Ca 2+ was depleted by adding EGTA or BAPTA, the apoptosis rates in both strains were decreased. These results indicate that apoptosis can be induced in C. albicans by increasing the intracellular Ca 2+ level.
The role of Ca 2+ in C. albicans apoptosis was further examined in experiments with CMP1 and CRZ1 , two genes involved in Ca 2+ signaling. We found that cmp1Δ and crz1Δ mutants showed attenuated apoptosis upon H 2 O 2 treatment, similar to the effect of depleting Ca 2+ in wild-type cells. Consistent with this result, addition of cyclosporin A, a calcineurin inhibitor, also attenuated apoptosis. Taken together, these results indicate that Ca 2+ and its downstream calcineurin/Crz1p pathway are involved in H 2 O 2 -induced C. albicans apoptosis.
In mammals, apoptosis can be directed by the activation of caspases, which cleave specific substrates and trigger cell death. In the past few years, it has become evident that caspases might exist not only in multicellular but also in unicellular organisms, such as fungi. In S. cerevisiae , YCA1 encodes a single metacaspase, which has caspase activity. YCA1 is involved in the apoptosis of yeast cells exposed to different environmental stresses, such as H 2 O 2 , acetic acid, sodium chloride, heat shock, and hyperosmosis [34] – [36] . In plants, metacaspases have been associated with Norway spruce apoptosis during embryogenesis and tomato plant apoptosis induced by fungal infection [37] – [39] . Using yeast as a heterologous system for apoptosis evaluation, the metacaspases AtMCP1b and AtMCP2b from the plant Arabidopsis thaliana were also found to be involved in apoptosis induced by H 2 O 2 [40] . We recently found that H 2 O 2 -induced C. albicans apoptosis was accompanied by caspase activity, encoded by CaMCA1 [11] . In this study, we found that, upon H 2 O 2 treatment, the caspase activities in the tps1Δ mutant were much higher than those in wild-type cells, paralleling the intracellular Ca 2+ levels. The positive relationship between Ca 2+ level and caspase activity was confirmed by adding or depleting Ca 2+ . Moreover, both calcineurin-deleted and Crz1p-deleted cells showed lower caspase activity compared to the wild-type cells, indicating that CaMCA1 might be a downstream gene that is blocked in calcineurin-deleted or Crz1p-deleted cells ( Fig. 5 ). As expected, when extraneous CaMCA1 was introduced into these cells, the caspase activity and cell sensitivity to H 2 O 2 were restored. Previous studies showed that C. albicans CaMCA1 could be activated by Ca 2+ and regulated by calcineurin and Crz1p. Moreover, a CDRE (calcineurin-dependent responsive element) was found in the promoter of CaMCA1 [26] .
Based on these results, we conclude that CaMCA1 is likely one of the downstream genes influenced by Ca 2+ signaling and involved in the protective role of trehalose against H 2 O 2 -induced apoptosis.
Trehalose is a non-reducing disaccharide and can be accumulated in response to heat or oxidative stresses in Candida albicans . Here we showed that a C. albicans tps1Δ mutant, which is deficient in trehalose synthesis, exhibited increased apoptosis rate upon H 2 O 2 treatment together with an increase of intracellular Ca 2+ level and caspase activity. When the intracellular Ca 2+ level was stimulated by adding CaCl 2 or A23187, both the apoptosis rate and caspase activity were increased. In contrast, the presence of two calcium chelators, EGTA and BAPTA, could attenuate these effects. Moreover, we investigated the role of Ca 2+ pathway in C. albicans apoptosis and found that both calcineurin and the calcineurin-dependent transcription factor, Crz1p, mutants showed decreased apoptosis and caspase activity upon H 2 O 2 treatment compared to the wild-type cells. Expression of CaMCA1 , the only gene found encoding a C. albicans metacaspase, in calcineurin-deleted or Crz1p-deleted cells restored the cell sensitivity to H 2 O 2 . Our results suggest that Ca 2+ and its downstream calcineurin/Crz1p/ CaMCA1 pathway are involved in H 2 O 2 -induced C. albicans apoptosis. Inhibition of this pathway might be the mechanism for the protective role of trehalose in C. albicans . | We thank Professor William A. Fonzi for kindly providing the C. albicans strains CAI4. We thank Professor Dominique Sanglard for kindly providing the C. albicans strains DSY2091 and DSY2195. We thank Professor Carlos Gancedo for kindly providing the C. albicans tps1Δ strain. | CC BY | no | 2022-01-13 08:14:22 | PLoS One. 2011 Jan 5; 6(1):e15808 | oa_package/39/bd/PMC3016397.tar.gz |
PMC3016398 | 21246043 | Introduction
The importance of structure in understanding biomacromolecular function is well established. Applications of these structures span many disciplines, but a marquee use has been, and will likely continue to be, in the discovery of new therapeutic agents for treatment of human disease. Unfortunately, many biomacromolecules, including some of the most therapeutically relevant targets (e.g., membrane-bound proteins like G-protein coupled receptors, ion channels and efflux pumps), are not amenable to X-ray crystallography, primarily due to the difficulty of obtaining diffraction-quality crystals. NMR, the only other experimental technique that can yield near-atomic resolution models for biomacromolecules, has a different set of experimental limitations [1] , [2] that are particularly evident for single proteins with molecular masses greater than 25–30 kD. Some “diffraction-quality” crystals, especially for high molecular weight or multi-protein complexes, do not diffract to sufficient resolution to produce effective target models for rational drug discovery [3] . In fact, about 25% of the protein crystal structures deposited in the RCSB protein data bank (PDB) [4] , some of modest size, have resolutions of 2.5 Å or worse and the number of such structures has been increasing rapidly since 1993 [5] .
As crystallographic resolution decreases, the parameter-to-observable ratio increases, i.e., the atomic coordinates and other structural model parameters are being fit to fewer experimental data, which then decreases statistical confidence in the accuracy of the refined atomic protein model [6] . Protein structural models based on low-resolution electron density maps may thus lack accuracy, and their proximity to the ”true” protein structure present in the crystal is more uncertain. Ultimately, using atomic protein models refined from low-resolution X-ray data as starting points for further studies such as drug discovery and design may well prove to be problematical or even pointless.
Recently, we coined a term – isocrystallographic – to describe the ensemble of alternate protonation state models for a protein or protein-ligand complex that fits the experimental structural data [7] . This ensemble was independent of resolution unless the structure was collected at high enough resolution to confidently locate all protons – at which point there would only be one valid structure. Here, we propose to expand the definition of an isocrystallographic ensemble to include all structural models consistent with the experimental electron density envelope. This ensemble is resolution-dependent since a large set of structural models will likely be consistent with low-resolution electron density envelopes, compared to a much smaller set of models at higher resolution. All of these models will likely exhibit similar refinement metrics, and it could be exceedingly difficult to choose the most biologically relevant structural model from the isocrystallographic ensemble. The availability of methodologies that assist in this selection of relevant atomistic protein structural models from low-resolution X-ray data will lead to an enhanced understanding of biological structure and function.
Recently, Schröder, Levitt and Brunger reported that the quality of low-resolution structural models was improved by refining against a potential function that incorporated an energy term based on deformable elastic networks [8] . Using specific distance restraints from a reference structural model to supplement standard stereochemical information (bond lengths, angles and atomic van der Waals radii) resulted in refined structural models for low-resolution data that better fit experimental structure factor amplitudes as indicated by lower R free [9] values, and that also had more residues in favored regions of Ramachandran plots [10] . While the reference is ideally a high-resolution experimental structure model of a closely homologous protein, some success was also reported with modeled or predicted structures(8). This is potentially a very powerful tool for refining structural models against low-resolution X-ray data; however, its applicability may be somewhat limited since homologous reference structure models may not always be available and, with moderate-to-weak homology, selecting the most appropriate homolog and constructing optimal alignments are both formidable problems. These caveats suggest that the Schröder, Levitt and Brunger approach provides a valuable tool for a subset of proteins, but may not provide a universal solution for improving the quality of low-resolution structural models.
Here, we present a very different approach that also achieves the goal of improving structure quality; in this case by incorporating terms responsive to hydropathic interactions into the X-ray refinement target function using the empirical HINT (Hydropathic INTeractions) forcefield [11] . Our protocol does not require the existence of a previously determined high-resolution structure of a homolog, and thus is applicable to any structure. It is important to note here the differences between the HINT forcefield and conventional Newtonian molecular mechanics forcefields as used in structure optimization and dynamics annealing. While all non-covalent interactions are represented by either the Coulombic or van der Waals terms in conventional forcefields, HINT summarizes hydrophobic and polar non-covalent interactions in terms of atom-based thermodynamic parameters derived from experimental LogP o/w (1-octanol/water partition coefficient) data from small molecules [11] . Partition coefficients are free energies [12] and thus HINT inherently and implicitly encodes both enthalpy and entropy in its scoring scheme. These atomistic parameters are correlated so as to calculate interaction scores that have been shown to track with free energies of association for numerous biomacromolecular systems [13] , [14] , [15] , [16] .
In the HINT model each atom-atom interaction is categorized as one of the following: (Lewis) acid-base (or the special case of hydrogen bonding) – scored favorably; acid-acid or base-base – both scored unfavorably; hydrophobic-hydrophobic – scored favorably; or hydrophobic-polar – scored unfavorably. The latter represents desolvation energy. There are interesting differences between this paradigm and Coulombic-like terms used in molecular mechanics forcefields. Hydrophobic atoms (or united atoms like –CH 3 ) usually have positive, albeit small-valued, charges. This would suggest that, absent van der Waals, hydrophobic-hydrophobic interactions are unfavorable according to molecular mechanics. Similarly, since some polar atoms (generally those that are Lewis bases) have partial negative charges, their interactions with hydrophobic groups are regarded by molecular mechanics as favorable, while others (involving Lewis acids) are unfavorable. We will show here that high-resolution X-ray crystallographic structures generally support the HINT view of these interactions.
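As a minimal sketch, the atom-atom taxonomy above can be expressed as a small classifier. The 'acid'/'base'/'hydrophobic' labels are a simplification for illustration only, not HINT's actual atom typing:

```python
def classify_interaction(a, b):
    """Classify an atom-atom contact per the HINT paradigm described
    above. Each atom is labelled 'acid' (Lewis acid), 'base' (Lewis
    base) or 'hydrophobic'. Returns (category, favorable?)."""
    polar = {'acid', 'base'}
    if a in polar and b in polar:
        if a != b:
            return ('acid-base', True)       # incl. hydrogen bonds
        return (f'{a}-{b}', False)           # acid-acid or base-base
    if a == b == 'hydrophobic':
        return ('hydrophobic-hydrophobic', True)
    return ('hydrophobic-polar', False)      # desolvation penalty

print(classify_interaction('acid', 'base'))         # → ('acid-base', True)
print(classify_interaction('hydrophobic', 'base'))  # → ('hydrophobic-polar', False)
```

Note how this differs from a Coulombic view, where a hydrophobic-hydrophobic contact between two small positive partial charges would be scored unfavorably.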
Among the various X-ray structure refinement tools available to crystallographers, CNS (Crystallography and NMR System) [17] is among the most widely used. About 30% of the X-ray structures deposited in the PDB were refined using CNS, with the large majority reporting the use of versions 1.0 or 1.1. This number is significant, as roughly an equal number of PDB entries do not report the software used for refinement and CNS has only been available since 1998. The popularity of CNS 1.1 combined with its open architecture prompted us to augment it with the HINT forcefield in order to develop a modified CNS that incorporates hydropathy in refinement. Although CNS supports the optional modeling of electrostatic interactions, the core Engh and Huber forcefield [18] does not explicitly include hydrogen bonding or electrostatic interactions in crystallographic refinement. Typically, all nonbonded interactions are modeled with a simple quadratic repulsive term in CNS, which does not compromise structural models refined against high-resolution X-ray data where atomic positions are well defined solely by experimental data. However, it likely does compromise structural models refined against low-resolution structural data where atomic positions are less well defined.
In this contribution, we show that these shortcomings can be ameliorated by including the HINT forcefield energy term in the CNS target function. To test this refinement protocol we designed a novel, rigorous test regimen to validate our approach, and, in fact, the details of our regimen are themselves a significant contribution. We demonstrate the quality of our refined structural models by validation with several commonly-used structural analysis tools.
The analysis and refinement was performed on a dataset of 25 high-resolution X-ray crystallographic structures of proteins in the PDB (see Table S1 ). All 25 protein data sets satisfy the following constraints: a) X-ray resolution 1.5 Å or better; b) structure factor data are available in the PDB; c) less than 30% sequence homology with the other proteins; and d) deposited structure has no missing atoms. All water, ion and cofactor atoms were removed from the structural models for this work. CNS version 1.1 [17] was used for refinement. Its energy function was modified as described above with the HINT energy term, whose weight, w HINT , was manually optimized (to minimum R free ) by performing refinement with incremental values between 10 and 100 (the trivial w HINT = 0 case is the native CNS protocol). Because HINT uses a 6–12 Lennard-Jones potential in its energy function [11] , the 6–12 CNS Lennard-Jones potential was used instead of the normal quadratic potential. The HINT parameters were calculated for protein atoms using the HINT (version 3.12) dictionary method [11] ; E HINT is an intramolecular energy that excludes 1–2, 1–3 and 1–4 interactions. Synthetic low-resolution datasets, ranging from ∼1.5 Å to ∼5.0 Å, were generated from high-resolution structure factor data by applying B-factor smoothing, as suggested by Schröder et al . [8] . Truncation was performed for each resolution at the ratio of mean intensity to the mean of its standard deviation reported at d min in the deposited structure. CCP4 [36] tools were used for file conversions and to apply the B-factor smoothing. Initial coordinates for re-refinement were generated by randomly corrupting the heavy atom positions in the deposited structures by a maximum of ±0.5 Å in each of the x, y and z-directions, before adding hydrogen atoms. Atomic scattering factors for hydrogen atoms were modified in order to eliminate any contribution to E X-ray . 
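The random corruption of starting coordinates can be sketched as follows. This is a minimal illustration assuming coordinates are held as a list of [x, y, z] triples; reading and writing the actual PDB models is omitted:

```python
import random

def perturb_coordinates(coords, max_shift=0.5, seed=None):
    """Randomly displace each heavy-atom position by up to +/- max_shift
    Angstroms independently in x, y and z, as used here to generate
    unbiased starting models for re-refinement."""
    rng = random.Random(seed)
    return [[x + rng.uniform(-max_shift, max_shift) for x in atom]
            for atom in coords]

atoms = [[12.0, 4.5, -3.2], [11.1, 5.0, -2.8]]
shifted = perturb_coordinates(atoms, seed=1)
# Every displacement stays within +/- 0.5 A of the original position.
assert all(abs(s - x) <= 0.5 for a, b in zip(atoms, shifted)
           for x, s in zip(a, b))
```

Seeding the generator makes the corruption reproducible across validation runs.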
Refinement consisted of two cycles of torsion angle annealing followed by B-factor refinements. Grouped isotropic B-factor refinement was performed for resolutions worse than 2.65 Å and individual B factor refinement at higher resolutions. Further details, including refinement statistics, are given in Supporting Information ( Tables S1 and S2 , Figure S3 ). | Results and Discussion
It is obvious and unassailable that current protocols for model-building and refinement based on low-resolution X-ray reflection data produce structural models of poorer quality than those based on high-resolution data. We are testing the hypothesis that these deficiencies can, at least in part, be related to the lack of well-developed hydropathic interaction networks in these models. We have sought to illustrate this point with available crystallographic data, but there is a paucity of directly comparable and unbiased structural data for proteins solved at varying resolutions. Another approach, used in this work, is to synthesize low-resolution data by truncating high-resolution data ( vide infra ) and evaluate structures refined against these data [8] . In Figure 1A (red circles) we present normalized (relative to the crystallographic structure model) intramolecular HINT scores, calculated for 309 structural models for 25 proteins refined against data truncated at resolutions between 1.48 and 4.88 Å. This score is calculated as the sum of all non-covalent intramolecular atom-atom interactions using the paradigm described above, i.e., higher scores represent in toto more favorable high-quality interactions within the structure. Clearly, there is a trend of an accelerating decrease in HINT score, especially for resolutions worse than 3.0 – 3.5 Å, indicating that, just as we hypothesized, these models indeed have poorer quality hydropathic interaction networks. Another evaluation of structure as a function of resolution can be obtained by calculating non-covalent energies of structure models with a molecular mechanics forcefield. The CHARMM [19] electrostatic term ( Figure 1B , red circles) shows a similar trend: between 3.0 and 4.8 Å there is a more than 30% decrease in favorable electrostatic energies, relative to those in the crystallographic models, again in accord with our hypothesis. 
This theme is repeated with other knowledge-based structural metrics including Ramachandran scores (percentage of residues in the favored regions), as illustrated (red circles) in Figure 1C . All of these data confirm that there is a clear tendency towards decreasing structural quality as the experimental resolution of the data is decreased.
One approach to probe, and perhaps ameliorate, the disparity between structural models refined with high- and low-resolution data, is to include electrostatic interactions in X-ray refinement protocols. If electrostatics substantively improves structural quality, we can assert that compromises to polar interaction networks, e.g., hydrogen bonds or weaker, longer-range acid-base interactions, are the dominant source of structural errors in low-resolution structural models. On the other hand, partial or negligible changes in structure quality would strongly suggest that other factors are at play. In Figures 1A, 1B and 1C (green circles), we present normalized intramolecular HINT scores, normalized electrostatic energies from CHARMM and Ramachandran scores, respectively, for structures refined with the optional electrostatics protocol in CNS, which we are terming “CNS+electrostatics”. While the HINT scores ( Figure 1A ) are higher overall by about 25% after refinement with this protocol, the trend of decreasing HINT score with resolution is essentially unchanged. Electrostatic energy ( Figure 1B ) is likewise stabilized by about 15%, but even this, which essentially reports the same property used in its optimization, trends to lower values (higher energies) with lower resolution. Finally, Ramachandran scores ( Figure 1C ) suggest that refinement with electrostatics only modestly improves structural quality (4% improvement at 3.5–4.0 Å and 2% at ≥4.0 Å) for models from low-resolution data. The lack of significant improvement of the latter is especially notable as it is an independent and universally accepted structural metric. Furthermore, the higher overall HINT scores and lower electrostatic energies, which were both referenced to their deposited high-resolution structural models, suggests that the inclusion of electrostatics in refinement may result in models with non-native (and potentially overweighted) polar interaction networks.
In the remainder of this paper we describe the implementation and testing of a structure refinement protocol enhanced with the HINT hydropathic forcefield. It is our view that, because Coulombic electrostatic terms focus exclusively on polar components of interaction networks, refinement with electrostatics is, at best, inadequate for improving the quality of low-resolution structure models. It is important to also include terms that improve the independent and complementary hydrophobic component of the networks.
Implementing the HINT forcefield in CNS
A modified CNS energy function was implemented: E total = E geom + w a E X-ray + w HINT E HINT , where E geom accounts for the covalent, dispersion, and electrostatic energies (when activated, as in the “CNS+electrostatics” protocol) of the biomolecule, i.e., from an adaptation of the Engh and Huber forcefield [18] , E X-ray represents the energy contributions from the experimental X-ray amplitudes (with relative weight w a ) and E HINT is the HINT energy term (with relative weight w HINT ). E HINT is calculated by applying a standard constant (1 kcal mol −1 = 515 score units [14] , [20] ) to the HINT score, which is the double sum over all atom pairs for two terms, ∑∑ [a i S i a j S j exp(-r ij ) T ij +50 F(r ij )] , where a is the hydropathic atom constant and S is the solvent accessible surface area of atoms ( i and j ), r ij is the distance between these atoms, T ij is a discriminant function for polar-polar interactions, and F is the Levitt [21] implementation of the Lennard-Jones potential. While w a is optimized internally by CNS, the w HINT term was optimized ( vide infra ) for each refinement by identifying the value producing the lowest R free . This protocol will be referred to as “CNS+HINT” throughout this report.
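The scoring double sum can be transcribed schematically as below. The discriminant function T and the Lennard-Jones term F are passed in as placeholders, since their actual forms are part of the HINT parameterization and are not reproduced here:

```python
import math

def hint_score(atoms, T, F):
    """Schematic HINT score: sum over unique atom pairs of
    a_i*S_i*a_j*S_j*exp(-r_ij)*T_ij + 50*F(r_ij).
    Each atom is a tuple (a, S, xyz) of hydropathic atom constant,
    solvent-accessible surface area and position."""
    total = 0.0
    for i, (ai, Si, pi) in enumerate(atoms):
        for j in range(i + 1, len(atoms)):
            aj, Sj, pj = atoms[j]
            r = math.dist(pi, pj)
            total += ai * Si * aj * Sj * math.exp(-r) * T(i, j) + 50.0 * F(r)
    return total

def e_hint(score, units_per_kcal=515.0):
    """Convert a HINT score to an energy in kcal/mol
    (1 kcal/mol = 515 score units)."""
    return score / units_per_kcal
```

In the full target function this E HINT term is then added to E geom and w a E X-ray with its own weight w HINT .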
Simulation of low-resolution data sets
Assessing a protocol that purports to improve structural quality for low-resolution models requires known experimental structures of accepted high quality as references. Because there are few, if any, authentically low-resolution data sets for which high-resolution structural data also exists in the same crystal form, our approach was validated within the same data set. We chose 25 high-resolution (≤1.5 Å) and diverse (≤30% homology between any pair) structures from the PDB for which structure factor data was available as our reference sets. Artificial but realistic low-resolution data were synthesized using a protocol adapted from Schröder, Levitt and Brunger [8] . In addition, the validation must simulate the process of refining the structural models at each resolution without introducing bias. We deemed it unacceptable to refine the reference atomic coordinates against the simulated low-resolution structure factor data as that would almost certainly bias the resulting refined structure towards the reference structure. Thus, we generated starting models for each structure by randomly perturbing the coordinates for each atom of the deposited PDB structural model.
Assessing the fit of refined structures to experimental reflection data
An independent metric for assessing structural quality is provided by the fit of the calculated model structure factor amplitudes to experimental structure factor amplitudes. A protocol that aims to improve structural quality should improve the fit to experimental data, or at the very least not degrade it. In Figure 2A , we present a histogram of R free values for structures refined using native CNS, CNS+electrostatics and CNS+HINT. Clearly, for the lowest resolutions, R free values for structures refined using the HINT representation of non-covalent interactions are significantly lower than for structures refined with the other protocols. For higher resolution structures, the inclusion of the HINT term does not increase R free . Together, these results clearly indicate that refinement with CNS+HINT does not overfit the experimental data, and for the lowest resolution structures, improves the fit to experimental data.
Another test of the effect of the HINT term on structure is to monitor the weight assigned to the HINT term, chosen by minimizing R free , yielding the optimal structure. In Figure 2B , we present a histogram of the w HINT values for optimal structures as a function of resolution. There is a clear trend towards increasing W HINT values as resolution decreases, which strongly suggests a more dominant role for the hydropathic energy term in defining atomic positions with decreasing resolution. In other words, the hydropathic term serves to restrain atomic positions in cases where atomic positions are poorly defined on the basis of experimental data alone. It should be noted that the electrostatics term in CNS+electrostatics is incorporated within E geom and its relative weight with respect to covalent terms is fixed. The contribution of electrostatics is only varied as the X-ray weighting, w a , is optimized. However, Figure 1 suggests that increasing the role of electrostatics by adaptations to the CNS energy function may be be counterproductive, while decreasing this weighting would only reduce its already minor effect.
Assessing the model structures by superposition on the high resolution targets
The “gold standard” of structural quality is probably the fit of low-resolution structural models to the experimental high-resolution reference structure. Ideally, low-resolution models should superimpose perfectly on the high-resolution target, at least for residues with well-defined electron density (i.e., those buried or involved in lattice interactions). In Figure 3 , we present a histogram of heavy atom root mean squared deviations (RMSDs) for structures refined using the three protocols (inset, C α RMSDs). It is clear that none of the protocols yields refined structures that superpose on the high-resolution target and that structural deviations increase as resolution decreases. However, for the lowest resolution structures, structures refined with CNS+HINT are, at least moderately (0.18 Å at ≥4.0 Å), closer to the high-resolution target than structures refined with the other two protocols.
Although this 15% improvement in RMSDs between structures refined with CNS+HINT and native CNS at the lowest simulated resolutions is smaller than we might have hoped, it is nonetheless significant, and there are a number of factors that may inflate the observed deviations from the high-resolution target. First, our reevaluations of refinement were automated to be performed identically. Second, surface residues (not involved in lattice contacts) have poorly resolved electron densities and are not refined well, but their atoms are included in RMSD. Third, as the assignment of waters to density can be somewhat arbitrary, we have not considered any crystallographic waters. As resolution decreases the number of waters observed decreases quite dramatically [22] . Compared to the large number (average: 235) of waters observed in the 25 reference targets, few (if any) waters would have been observed in the lower of the resolution ranges we explored. This compromises the quality of polar interaction networks for all models (regardless of protocol) since buried waters usually participate in direct or bridging hydrogen-bonding interactions and can thus affect the atomic positions of their partner atoms. Fourth, we have included explicit protons, which are required by the HINT scoring function, that were not present in any of the deposited structures. Finally, instead of the simple quadratic nonbonded term typically used in CNS crystallographic refinements, we have used the Lennard-Jones 6–12 term. Refining the deposited coordinates in the absence of hydropathic or electrostatic terms, with explicit protons and without waters, results in structures with all heavy atom RMSDs of 0.25 Å from the deposited structures, which effectively sets a floor value for RMSD comparisons. 
It is instructive, however, to put these RMSDs in perspective by comparison to atomic positional uncertainties, e.g., Cruickshank's Diffraction-component Precision Index (DPI) [23] values, that are also depicted in Figure 3 . Clearly, RMSDs for all low-resolution cases are well within the uncertainties suggested by the DPI.
Assessing the quality of refined structures using knowledge-based metrics
Structural quality can also be assessed by knowledge-based metrics that “rank” a structure with respect to others. Model quality, as reported by indices like the Ramachandran score or MolProbity [24] clashscore, has been shown to worsen with decreasing resolution. Histograms for Ramachandran scores ( Figure 4A ) and clashscores (a measure of the number of unusually short interatomic distances in a structure, Figure 4B ) report the same trend: while inclusion of electrostatics alone has only a modest impact, the inclusion of the HINT representation of non-covalent interactions results in much more significant improvements in structure quality. The HINT potential, which is based on pairwise non-covalent interactions, has no “intrinsic knowledge” of preferred peptide backbone angles, yet the CNS+HINT models have its inclusion a significantly higher fraction (13% larger for resolution ≥4.0 Å) of residues in favored regions of the Ramachandran plot. Clashscores ( Figure 4B ) show an even more dramatic (51% at ≥4.0 Å) improvement for the CNS+HINT structures. In addition, since the clashscores for the native and CNS+electrostatics refined structures are virtually identical, the anomalously low electrostatic energies and increased HINT scores (relative to reference) for CNS+electrostatics models ( Figures 1A and 1B ) are, in part, an artifact of abnormally short interatomic distances between polar atoms. In contrast, the better clashscores from CNS+HINT refinement strongly suggests that this protocol results in better-defined interaction networks.
Deconstructing the effect of the HINT term
Neither the native CNS nor the CNS+electrostatics protocols were able to maintain normalized HINT score or electrostatic energy as model resolutions decreased. Figures 5A and 5B reprise these graphs for models refined with the CNS+HINT protocol. In both cases, this protocol produces more native-like behavior (normalized value close to 1) throughout the range of resolutions. It is interesting that the relatively crude HINT “electrostatics” [11] , largely based on experimental solvent partitioning of small organic molecules, perform measurably better than the CNS partial charge-based Coulombic electrostatics. This is probably because the HINT atomistic parameters are not solely electrostatic, but are scalar quantities that in principle encode all physiochemical interactions in biological media. It is revealing to deconstruct the HINT score into two components (see Figure 5C ): polar, where hydrogen bonds and acid-base interactions have positive scores, while acid-acid and base-base have negative scores; and hydrophobic, where hydrophobic-hydrophobic is positive and hydrophobic-polar is negative. The HINT polar component score is similar for both the CNS+electrostatic and CNS+HINT structures, which suggests that either protocol adequately models these networks. The hydrophobic component shows remarkably consistent values with minimal scatter, but is overall, seemingly small, only 5% on average of the total score. However, this is actually the balanced sum of favorable and unfavorable terms, whose values are much larger – about +60% and −55%, respectively, of the total score (see Figure 6B ). (The corresponding plot for the HINT polar term is shown in Figure 6A .) The structural integrity of these models, as evidenced above, highlights the importance of hydrophobic networks and that the HINT term effectively describes these networks.
Understanding degradation of low-resolution structural models
Although the largest deviations between low(er) resolution models and the target are generally, as expected, in solvent-exposed regions, some significant structural differences, particularly for sidechain orientations, can be found elsewhere. To further explore these differences and to better understand how interaction networks are compromised in low-resolution models, we are focusing here on three structures: 1WPA [25] , 1OI7 [26] and 1RL0 [27] , which are the most polar, most hydrophobic and intermediate hydrophobicity/polarity, respectively, of the 25 structures in this study. Table 1 summarizes structural and quality metrics for these structures as refined at their highest and lowest simulated resolutions. Additional data for these and the other 22 structures are available in Supporting Information ( Tables S1 and S2 ). For all three of these structures, Cα RMSDs between structures refined with CNS+HINT and the deposited structure (see Figure 7 ) were between 0.5 and 0.6 Å, i.e., about 0.1 Å lower than the RMSDs for those structures refined with native CNS. For the most hydrophobic protein, 1OI7, this RMSD dropped with CNS+HINT from 0.62 to 0.50 Å.
While the backbone structures are very similar, even at low-resolution, sidechain orientations are not nearly as well-conserved. Many of the largest deviations are seen for flexible residues that are exposed to bulk solvent. However, sidechain orientations for buried hydrophobic residues in low-resolution models can also differ significantly from those in the target. Two examples are shown in Figure 8A and 8B , which are superpositions centered on residues Phe187 from 1OI7 (4.07 Å) and Leu67 from 1RL0 (4.31 Å), respectively. Inclusion of the HINT term, which explicitly encodes hydrophobic interactions, produces a Phe187 sidechain orientation that is much more similar to that observed in the deposited structure. These structural differences can be traced to differences in the underlying hydrophobic networks. This approach, however, is not a pancea that guarantees preserving the orientation of hydrophobic sidechains in low-resolution models: the orientation of the Leu67 sidechain in models refined with both native CNS and CNS+HINT differ significantly from the deposited structure. Generally, and regardless of refinement protocol, sidechain orientations are largely preserved in models at resolutions 3.0 Å and better. As resolution degrades, there is less conservation; although some, particularly non-polar, sidechain conformations are retained to lower resolutions with CNS+HINT (see Figures S1 and S2 ).
Refinement of “authentic” low-resolution datasets
As a final test, we have re-refined three datasets with resolutions between 3.5 and 4.0 Å: 3GEC [28] (4.00 Å), 1ISR [29] (4.00 Å) and 1SA0 [30] (3.58 Å). The results are summarized in Table 2 where the deposited, native CNS and CNS+HINT models are compared. αβ-tubulin (1SA0), in particular, is a high-profile drug target in which we [16] and others [31] , [32] have an interest, but have been held back by the rather featureless colchicine binding site ascerbated by its relatively poor crystallographic resolution. Re-refinement of the deposited tubulin-colchicine structure resulted in a model with a Ramachandran score about 5% better, and a Clashscore 17% better, than the deposited structure. The R and R free values are higher for our model than for that deposited, but one likely cause is that REFMAC [33] , with per-domain TLS (Translation Libration and Screw) refinement [34] , was used in the original refinement of this particular structure. It has been noted previously that reproducing reported R values for low-resolution structures can be problematical [8] .
Figure 9A shows a superposition of the Cα backbones for refined models of an αβ-tubulin heterodimer, while Figure 9B focuses on the region of the bound colchicine. Cα RMSDs for both the native CNS and CNS+HINT re-refined structures are ∼0.7 Å with respect to the deposited structure – similar to RMSD values ( Table S1 ) between CNS+HINT refined low-resolution models and their high-resolution references. However, re-refinement of the tubulin structure produced some quite significant (∼2.8 Å) local deviations in Cα positions (with concomitant differences in sidechain positions) compared to the deposited structure, of which, intriguingly, the largest are localized near the colchicine binding site. We are currently exploring these new tubulin models as docking targets.
Conclusions
We have implemented a new X-ray data refinement protocol based on CNS that relies on HINT, an empirical hydropathic forcefield, to enforce both polar and hydrophobic interaction networks for low-resolution data. Models obtained with this approach appear to have more native-like interaction networks at resolutions approaching 5 Å, as analyzed with various quality metrics, than conventionally-refined models. As currently implemented, our protocol is only applicable for protein or polynucleotide atoms in a dictionary; thus, all other atoms from ligands, water or other heterogens are refined with the default CNS protocol. Extensions to address these issues, which will likely yield even higher quality models, are currently under development. It has very recently been reported [35] that the parameterization of CNS version 1.3 yields improved low-resolution structures; we are exploring integration of our protocol to this new program. | Results and Discussion
It is obvious and unassailable that current protocols for model-building and refinement based on low-resolution X-ray reflection data produce structural models of poorer quality than those based on high-resolution data. We are testing the hypothesis that these deficiencies can, at least in part, be related to the lack of well-developed hydropathic interaction networks in these models. We have sought to illustrate this point with available crystallographic data, but there is a paucity of directly comparable and unbiased structural data for proteins solved at varying resolutions. Another approach, used in this work, is to synthesize low-resolution data by truncating high-resolution data ( vide infra ) and evaluate structures refined against these data [8] . In Figure 1A (red circles) we present normalized (relative to the crystallographic structure model) intramolecular HINT scores, calculated for 309 structural models for 25 proteins refined against data truncated at resolutions between 1.48 and 4.88 Å. This score is calculated as the sum of all non-covalent intramolecular atom-atom interactions using the paradigm described above, i.e., higher scores represent in toto more favorable high-quality interactions within the structure. Clearly, there is a trend of an accelerating decrease in HINT score, especially for resolutions worse than 3.0 – 3.5 Å, indicating that, just as we hypothesized, these models indeed have poorer quality hydropathic interaction networks. Another evaluation of structure as a function of resolution can be obtained by calculating non-covalent energies of structure models with a molecular mechanics forcefield. The CHARMM [19] electrostatic term ( Figure 1B , red circles) shows a similar trend: between 3.0 and 4.8 Å there is a more than 30% decrease in favorable electrostatic energies, relative to those in the crystallographic models, again in accord with our hypothesis. 
This theme is repeated with other knowledge-based structural metrics including Ramachandran scores (percentage of residues in the favored regions), as illustrated (red circles) in Figure 1C . All of these data confirm that there is a clear tendency towards decreasing structural quality as the experimental resolution of the data is decreased.
One approach to probe, and perhaps ameliorate, the disparity between structural models refined with high- and low-resolution data is to include electrostatic interactions in X-ray refinement protocols. If electrostatics substantively improves structural quality, we can assert that compromises to polar interaction networks, e.g., hydrogen bonds or weaker, longer-range acid-base interactions, are the dominant source of structural errors in low-resolution structural models. On the other hand, partial or negligible changes in structure quality would strongly suggest that other factors are at play. In Figures 1A, 1B and 1C (green circles), we present normalized intramolecular HINT scores, normalized electrostatic energies from CHARMM and Ramachandran scores, respectively, for structures refined with the optional electrostatics protocol in CNS, which we are terming “CNS+electrostatics”. While the HINT scores ( Figure 1A ) are higher overall by about 25% after refinement with this protocol, the trend of decreasing HINT score with resolution is essentially unchanged. Electrostatic energy ( Figure 1B ) is likewise stabilized by about 15%, but even this, which essentially reports the same property used in its optimization, trends to lower values (higher energies) with lower resolution. Finally, Ramachandran scores ( Figure 1C ) suggest that refinement with electrostatics only modestly improves structural quality (4% improvement at 3.5–4.0 Å and 2% at ≥4.0 Å) for models from low-resolution data. The lack of significant improvement of the latter is especially notable as it is an independent and universally accepted structural metric. Furthermore, the higher overall HINT scores and lower electrostatic energies, which were both referenced to their deposited high-resolution structural models, suggest that the inclusion of electrostatics in refinement may result in models with non-native (and potentially overweighted) polar interaction networks.
In the remainder of this paper we describe the implementation and testing of a structure refinement protocol enhanced with the HINT hydropathic forcefield. It is our view that, because Coulombic electrostatic terms focus exclusively on polar components of interaction networks, refinement with electrostatics is, at best, inadequate for improving the quality of low-resolution structure models. It is important to also include terms that improve the independent and complementary hydrophobic component of the networks.
Implementing the HINT forcefield in CNS
A modified CNS energy function was implemented: E total = E geom + w a E X-ray + w HINT E HINT , where E geom accounts for the covalent, dispersion, and electrostatic energies (when activated, as in the “CNS+electrostatics” protocol) of the biomolecule, i.e., from an adaptation of the Engh and Huber forcefield [18] , E X-ray represents the energy contributions from the experimental X-ray amplitudes (with relative weight w a ) and E HINT is the HINT energy term (with relative weight w HINT ). E HINT is calculated by applying a standard constant (1 kcal mol −1 = 515 score units [14] , [20] ) to the HINT score, which is the double sum over all atom pairs for two terms, ∑∑ [a i S i a j S j exp(-r ij ) T ij +50 F(r ij )] , where a is the hydropathic atom constant and S is the solvent accessible surface area of atoms ( i and j ), r ij is the distance between these atoms, T ij is a discriminant function for polar-polar interactions, and F is the Levitt [21] implementation of the Lennard-Jones potential. While w a is optimized internally by CNS, the w HINT term was optimized ( vide infra ) for each refinement by identifying the value producing the lowest R free . This protocol will be referred to as “CNS+HINT” throughout this report.
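As a sketch of how the E HINT term enters the minimization, the double sum above can be evaluated directly over atom pairs. Note that the Lennard-Jones stand-in for Levitt's F(r ij ) and all numerical parameter values below are illustrative assumptions, not the production HINT parameterization:

```python
import numpy as np

def hint_score(a, S, xyz, T, sigma=2.0, eps=0.1):
    """Sketch of the HINT double sum over atom pairs:
    sum_i sum_j>i [ a_i S_i a_j S_j exp(-r_ij) T_ij + 50 F(r_ij) ].
    a: hydropathic atom constants; S: solvent-accessible surface areas;
    xyz: (N, 3) coordinates in Angstroms; T: (N, N) discriminant function
    (+/-1 for polar-polar pairs, +1 otherwise).  F here is a generic
    Lennard-Jones 6-12 term standing in for Levitt's implementation
    (an assumption, as are sigma and eps)."""
    n = len(a)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(xyz[i] - xyz[j])
            hydropathic = a[i] * S[i] * a[j] * S[j] * np.exp(-r) * T[i, j]
            sr6 = (sigma / r) ** 6
            lj = 4.0 * eps * (sr6 * sr6 - sr6)  # placeholder for Levitt's F(r)
            total += hydropathic + 50.0 * lj
    return total

def hint_energy_kcal(score):
    """Convert a HINT score to an energy using the standard constant
    (1 kcal/mol = 515 score units); a favorable (positive) score maps
    to a negative, stabilizing energy term."""
    return -score / 515.0
```

In the refinement itself this energy is simply added to the total, scaled by w HINT, alongside the geometric and X-ray terms.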
Simulation of low-resolution data sets
Assessing a protocol that purports to improve structural quality for low-resolution models requires known experimental structures of accepted high quality as references. Because there are few, if any, authentically low-resolution data sets for which high-resolution structural data also exists in the same crystal form, our approach was validated within the same data set. We chose 25 high-resolution (≤1.5 Å) and diverse (≤30% homology between any pair) structures from the PDB for which structure factor data was available as our reference sets. Artificial but realistic low-resolution data were synthesized using a protocol adapted from Schröder, Levitt and Brunger [8] . In addition, the validation must simulate the process of refining the structural models at each resolution without introducing bias. We deemed it unacceptable to refine the reference atomic coordinates against the simulated low-resolution structure factor data as that would almost certainly bias the resulting refined structure towards the reference structure. Thus, we generated starting models for each structure by randomly perturbing the coordinates for each atom of the deposited PDB structural model.
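The starting-model generation step can be sketched as a random perturbation of the deposited coordinates. The Gaussian displacement model and the target magnitude below are our assumptions, since the text specifies only that every atom was randomly perturbed:

```python
import numpy as np

def perturb_coordinates(xyz, rms_target=0.3, seed=0):
    """Generate an unbiased starting model by randomly displacing every
    atom of the deposited structure.  xyz is an (N, 3) array; rms_target
    (in Angstroms) is illustrative -- the magnitude actually used in the
    study is not restated here.  Isotropic Gaussian displacements are
    scaled so the expected all-atom RMSD equals rms_target."""
    rng = np.random.default_rng(seed)
    disp = rng.normal(scale=rms_target / np.sqrt(3.0), size=xyz.shape)
    return xyz + disp
```

Refining such a perturbed model against the truncated structure factor data then mimics a genuine low-resolution refinement without copying the reference coordinates into the starting point.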
Assessing the fit of refined structures to experimental reflection data
An independent metric for assessing structural quality is provided by the fit of the calculated model structure factor amplitudes to experimental structure factor amplitudes. A protocol that aims to improve structural quality should improve the fit to experimental data, or at the very least not degrade it. In Figure 2A , we present a histogram of R free values for structures refined using native CNS, CNS+electrostatics and CNS+HINT. Clearly, for the lowest resolutions, R free values for structures refined using the HINT representation of non-covalent interactions are significantly lower than for structures refined with the other protocols. For higher resolution structures, the inclusion of the HINT term does not increase R free . Together, these results indicate that refinement with CNS+HINT does not overfit the experimental data and, for the lowest resolution structures, improves the fit to experimental data.
Another test of the effect of the HINT term on structure is to monitor the weight assigned to it, chosen for each refinement as the value minimizing R free and thus yielding the optimal structure. In Figure 2B , we present a histogram of the w HINT values for optimal structures as a function of resolution. There is a clear trend towards increasing w HINT values as resolution decreases, which strongly suggests a more dominant role for the hydropathic energy term in defining atomic positions with decreasing resolution. In other words, the hydropathic term serves to restrain atomic positions in cases where they are poorly defined on the basis of experimental data alone. It should be noted that the electrostatics term in CNS+electrostatics is incorporated within E geom and its relative weight with respect to covalent terms is fixed. The contribution of electrostatics is only varied as the X-ray weighting, w a , is optimized. However, Figure 1 suggests that increasing the role of electrostatics by adaptations to the CNS energy function may be counterproductive, while decreasing this weighting would only reduce its already minor effect.
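The weight selection described above amounts to a one-dimensional search: repeat the refinement over a set of candidate w HINT values and keep the model with the lowest R free. A minimal sketch, in which the `refine` callable and the candidate grid are hypothetical stand-ins for the full CNS+HINT refinement:

```python
def optimize_hint_weight(refine, weights=(0.0, 0.5, 1.0, 2.0, 5.0)):
    """Select w_HINT by the criterion in the text: run a complete
    refinement at each candidate weight and keep the model producing the
    lowest R_free.  `refine` is a hypothetical callable mapping
    w_hint -> (model, r_free); the candidate grid is an assumption.
    Returns (w_hint, model, r_free) for the optimal structure."""
    best = None
    for w in weights:
        model, r_free = refine(w)
        if best is None or r_free < best[2]:
            best = (w, model, r_free)
    return best
```

Because each candidate weight requires a full refinement, the grid is kept coarse; a finer search around the initial optimum could be layered on top.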
Assessing the model structures by superposition on the high resolution targets
The “gold standard” of structural quality is probably the fit of low-resolution structural models to the experimental high-resolution reference structure. Ideally, low-resolution models should superimpose perfectly on the high-resolution target, at least for residues with well-defined electron density (i.e., those buried or involved in lattice interactions). In Figure 3 , we present a histogram of heavy atom root mean squared deviations (RMSDs) for structures refined using the three protocols (inset, C α RMSDs). It is clear that none of the protocols yields refined structures that superpose perfectly on the high-resolution target and that structural deviations increase as resolution decreases. However, for the lowest resolution structures, structures refined with CNS+HINT are, at least moderately (0.18 Å at ≥4.0 Å), closer to the high-resolution target than structures refined with the other two protocols.
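The heavy-atom RMSDs in Figure 3 presuppose an optimal superposition of each model on its reference, conventionally computed with the Kabsch algorithm. A self-contained sketch for matched coordinate sets:

```python
import numpy as np

def superpose_rmsd(P, Q):
    """All-atom RMSD after optimal rigid-body (Kabsch) superposition of
    model P onto reference Q; both are (N, 3) arrays of matched atoms."""
    Pc = P - P.mean(axis=0)                  # remove translations
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                            # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                       # optimal rotation
    diff = Pc @ R.T - Qc
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```

Applying this per structure (with waters excluded, as noted below) reproduces the kind of comparison plotted in Figure 3.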
Although this 15% improvement in RMSDs between structures refined with CNS+HINT and native CNS at the lowest simulated resolutions is smaller than we might have hoped, it is nonetheless significant, and there are a number of factors that may inflate the observed deviations from the high-resolution target. First, our re-refinements were fully automated so that every structure was treated identically, with no manual intervention. Second, surface residues (not involved in lattice contacts) have poorly resolved electron densities and are not refined well, but their atoms are included in the RMSD calculations. Third, as the assignment of waters to density can be somewhat arbitrary, we have not considered any crystallographic waters. As resolution decreases, the number of waters observed decreases quite dramatically [22] . Compared to the large number (average: 235) of waters observed in the 25 reference targets, few (if any) waters would have been observed in the lower of the resolution ranges we explored. This compromises the quality of polar interaction networks for all models (regardless of protocol) since buried waters usually participate in direct or bridging hydrogen-bonding interactions and can thus affect the atomic positions of their partner atoms. Fourth, we have included explicit protons, which are required by the HINT scoring function, that were not present in any of the deposited structures. Finally, instead of the simple quadratic nonbonded term typically used in CNS crystallographic refinements, we have used the Lennard-Jones 6–12 term. Refining the deposited coordinates in the absence of hydropathic or electrostatic terms, with explicit protons and without waters, results in structures with all heavy atom RMSDs of 0.25 Å from the deposited structures, which effectively sets a floor value for RMSD comparisons.
It is instructive, however, to put these RMSDs in perspective by comparison to atomic positional uncertainties, e.g., Cruickshank's Diffraction-component Precision Index (DPI) [23] values, that are also depicted in Figure 3 . Clearly, RMSDs for all low-resolution cases are well within the uncertainties suggested by the DPI.
Assessing the quality of refined structures using knowledge-based metrics
Structural quality can also be assessed by knowledge-based metrics that “rank” a structure with respect to others. Model quality, as reported by indices like the Ramachandran score or MolProbity [24] clashscore, has been shown to worsen with decreasing resolution. Histograms for Ramachandran scores ( Figure 4A ) and clashscores (a measure of the number of unusually short interatomic distances in a structure, Figure 4B ) report the same trend: while inclusion of electrostatics alone has only a modest impact, the inclusion of the HINT representation of non-covalent interactions results in much more significant improvements in structure quality. The HINT potential, which is based on pairwise non-covalent interactions, has no “intrinsic knowledge” of preferred peptide backbone angles, yet with its inclusion the CNS+HINT models have a significantly higher fraction (13% larger for resolution ≥4.0 Å) of residues in favored regions of the Ramachandran plot. Clashscores ( Figure 4B ) show an even more dramatic (51% at ≥4.0 Å) improvement for the CNS+HINT structures. In addition, since the clashscores for the native and CNS+electrostatics refined structures are virtually identical, the anomalously low electrostatic energies and increased HINT scores (relative to reference) for CNS+electrostatics models ( Figures 1A and 1B ) are, in part, an artifact of abnormally short interatomic distances between polar atoms. In contrast, the better clashscores from CNS+HINT refinement strongly suggest that this protocol results in better-defined interaction networks.
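For orientation, the MolProbity clashscore counts serious steric overlaps (van der Waals interpenetration of 0.4 Å or more) per 1000 atoms. The simplified pairwise-distance version below conveys the idea; MolProbity itself operates on an all-hydrogen model with contact-dot analysis, so this is a rough stand-in rather than the actual algorithm:

```python
import numpy as np

def clashscore(xyz, vdw_radii, overlap_cutoff=0.4, bonded=None):
    """Simplified clashscore: number of non-bonded atom pairs whose van
    der Waals overlap, (r_i + r_j) - d_ij, meets or exceeds 0.4 A,
    normalized per 1000 atoms.  `bonded` is an optional set of (i, j)
    index pairs (i < j) to exclude from the count."""
    bonded = bonded or set()
    n = len(xyz)
    clashes = 0
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in bonded:
                continue
            d = np.linalg.norm(xyz[i] - xyz[j])
            if (vdw_radii[i] + vdw_radii[j]) - d >= overlap_cutoff:
                clashes += 1
    return 1000.0 * clashes / n
```

Lower values indicate fewer abnormally short contacts, which is the sense in which the CNS+HINT models in Figure 4B are better packed.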
Deconstructing the effect of the HINT term
Neither the native CNS nor the CNS+electrostatics protocols were able to maintain normalized HINT score or electrostatic energy as model resolutions decreased. Figures 5A and 5B reprise these graphs for models refined with the CNS+HINT protocol. In both cases, this protocol produces more native-like behavior (normalized value close to 1) throughout the range of resolutions. It is interesting that the relatively crude HINT “electrostatics” [11] , largely based on experimental solvent partitioning of small organic molecules, perform measurably better than the CNS partial charge-based Coulombic electrostatics. This is probably because the HINT atomistic parameters are not solely electrostatic, but are scalar quantities that in principle encode all physicochemical interactions in biological media. It is revealing to deconstruct the HINT score into two components (see Figure 5C ): polar, where hydrogen bonds and acid-base interactions have positive scores, while acid-acid and base-base have negative scores; and hydrophobic, where hydrophobic-hydrophobic is positive and hydrophobic-polar is negative. The HINT polar component score is similar for both the CNS+electrostatic and CNS+HINT structures, which suggests that either protocol adequately models these networks. The hydrophobic component shows remarkably consistent values with minimal scatter, but is, overall, seemingly small: only 5% on average of the total score. However, this is actually the balanced sum of favorable and unfavorable terms, whose values are much larger – about +60% and −55%, respectively, of the total score (see Figure 6B ). (The corresponding plot for the HINT polar term is shown in Figure 6A .) The structural integrity of these models, as evidenced above, highlights the importance of hydrophobic networks and shows that the HINT term effectively describes these networks.
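This deconstruction into polar and hydrophobic components reduces to bookkeeping over per-pair scores. The pair-classification scheme below is an illustrative assumption consistent with the sign conventions stated above (acid-base favorable, like-polar unfavorable, hydrophobic-hydrophobic favorable, hydrophobic-polar unfavorable):

```python
def partition_hint_score(pairs):
    """Split per-pair HINT scores into polar and hydrophobic components.
    `pairs` is an assumed list of (class_i, class_j, score) tuples with
    classes 'hydrophobic', 'acid' or 'base'.  Any pair involving a
    hydrophobic atom contributes to the hydrophobic component; pairs
    between polar atoms contribute to the polar component.  The
    favorable/unfavorable split of the hydrophobic component (the
    +60%/-55% balance discussed in the text) is also returned."""
    polar = hydro = hydro_fav = hydro_unfav = 0.0
    for ci, cj, s in pairs:
        if 'hydrophobic' in (ci, cj):
            hydro += s
            if s > 0:
                hydro_fav += s
            else:
                hydro_unfav += s
        else:
            polar += s
    return polar, hydro, hydro_fav, hydro_unfav
```

Summing the two components (plus the Lennard-Jones contribution) recovers the total HINT score, so the decomposition is exact by construction.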
Understanding degradation of low-resolution structural models
Although the largest deviations between low(er) resolution models and the target are generally, as expected, in solvent-exposed regions, some significant structural differences, particularly for sidechain orientations, can be found elsewhere. To further explore these differences and to better understand how interaction networks are compromised in low-resolution models, we are focusing here on three structures: 1WPA [25] , 1OI7 [26] and 1RL0 [27] , which are the most polar, most hydrophobic and intermediate hydrophobicity/polarity, respectively, of the 25 structures in this study. Table 1 summarizes structural and quality metrics for these structures as refined at their highest and lowest simulated resolutions. Additional data for these and the other 22 structures are available in Supporting Information ( Tables S1 and S2 ). For all three of these structures, Cα RMSDs between structures refined with CNS+HINT and the deposited structure (see Figure 7 ) were between 0.5 and 0.6 Å, i.e., about 0.1 Å lower than the RMSDs for those structures refined with native CNS. For the most hydrophobic protein, 1OI7, this RMSD dropped with CNS+HINT from 0.62 to 0.50 Å.
While the backbone structures are very similar, even at low-resolution, sidechain orientations are not nearly as well-conserved. Many of the largest deviations are seen for flexible residues that are exposed to bulk solvent. However, sidechain orientations for buried hydrophobic residues in low-resolution models can also differ significantly from those in the target. Two examples are shown in Figure 8A and 8B , which are superpositions centered on residues Phe187 from 1OI7 (4.07 Å) and Leu67 from 1RL0 (4.31 Å), respectively. Inclusion of the HINT term, which explicitly encodes hydrophobic interactions, produces a Phe187 sidechain orientation that is much more similar to that observed in the deposited structure. These structural differences can be traced to differences in the underlying hydrophobic networks. This approach, however, is not a panacea that guarantees preserving the orientation of hydrophobic sidechains in low-resolution models: the orientation of the Leu67 sidechain in models refined with both native CNS and CNS+HINT differs significantly from the deposited structure. Generally, and regardless of refinement protocol, sidechain orientations are largely preserved in models at resolutions of 3.0 Å and better. As resolution degrades, there is less conservation; although some, particularly non-polar, sidechain conformations are retained to lower resolutions with CNS+HINT (see Figures S1 and S2 ).
Refinement of “authentic” low-resolution datasets
As a final test, we have re-refined three datasets with resolutions between 3.5 and 4.0 Å: 3GEC [28] (4.00 Å), 1ISR [29] (4.00 Å) and 1SA0 [30] (3.58 Å). The results are summarized in Table 2 , where the deposited, native CNS and CNS+HINT models are compared. αβ-tubulin (1SA0), in particular, is a high-profile drug target in which we [16] and others [31] , [32] have an interest, but whose study has been held back by the rather featureless colchicine binding site, a problem exacerbated by its relatively poor crystallographic resolution. Re-refinement of the deposited tubulin-colchicine structure resulted in a model with a Ramachandran score about 5% better, and a Clashscore 17% better, than the deposited structure. The R and R free values are higher for our model than for that deposited, but one likely cause is that REFMAC [33] , with per-domain TLS (Translation/Libration/Screw) refinement [34] , was used in the original refinement of this particular structure. It has been noted previously that reproducing reported R values for low-resolution structures can be problematic [8] .
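For context on the R and R free values discussed above: both follow the standard crystallographic residual over structure-factor amplitudes, with R free evaluated only over reflections held out from refinement. A minimal sketch assuming only the textbook definition, not the exact CNS or REFMAC residual (which also includes bulk-solvent and, here, TLS contributions); the amplitude arrays are hypothetical:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """Crystallographic R = sum(| |Fobs| - k|Fcalc| |) / sum(|Fobs|),
    with k the least-squares scale between observed and calculated
    structure-factor amplitudes. R_free uses the same formula,
    evaluated only over the cross-validation (test) reflections."""
    f_obs = np.abs(np.asarray(f_obs, dtype=float))
    f_calc = np.abs(np.asarray(f_calc, dtype=float))
    k = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)   # amplitude scale factor
    return float(np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs))
```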
Figure 9A shows a superposition of the Cα backbones for refined models of an αβ-tubulin heterodimer, while Figure 9B focuses on the region of the bound colchicine. Cα RMSDs for both the native CNS and CNS+HINT re-refined structures are ∼0.7 Å with respect to the deposited structure – similar to RMSD values ( Table S1 ) between CNS+HINT refined low-resolution models and their high-resolution references. However, re-refinement of the tubulin structure produced some quite significant (∼2.8 Å) local deviations in Cα positions (with concomitant differences in sidechain positions) compared to the deposited structure, of which, intriguingly, the largest are localized near the colchicine binding site. We are currently exploring these new tubulin models as docking targets.
Conclusions
We have implemented a new X-ray data refinement protocol based on CNS that relies on HINT, an empirical hydropathic forcefield, to enforce both polar and hydrophobic interaction networks for low-resolution data. Models obtained with this approach appear to have more native-like interaction networks at resolutions approaching 5 Å, as analyzed with various quality metrics, than conventionally-refined models. As currently implemented, our protocol is only applicable for protein or polynucleotide atoms in a dictionary; thus, all other atoms from ligands, water or other heterogens are refined with the default CNS protocol. Extensions to address these issues, which will likely yield even higher quality models, are currently under development. It has very recently been reported [35] that the parameterization of CNS version 1.3 yields improved low-resolution structures; we are exploring integration of our protocol to this new program.
Author Contributions

Conceived and designed the experiments: VNK JNS GEK. Performed the experiments: VNK JNS GEK. Analyzed the data: VNK JNS GEK. Wrote the paper: VNK JNS GEK.
Background
The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high-resolution is assured by the data itself. However, at low-resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery.
Methodology
An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology.
Results
The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models whose Cα positions are 0.18 Å closer (by RMSD) to the reference high-resolution structure, whose Ramachandran scores are improved by 13%, and whose clashscores are improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool.

Acknowledgments

The structure factor data for 1SA0 was kindly provided by Dr. Marcel Knossow (CNRS-LEBS, Gif-sur-Yvette).

Citation: PLoS One. 2011 Jan 5; 6(1):e15920 (CC BY).
PMC3016399 (PMID: 21246044)

Introduction
Coral reefs are biologically diverse ecosystems providing habitat for a wide range of marine organisms. The growth of corals and their ability to form the calcium carbonate substrate reflects their endosymbioses with photosynthetic dinoflagellates belonging to the genus Symbiodinium [1] . Nine divergent lineages, clades A–I, have been described in Symbiodinium based on nuclear ribosomal DNA (rDNA) and chloroplast 23S rDNA [2] with each clade containing multiple genetic varieties often resolved using the internal transcribed spacer (ITS) regions [e.g. [3] – [6] ].
Symbiodinium diversity is partitioned by a variety of factors including biogeographical barriers, host species, colony depth, irradiance, and host symbiont transmission strategy [7] – [10] . Biogeographic patterns in Symbiodinium are evident between reefs in different oceans (Pacific versus Atlantic) [9] , among reefs within an ocean (e.g. Pacific reefs in Japan and the Great Barrier Reef Australia) [11] , [12] , and from reefs across a latitudinal gradient (e.g. eastern Australia coastline) [12] , [13] . The same coral species from inshore and offshore reefs within the same reef complex (e.g. in the central Great Barrier Reef or in Panama) can also associate with different Symbiodinium [12] , [14] , as can colonies of the same species from the same reef environment [5] , [10] , [14] , [15] . Fidelity in the association between some coral species and Symbiodinium has led to a degree of co-evolution resulting in host-symbiont specificity [9] , [16] . For example, the ITS2 Symbiodinium genotype C42 associates with Pocillopora and C31 with Montipora [9] . Attributed to differences in irradiance, Symbiodinium in corals such as Montastraea spp. and Madracis pharensis in Panama [8] , [17] and Pocillopora damicornis in the Great Barrier Reef [18] partition as a function of depth and/or location on individual colonies [8] . Host symbiont acquisition strategy also affects Symbiodinium assemblages, with hosts that acquire their symbionts from the environment (horizontal symbiont acquisition strategy) primarily associating with a similar pool of symbionts, and those that acquire their symbionts from the parent colony (vertical symbiont acquisition strategy) harboring their own unique suite of symbionts specific to a host genus [9] , [10] .
Understanding the factors that affect distribution and specificity patterns in coral-dinoflagellate symbioses and the physiological range of host-symbiont combinations is important for understanding how corals will respond to environmental change. In this regard, functional variability in isolated Symbiodinium types and specific coral- Symbiodinium symbioses have been correlated with numerous factors. Variation in the photophysiology of Symbiodinium [17] , [19] , [20] , growth rate of coral colonies [21] , symbiont carbon fixation and translocation to the host [22] , [23] , symbiont thermal tolerance [24] , and host disease susceptibility [22] all provide evidence for range thresholds in physiological performance of different host-symbiont assemblages as a response to the environment. As coral bleaching and disease are predicted to impact coral reef ecosystems in the future and have recently increased in severity and occurrence [25] , [26] , the different host-symbiont combinations that can occur and the environmental tolerance of those symbioses will provide the framework for predicting future shifts in coral reef communities.
The number of unique Symbiodinium that reside in individual coral hosts is an area of ongoing debate [27] , [28] . Heterogeneous mixtures of Symbiodinium have been identified in a variety of host species e.g. [7] , [8] , [15] , [18] , and more sensitive molecular techniques such as quantitative real time PCR have enabled the detection of Symbiodinium clades in low abundance [29] – [31] . However, the number of Symbiodinium species and their occurrence among marine hosts remains a central issue that is highly relevant to our understanding of the capacity of coral-algal symbiosis and reef ecosystems to adapt with changes in the environment [32] . The nuclear internal transcribed spacer region 2 (ITS2) is currently most often utilized to resolve Symbiodinium diversity within the phylogenetic clades A–I e.g. [2] , [5] , [12] , [16] , [18] , [33] , and is being promoted as a species level marker [9] , [30] , [34] . However, the multi-copy nature and intra-genomic variability of the ITS2 [35] , [36] often results in the isolation of more than one ITS2 sequence type from an individual Symbiodinium cell, and this interpretational complexity combined with low genetic divergence among ITS2 sequences [e.g. 9] makes the application of this marker in species assignment problematic [16] , [37] .
In order to further investigate the partitioning of Symbiodinium in corals and the utility of the ITS2 marker in describing Symbiodinium diversity, we set out to investigate the Symbiodinium communities in colonies of Montipora capitata at similar depths over a spatial scale of meters to kilometers in Kāne'ohe Bay, O'ahu Island, Hawai'i. As M. capitata exhibits vertical transmission of its symbionts, we also set out to examine whether patterns of Symbiodinium ITS2 diversity map onto the M. capitata atpsβ and nad5 genotypes. The data reveal that Symbiodinium ITS2 diversity is different among colonies of M. capitata and does not reflect host genotype. These data highlight both the complexity of the Symbiodinium ITS2 sequence diversity in corals, and are used as a framework to discuss the problems inherent in using this marker to designate species in the genus Symbiodinium . | Methods
Ethics Statement
This study was conducted under the research guidelines of the University of Hawaii Executive Policy E5.211 and corals collected under the State of Hawaii Special Activity Permit number 2007-02 issued to the Hawaii Institute of Marine Biology.
Sample collection and sites
The sampling for this study was conducted in June 2007 in Kāne'ohe Bay, on the island of O'ahu. 52 colonies of Montipora capitata (brown branching morph) were sampled at a depth of 1–2 m, from the same relative position on each colony (upper region) using a hammer and chisel, at nine sites (Sites 1–9; Figure 1 ), three nested within each of three regions of the bay that lie on a northerly environmental gradient from nearshore to offshore. Region 1 was located near the Kāne'ohe Stream mouth (Sites 1–3), Region 2 in the centre of the bay (Sites 4–6), and Region 3 near the outer barrier reef (Sites 7–9). Latitudinal and longitudinal coordinates for Sites 1–9 are 21.24.902N and 157.46.826W, 21.25.271N and 157.47.255W, 21.25.574N and 157.47.336W, 21.26.039N and 157.47.497W, 21.26.200N and 157.47.518W, 21.26.265N and 157.47.440W, 21.27.026N and 157.47.585W, 21.26.992N and 157.47.762W, and 21.27.112N and 157.47.820W, respectively. Six M. capitata colonies were sampled from each of Sites 1–9. Two samples from Site 9 failed to amplify in PCR, reducing the sample number at that site to four.
DNA extraction
For extraction of nucleic acids, the coral fragments (≈5 mm² of tissue from verrucae and surrounding corallites including entire polyps) were removed from each colony and stored at 4°C in 400 μl of DNA extraction buffer [50% (w/v) guanidinium isothiocyanate; 50 mM Tris pH 7.6; 10 μM EDTA; 4.2% (w/v) sarkosyl; 2.1% (v/v) β-mercaptoethanol] at the time of collection, until processed (up to 2 weeks). The coral samples in DNA extraction buffer were then incubated at 72°C for 10 min and centrifuged at 16,000 g for 5 min. The supernatant was mixed with an equal volume of 100% isopropanol to precipitate the DNA and chilled at −20°C overnight. The precipitated DNA was pelleted by centrifugation at 16,000 g for 15 min, and washed in 70% ethanol before resuspension and storage in Tris buffer (0.1 M, pH 8).
PCR, cloning, and sequencing of Symbiodinium
The Symbiodinium partial 5.8S, ITS2, and partial 28S region was amplified in PCR using the forward its-dino ( 5′ GTGAATTGCAGAACTCCGTG 3′ ) and reverse its2rev2 ( 5′ CCTCCGCTTACTTATATGCTT 3′ ) primers [38] . The products of these amplifications are referred to from here as Symbiodinium ITS2 sequences. Each 25 μl PCR reaction contained 1 μl of DNA template, 2.5 μl of 10x ImmoBuffer, 0.1 μl IMMOLASE™ Hot-Start DNA Polymerase (Bioline, MA), 3 mM MgCl 2 , 0.5 μl of 10 mM total dNTPs (2.5 mM each), 5 pmol each primer, and deionized sterile water to volume. PCR was performed on a BioRad iCycler™ using the following conditions: 95°C for 7 min, followed by 35 cycles of 45 s at 95°C, 45 s at 52°C, and 45 s at 72°C, with a final extension at 72°C for 5 min. PCR amplicons were purified using the QIAquick® PCR Purification Kit (Qiagen, CA), ligated into the pGEM®-T Easy vector (Promega, WI), transformed into α-select gold efficiency competent cells (Bioline, MA), and grown overnight on selective LB media (ampicillin 50 μg/ml, 0.1 mM IPTG, 50 μg/ml X-gal). Positive clones were grown overnight in Circlegrow® (MP Biomedicals, CA) and plasmids purified using the Perfectprep® Plasmid Isolation Kit (Eppendorf, Hamburg). Clones from PCR products (3 clones from 1 coral colony, 5 clones from each of 36 coral colonies, 6 from each of 13 coral colonies, and 7 from each of 2 coral colonies) were sequenced in both directions using BigDye Terminators (PerkinElmer, MA) on an ABI-3100 automated sequencer at the University of Hawai'i. Additional clones were sequenced from two colonies sampled from each region (six colonies in total, 35–55 clones per colony). Sequences were inspected, aligned, and edited using MacVector® 8.0.2 software. Symbiodinium ITS2 sequences used for downstream analyses were edited as described in Stat et al. [16] . For all analyses, Symbiodinium ITS2 was categorized by clade (C or D) [34] , ITS2 secondary structure (folding), and ITS2 sequence.
The secondary structures of all ITS2 sequences were estimated using 4SALE and the ITS2 database website [39] – [42] , using published Symbiodinium ITS2 structures as templates [16] , [36] , [43] .
PCR and sequencing of Montipora capitata genes
To determine whether Symbiodinium ITS2 composition is a factor of host lineage, the host Montipora capitata colonies were genotyped using both the mitochondrial NADH dehydrogenase 5 ( nad5 ) intron and the nuclear ATP synthetase subunit beta ( atpsβ ) intron. M. capitata nad5 was amplified with primer pair ND51a ( NAD5_700F : 5′ YTGCCGGATGCYATGGAG 3′ and NAD1_157R : 5′ GGGGAYCCTCATRTKCCTCG 3′ ) as outlined in Concepcion et al. [44] , and atpsβ was amplified with a primer pair redesigned from Jarman et al. [45] to be specific for M. capitata (F: 5′ TGATTGTGTCTGGTGTAATCAGC 3′ and R: 5′ CGGGCACGGGCGCCGGGGGGTTCGTTCAT 3′ ) [46] . For both markers, each 25 μl PCR contained 1 μl of DNA template, 2.5 μl of 10x ImmoBuffer, 0.1 μl IMMOLASE™ Hot-Start DNA Polymerase (Bioline Inc.), 3 mM MgCl 2 , 0.5 μl of 10 mM total dNTPs (2.5 mM each), 13 pmol each primer, and deionized sterile water to volume. PCR amplification was performed on a BioRad iCycler™ as follows: 95°C for 7 min, followed by 35 cycles at 95°C for 30 s, 53°C for 30 s, 72°C for 30 s, and a final extension at 72°C for 10 min. All successfully amplified PCR products were “cleaned” with 0.75 units of Exonuclease I and 0.5 units of Shrimp Alkaline Phosphatase (Exo:SAP) per 7.5 μl PCR product at 37°C for 60 min, followed by deactivation at 80°C for 10 min, prior to being cycle-sequenced in both directions using Big Dye Terminators (Applied Biosystems) and run on an ABI-3130XL automated DNA sequencer. atpsβ alignments were confirmed by eye and trimmed to 252 bp. Since computational phasing of diploid nuclear loci can be more accurate than cloning in separating alleles from heterozygous individuals [47] , gametic phases for atpsβ were inferred using Phase [48] , [49] as implemented in DnaSP [50] .
Statistical parsimony networks
Statistical parsimony networks of Symbiodinium ITS2 sequences were constructed using the software TCS 1.21 [51] . The cladogram estimation was performed under a 95% connection limit, and gaps were treated as a 5th state, with the alignment edited so that each indel was considered a single mutation.
Analysis of spatial partitioning in Symbiodinium and Montipora
We set out to determine the spatial scale(s) at which Montipora capitata and Symbiodinium composition partition across Kāne'ohe Bay: meters (Coral Colony), 10's of meters (Site), and 100's to 1000's of meters (Region). Due to the sampling design, Sites are nested within Regions, denoted as Site(R), and M. capitata colonies are nested within Sites, denoted as Colony(S(R)). We used the PERMANOVA+ 1.0.2 software add-on for PRIMER 6 [52] to run three-level hierarchical analyses of molecular variance (AMOVA) [53] to test for spatial structuring. PERMANOVA+ was run using Type I sums of squares and unrestricted permutation of raw data, and significance was determined by permutation test (10,000 permutations) of the pseudo-F statistic. Post hoc pairwise comparisons were conducted among Regions, Sites, and Colonies using an alpha of 0.05 while controlling the family-wise false discovery rate at or below 0.05 [54] . Φ statistics (analogous to Wright's [55] F-statistics) were calculated from the PERMANOVA+ output following Excoffier et al. [53] . Φ ranges from 0 to 1, where 0 indicates that genetic composition among samples is identical and 1 indicates that at least one sample is completely differentiated and fixed for a single unique genetic sequence or type. We used PERMANOVA+ because the standard AMOVA software, Arlequin 3.1 [56] , cannot run analyses on data sets with more than two hierarchical spatial levels with non-diploid data. PERMANOVA+ was not developed with AMOVA in mind; consequently, some calculations were required prior to and following the analysis. Prior to analysis, the AMOVA matrices of genetic distance were generated in Arlequin 3.1, the square root of each distance was taken, and the matrices were imported to PERMANOVA+. For Symbiodinium ITS2 and M. capitata atpsβ sequences, the simple pairwise genetic distance was used.
For Symbiodinium ITS2 secondary structure, the average simple pairwise genetic distance among sequences coding for each folding group was used. For Symbiodinium ITS2 clades, because sequence divergence has no impact on the analysis of two categories (clade C or D), the only possible distances were zero or one.
AMOVA uses certain statistical terms and notations that carry accepted biological meanings based on loci with either two bi-parentally inherited alleles or one maternally inherited haplotype per individual. Symbiodinium ITS2 is a multi-copy, intra-genomically variable marker, and we are drawing sequences from multiple individuals of Symbiodinium ; our data therefore do not meet these assumptions, and we account for this in interpreting the AMOVA. We thereby negate any traditional biological inferences, such as the inbreeding coefficient Φ IS , that are calculated when each sequence represents a single haplotype or one of two alleles [55] . The lowest level of inference that can be made here for Symbiodinium is the variation in ITS2 sequences within Colonies(S(R)) relative to the variation among Colonies(S(R)), denoted as Φ C(S(R)) . Φ C(S(R)) carries biological meaning, just not that of Φ IS . In the interest of clarity, we similarly avoid other standard AMOVA notation laden with biological implications, such as Φ CT , Φ SC , and Φ ST [53] , in order to focus on the statistical inference of AMOVA in ITS2. If there is a significant difference in the ITS2 composition detected by the AMOVA, this implies that the Symbiodinium assemblages are partitioned, regardless of the actual number of individuals represented.
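For readers unfamiliar with the pseudo-F statistic referred to above, a one-way (non-nested) version of the PERMANOVA permutation test can be sketched directly from a distance matrix. This is a simplified stand-in for the three-level nested design actually run in PERMANOVA+, following Anderson's one-way sums-of-squares partition; the distance matrix and group labels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_f(D, labels):
    """One-way PERMANOVA pseudo-F from a distance matrix D (n x n):
    SS_total = sum of squared inter-point distances / n,
    SS_within pooled over groups, SS_among obtained by difference."""
    labels = np.asarray(labels)
    n = len(labels)
    groups = np.unique(labels)
    a = len(groups)
    iu = np.triu_indices(n, k=1)
    ss_total = np.sum(D[iu] ** 2) / n
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = D[np.ix_(idx, idx)]
        su = np.triu_indices(len(idx), k=1)
        ss_within += np.sum(sub[su] ** 2) / len(idx)
    ss_among = ss_total - ss_within
    return (ss_among / (a - 1)) / (ss_within / (n - a))

def permanova_p(D, labels, n_perm=999):
    """Permutation P-value: fraction of label shuffles whose pseudo-F
    is >= the observed value (observed ordering counted as one permutation)."""
    labels = np.asarray(labels)
    f_obs = pseudo_f(D, labels)
    hits = 1                                   # include the observed ordering
    for _ in range(n_perm):
        if pseudo_f(D, rng.permutation(labels)) >= f_obs:
            hits += 1
    return f_obs, hits / (n_perm + 1)
```

Squaring inside the partition is why the paper takes the square root of the Arlequin genetic distances before importing them into PERMANOVA+: the program works on squared distances internally.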
Diversity Indices
“True diversity”, D , [57] was calculated using the Shannon and Weaver [58] diversity index, H′ , as follows:

\(H' = -\sum_{i=1}^{s} p_i \ln p_i\), with \(D = e^{H'}\),

where \(p_i\) is the proportion of ITS2 sequence i out of s sequences in the sample. True diversity represents the effective number of elements, which in this case is the effective number of ITS2 sequences [57] . Coverage estimates of clone libraries were calculated using the equation

\(C = \left[1 - (n/N)\right] \times 100\),

where n is the number of unique Symbiodinium ITS2 sequences and N is the total number of clones sequenced from the library [59] . Rarefaction analyses [60] , [61] were performed using Analytic Rarefaction v2 [62] .
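The two indices can be sketched directly from a list of clone sequence types. Note that reading "unique" in the coverage formula as sequence types recovered exactly once (singletons, as in Good's estimator) is our interpretation of the text, not something it states explicitly; the input library below is hypothetical:

```python
import math
from collections import Counter

def true_diversity(seqs):
    """Shannon index H' = -sum(p_i ln p_i) over ITS2 sequence types,
    returned as "true diversity" D = exp(H'), the effective number of types."""
    counts = Counter(seqs)
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return math.exp(h)

def coverage(seqs):
    """Good-type coverage C = 1 - n/N, where n is the number of sequence
    types recovered only once (singletons) and N the total clones sequenced."""
    counts = Counter(seqs)
    singletons = sum(1 for c in counts.values() if c == 1)
    return 1.0 - singletons / len(seqs)
```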
Symbiodinium identified in Montipora capitata from Kāne'ohe Bay
A total of 275 Symbiodinium ITS2 sequences belonging to clades C and D were recovered from the colonies of M. capitata . Seventeen different Symbiodinium ITS2 sequences were identified; 14 in clade C and 3 in clade D ( Table 1 ). In addition to the previously published ITS2 sequences C3, C17, C17.2, C21, C31, D1, and D1a [5] , [16] , [33] , [34] , [63] , nine novel clade C sequences and one novel clade D sequence were recovered (C3.14, C21.6, C21.11, C21.16, C31.1, C31.5, C31.6, C31.9, C31.10; and D1.6, accession numbers HQ630872-HQ630881). Statistical parsimony analysis resolved single networks for Symbiodinium ITS2 sequences in clade C and D ( Figure 2 ). Conformational changes to the ITS2 secondary structures occur within stems I and II for sequences in clade C and in stem II for sequences in clade D ( Figure 2 , Figure S1 ). Five putative ITS2 folding structures were identified for sequences in clade C; Group A contains C3 and C3.14, Group B contains C17, C21, C21.6, C21.11, and C21.16, Group C contains C17.2, Group D contains C31.9 and C31.10, and Group E contains C31, C31.1, C31.5, C31.6 ( Figure 2 ). Two folding structures were identified in clade D; Group F contains D1a, and Group G contains D1 and D1.6.
Spatial structure and diversity of Symbiodinium in Kāne'ohe Bay
We set out to determine if there is any partitioning of Symbiodinium composition at the nested scales of Region, Site(R), and host Coral Colony(S(R)) using AMOVA. In most analyses, data organized by clade, secondary structure group, or ITS2 sequence gave concordant results ( Table 2 ), therefore we present the analyses of ITS2 sequences and note when differences occurred in secondary structure and clade analyses from here forward. Spatial partitioning of Symbiodinium ITS2 sequence composition was detected at the scales of Site(R) (P<0.01) and Colony(S(R)) (P<0.01; Table 2 ). The greatest structuring in ITS2 composition occurred among Coral Colonies(S(R)) (Φ C(S(R)) = 0.87), as opposed to Sites(R) (Φ S(R) = 0.27). Because there was no spatial structure in ITS2 by Region, Regions were pooled for post hoc pairwise comparisons of ITS2 among all Sites and Colonies(S). Zero of 36 pairwise comparisons among Sites and 42 of 126 comparisons among Colonies(S) indicated statistically significant differences in Symbiodinium ITS2 sequence composition when controlling the family-wise false discovery rate, but there was no apparent spatial pattern to these differences. Among pairwise comparisons of Colonies(S), grouping the sequences by clade resulted in the detection of fewer statistically significant differences (33 of 42) than when grouping by secondary structure (42 of 42).
As results from the hierarchical AMOVA indicate that the majority of the spatial structure in Symbiodinium ITS2 composition within M. capitata in Kāne'ohe Bay occurs at the scale of Coral Colony, we sequenced additional clones from two colonies haphazardly selected from each Region (6 colonies with a total of 35–55 clones per colony) to further explore inter-colony Symbiodinium sequence diversity. Symbiodinium from clade C was recovered from four colonies, clade D from one colony, and clades C and D from one colony ( Figure 3 ). The number of sequence types recovered from each colony varied from two in Colony 1 to nine in Colony 9. The “true diversity” of Symbiodinium ITS2 within each colony was also variable (Colony 1: D = 1.9; 9: D = 5; 25: D = 2.2; 31: D = 2.6; 44: D = 5.3; 49: D = 1.9). AMOVA-based pairwise comparisons of ITS2 sequences in the six colonies indicate that the clone libraries from each colony are different from one another, with the exception of those from Colonies 1 and 25 ( Table 3 ). Despite the fact that all clones from Colonies 9, 31, 44, and 49 are from clade C, they represent unique non-random distributions of Symbiodinium ITS2 sequences. The coverage estimates indicated that the obtained sequences covered a high percentage of the diversity in each clone library ( C = 94%, 83%, 94%, 94%, 84% and 95% for Colonies 1, 9, 25, 31, 44, and 49 respectively), and are supported by rarefaction curves reaching an asymptote for libraries from four colonies (1, 25, 31, 49), and approaching an asymptote for the remaining two (9, 44; Figure 4 ). For Colonies 9 and 44, additional sequencing would have recovered minimal additional diversity that would not have affected the result.
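The rarefaction curves just described follow from a closed-form expectation rather than resampling: the expected number of distinct sequence types among m clones drawn without replacement from the library. A sketch assuming the standard Hurlbert/Heck analytic formulation (the approach implemented by tools such as Analytic Rarefaction); the input library is hypothetical:

```python
from math import comb
from collections import Counter

def rarefaction(seqs, m):
    """Expected number of distinct sequence types in a random subsample of
    m clones drawn without replacement: E[S_m] = sum_i [1 - C(N - N_i, m)/C(N, m)],
    where N_i is the count of type i and N the library size.
    (math.comb returns 0 when m exceeds N - N_i, as required.)"""
    counts = Counter(seqs).values()
    N = sum(counts)
    return sum(1 - comb(N - c, m) / comb(N, m) for c in counts)
```

Evaluating this for m = 1 … N traces one rarefaction curve per clone library; a curve that flattens as m approaches N corresponds to the asymptotic behavior described for Colonies 1, 25, 31, and 49.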
Therefore, given that 1) the hierarchical AMOVA indicated Coral Colony as the level at which most variation in Symbiodinium ITS2 sequence composition occurs, and 2) pairwise comparisons of the six colonies with increased clone sampling indicate variation in ITS2 composition between colonies, we conclude that the Symbiodinium assemblage in Montipora capitata from Kāne'ohe Bay is mostly partitioned at the level of Coral Colony.
The Symbiodinium ITS2 composition in Montipora capitata in Kāne'ohe Bay from all colonies (3–7 clones from 52 colonies) compared to the six colonies with additional clones (35–55 clones from 6 colonies) was assessed to determine whether a similar sequence diversity (not distribution) could be recovered using these two approaches. Of the 17 Symbiodinium ITS2 sequences identified in M. capitata from Kāne'ohe Bay, 13 were recovered from the six colonies with increased clone sequencing ( Figure 5 ). The four that were not identified (C21.6, C21.16, C31.6, and D1.6) represent rare or low frequency in the grouped sequences. The true diversity of Symbiodinium ITS2 sequences was the same for all colonies sampled in the Bay grouped and the six colonies grouped ( D = 7.2). A high coverage of sequences from the clone libraries pooled for the two groupings was achieved ( C = 93% and 95% for all colonies and six colonies respectively) and is further supported by rarefaction analyses ( Figure 6 ). There was also no significant difference in the Symbiodinium ITS2 sequence composition between the groups using AMOVA (Φ = −0.07, P = 0.487). These data suggest that the total Symbiodinium sequence diversity (not distribution) present in shallow water M. capitata in Kāne'ohe Bay can be recovered with either sequencing a few clones from many coral colonies or by sequencing a large number of clones from a few coral colonies.
Spatial Structure of Montipora capitata in Kāne'ohe Bay
All corals sampled in this study had the same host nad5 haplotype, which was identical to accession DQ351257 of Montipora capitata from NCBI [44] . Because there was no sequence variation among samples, this marker is not discussed any further.
Four polymorphic sites with no indels in the region aligned for atpsβ accounted for 11 unique alleles (Genbank accession numbers HQ630861-HQ630871) and 23 unique single-locus genotypes among our coral host samples (host genotype A–W, Table 1 ). We set out to determine if there is any partitioning of Montipora capitata atpsβ composition at the nested scales of Region, Site(R), and Colony(S (R)) using AMOVA. As we expected, there was no partitioning of M. capitata by Region (Φ CT = 0.01, P = 0.34) or Site(Region) (Φ SC = 0.04, P = 0.21). There was, however, a significant difference among Colonies(S(R)) (Φ IS = 0.46, P<0.001).
Structure of Symbiodinium by Montipora capitata genotype
We tested whether Symbiodinium composition is related to the host coral genotype using AMOVA based on host genotypes represented in more than one colony (11 genotypes, 40 colonies). There is no indication that Symbiodinium ITS2 sequence composition is related to M. capitata 's atpsβ genotype (Φ = −0.14, P = 0.91). | Discussion
Spatial partitioning of Symbiodinium in Montipora capitata across Kāne'ohe Bay
The absence of Symbiodinium community structure in Montipora capitata among Regions in Kāne'ohe Bay contrasts with the partitioning of Symbiodinium in corals between oceans, reefs at different latitudes, inner and outer lagoonal environments, and on a single reef as a function of depth [e.g. [5] , [7] , [12] , [13] , [14] , [17] , [18] ]. Differences between sites within Kāne'ohe Bay in the Symbiodinium community of M. capitata were evident primarily as a dominance of either clade C or D (colonies at Sites 2 and 5 contained more clade D than other sites). Garren et al. [14] reported that an increase in clade D Symbiodinium abundance in the Montastraea annularis species complex on Panamanian reefs was attributed to increased levels of suspended solids present in inner lagoonal environments relative to the outer lagoonal environment where clade C was dominant. Some symbionts in clade D Symbiodinium appear to be associated with corals that are exposed to “stressful” environmental conditions (e.g. elevated sea surface temperature and increased sedimentation) [64] , [65] . Similarly here, Site 2 is close to the outlet of the Kāne'ohe Bay Stream and has low salinity (Palmer et al. unpubl. data), which may represent a stressful environment for corals at this site. However, Symbiodinium clade D was also more abundant than other clades at site 5, which is situated approximately 3 km from the stream outlet where there is no indication of environmental stressors (temperature, salinity, sedimentation) that are harmful to corals (Palmer et al. unpubl. data). Even though the presence of Symbiodinium clade D is mostly attributed to factors causing a more stressful environment, its occurrence may not be strictly correlated with such factors as has been shown over regional scales with temperature anomalies [66] . 
Also, the scale at which Symbiodinium diversity is recorded, and the spatial scale at which environmental factors are measured, may influence the outcome of studies investigating correlations between clade D Symbiodinium and stressful environments.
Spatial partitioning of Symbiodinium diversity in M. capitata across Kāne'ohe Bay was most evident at the level of Colony(S(R)). It is noteworthy that here, one coral sample was collected from a uniform location on each coral colony to allow for comparison of Symbiodinium assemblages among coral colonies. This strategy was adopted to minimize the sampling impact on the 52 coral colonies and to make the analytical work feasible in terms of cost and effort. However, it is possible that samples taken from multiple locations on the same coral colony might resolve spatial heterogeneity of Symbiodinium assemblages within Montipora capitata colonies, as has been demonstrated in Montastraea spp. from the Caribbean [7], [67]. Although very few studies examining Symbiodinium diversity in corals consider this issue, the complexity of the Symbiodinium ITS2 assemblages resolved here suggests that it would be a valuable subject for future studies. That said, inter-colony variation in Symbiodinium within the same host species has been observed over broad geographic scales (e.g. different latitudes and oceans) [9], [11], and as a function of depth on the same reef [e.g. [17], [18]]. Similarly, variation in Symbiodinium within the same host species within the same reef environment has been shown for a few host species [e.g. 15]. However, it has previously been reported that shallow-water M. capitata (brown morph) around O'ahu engaged in a highly specific symbiosis with Symbiodinium ITS2 C31 [33]. Similarly here, ITS2 C31 was recovered from M. capitata colonies with the highest frequency across all Regions (Figure 3b), confirming the prevalence of Symbiodinium containing this ITS2 sequence. An unexpectedly high diversity of other Symbiodinium ITS2 sequences was also retrieved from M. capitata (brown morph) here, including C3, C17, C21, D1, and D1a, with some colonies containing four sub-clade C ITS2 sequences.
It is important to note that these ITS2 sequences have previously been described as representing ecologically dominant endosymbionts of corals (i.e. occupying a distinct ecological niche, whether specificity to a host species or a biogeographic region, and hence interpreted as different species) based on fingerprint profiles of amplified Symbiodinium ITS2 using denaturing gradient gel electrophoresis (DGGE) from colonies sampled in nature [5], [9], [34], [63]. This high number of potential endosymbiont “species” within individual coral colonies previously reported to contain a single specific endosymbiont “species” highlights the fact that additional sampling and/or the application of different analytical methods can significantly influence the interpretation of the taxonomic nature and composition of Symbiodinium diversity in individual coral colonies and species. In this context, a greater understanding of the spatial scale at which Symbiodinium ITS2 sequences vary (among and within colonies, and among polyps from the same colony), and of the extent of intra-genomic variation in individual Symbiodinium cells, is needed.
The forces driving differences in Symbiodinium assemblages among the M. capitata colonies described here are unknown, but likely reflect some combination of host-symbiont specificity, environmental, and stochastic processes [68] . Although no evidence of specificity between Symbiodinium ITS2 and host mitochondrial NADH dehydrogenase 5′ intron ( nad5 ) and nuclear ATP synthetase subunit beta intron ( atpsβ ) genotypes was detected, it is possible that alternate host (or Symbiodinium ) markers with different taxonomic resolution might reveal a correlation between host genotype and their endosymbiont communities.
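The host-genotype test above is, at its core, a comparison of symbiont-community dissimilarity within versus among host genotypes, assessed by permuting genotype labels. The sketch below illustrates that permutation logic with invented colony data and a Jaccard distance; it is a simplified stand-in for, not a reproduction of, the AMOVA actually used.

```python
import random

# Hypothetical ITS2 sequence sets per colony, grouped by host genotype.
colonies = {
    "c1": {"C31", "C17"}, "c2": {"C31"},        # genotype A
    "c3": {"C31", "C3"},  "c4": {"D1", "D1a"},  # genotype B
}
genotype = {"c1": "A", "c2": "A", "c3": "B", "c4": "B"}
names = sorted(colonies)

def jaccard(a, b):
    """Jaccard distance between two sets of ITS2 sequences."""
    return 1 - len(a & b) / len(a | b)

def stat(labels):
    """Mean among-genotype distance minus mean within-genotype distance."""
    within, among = [], []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            d = jaccard(colonies[x], colonies[y])
            (within if labels[x] == labels[y] else among).append(d)
    return sum(among) / len(among) - sum(within) / len(within)

observed = stat(genotype)
rng = random.Random(0)
n_perm = 999
hits = sum(
    stat(dict(zip(names, rng.sample(list(genotype.values()), len(names))))) >= observed
    for _ in range(n_perm)
)
p = (hits + 1) / (n_perm + 1)
print("observed difference %.3f, permutation P = %.3f" % (observed, p))
```

With this toy data the permutation P is large, i.e. no detectable genotype effect; a real analysis would use the full sequence-abundance data within an AMOVA framework.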
Interpreting Symbiodinium diversity using ITS2
Identifying heterogeneous Symbiodinium communities is relatively easy at the cladal level because the high level of genetic variation that exists between lineages allows their presence (in high or low abundance) to be determined using sensitive molecular techniques such as quantitative real-time PCR [e.g. [29]–[31]]. However, defining the number of sub-cladal Symbiodinium types present in heterogeneous endosymbiotic communities using a marker like ITS2 is not as straightforward. ITS2 is a multi-copy marker that is intra-genomically variable within Symbiodinium [35], [36]. In an attempt to overcome these issues, the dominance of an ITS2 sequence amplified in PCR and the accompanying DGGE fingerprint is currently used to describe the Symbiodinium type present in a sample and to delineate species within the genus [e.g. [5], [9], [12], [30], [33], [63], [69]]. This methodology and interpretation emphasizes the dominance of a sequence in a sample and disregards low-abundance sequences (<5–10% in abundance) as unimportant intra-genomic variants [18], [29], [70]. However, in addition to the dominant sequence type C31, many of the M. capitata colonies in this study associated with multiple Symbiodinium ITS2 sequences that have previously been described as ecologically dominant and representative of independent biological entities (i.e. species). The most extreme examples of this are M. capitata colonies 9 and 44 (Table 1, Figure 3), which harbored Symbiodinium ITS2 C3, C17, C21, C31, and other novel types that collectively encompass almost all of the ITS2 secondary structures recovered here.
As the statistical parsimony network of clade C Symbiodinium depicts a step-wise evolution from the ancestral clade C sequence, ITS2 C3 [9], to the most derived, C31, and as the rDNA is multi-copy and variable within a Symbiodinium genome [35], [36], there are three possible biological interpretations of the sequence diversity recovered here, lying at the extremes and at some point along the continuum from intra-genomic to inter-genomic diversity. The first is that every sequence recovered represents an individual Symbiodinium cell type or species (i.e. the highest Symbiodinium diversity possible). The second is that the corals contain a single Symbiodinium cell type, one species containing intra-genomic variants that encompass all the sequence diversity recovered (C3 to C31; i.e. the lowest Symbiodinium diversity possible). The third, and in our opinion the most likely, is some combination of possibilities 1 and 2. With the data in hand, it is impossible to distinguish which of these scenarios explains the Symbiodinium sequence diversity in M. capitata reported here. We can say, however, that because the Symbiodinium ITS2 sequence composition among colonies is variable, the Symbiodinium communities in these corals are different. The problems of interpreting exactly what the endosymbiotic ITS2 sequence data from an individual coral mean in terms of species diversity are well illustrated by the recently nominated species Symbiodinium trenchi and Symbiodinium glynni [30], [69]. The species Symbiodinium trenchi is identified using the ITS2 D1a DGGE fingerprint; however, this fingerprint always contains a band that corresponds to the D1 sequence. The D1 sequence can occur independently of D1a, and when D1a is absent, the D1 DGGE fingerprint is used to define the species Symbiodinium glynni. A study by Thornhill et al. [36], however, clearly demonstrates that the D1 and D1a sequences are intra-genomic variants in an isoclonal cell line.
Therefore, when the D1a ITS2 DGGE fingerprint (with its companion D1 sequence) is detected in an endosymbiotic sample, it is impossible to distinguish whether these sequences represent intra-genomic variants of one cell type, or co-occurring populations of two Symbiodinium species, S. trenchi and S. glynni . Thus, the use of ITS2 sequences that are known to be intra-genomic variants to delineate different species is problematic when assessing the diversity of species in endosymbiotic Symbiodinium communities in corals.
That said, defining cryptic Symbiodinium types and their prevalence is fundamentally important when considering endosymbiont shifting/shuffling in corals as a response to changes in the environment [32], [64], [71]. One solution to the problems encountered in interpreting ITS2 diversity in environmental samples (i.e. host organisms) of Symbiodinium is to develop and apply a new marker(s) with a level of resolution similar to that of ITS2, but exhibiting a one-to-one relationship between sequence type and an individual Symbiodinium cell. In our opinion, the power of applying DGGE of Symbiodinium ITS2 to coral endosymbionts lies in comparing fingerprint patterns among samples to determine whether or not the signatures are the same or different, an approach widely used in the field of microbial ecology. However, the properties of ITS2 as a marker clearly make it a suboptimal choice for species assignment in Symbiodinium.
Endemicity and distribution ranges of Symbiodinium types have mostly been inferred using ITS2 in studies that generally sample 1–2 colonies per host species [e.g. [5], [9], [12], [63]]. The utility of small host sample sizes is that they enable a “snapshot” of Symbiodinium diversity from various host species across numerous reef environments. However, replicate sampling of host species on reefs previously targeted in “snapshot” Symbiodinium diversity studies often reveals diversity that was missed among endosymbiont communities within a host. For example, Pocillopora damicornis, Stylophora pistillata, Acropora palifera and Goniastrea favulus have all been shown to associate with a higher diversity of Symbiodinium than originally perceived around Heron Island on the Great Barrier Reef [5], [10], [18], as was Porites lobata in Hawai'i [72], and Montastraea franksi and Siderastrea siderea in the Caribbean [73]. Similarly, a Symbiodinium ITS2 sequence previously considered to be Caribbean-specific was reported from Acropora at Johnston Atoll in the central Pacific [16]. Symbiodinium ITS2 C17 and C21 had not previously been reported from marine invertebrate hosts in Hawai'i [33], yet both were recovered here from increased sampling of one host species, at a single depth, from a single bay. As such, some of the generalized biogeographic and host-specificity patterns of Symbiodinium may simply reflect a gross under-sampling of endosymbiont communities in marine invertebrates [9]. The higher Symbiodinium diversity and among-colony endosymbiont variation shown here and in the studies described above show that some of the biogeographic patterns in Symbiodinium distribution and host specificity do not hold with increased sampling effort.
As such, a much greater depth of sampling on a global scale will be required to accurately describe radiation within the genus, understand host specificity and the environmental thresholds of symbioses, and define biogeographic patterns in Symbiodinium diversity.
Sampling strategy to recover Symbiodinium diversity
The high sequence diversity of Symbiodinium reported here from colonies of Montipora capitata was recovered both by screening a small number of clones from a large number of colonies and, inversely, by screening a large number of clones from a small number of colonies. When additional parameters are included in the experimental design (e.g. sampling, depth, multiple hosts, a larger biogeographic region), a greater number of colonies will need to be investigated. We also show that there is no standard number of Symbiodinium ITS2 clones that must be sequenced from all clone libraries to accurately assess endosymbiont diversity in M. capitata colonies. For some colonies (e.g. Colonies 1, 25, 31 and 49; Figure 4), Symbiodinium ITS2 diversity can be captured with <10 clone sequences, while for others (e.g. Colonies 9 and 44) a higher number of clones must be sequenced to obtain an accurate estimate of endosymbiont ITS2 diversity. Similarly, Stat et al. [16] showed that only Symbiodinium ITS2 C15 was recovered from Porites lobata at Johnston Atoll, while a higher sequence diversity (2–7 sequences) was recovered from other coral species at the same location. Therefore, the number of coral colonies analyzed and the number of clones sequenced per colony will need to be tailored to each study and will reflect some combination of the host species investigated and the environment from which the coral was sampled.
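The clone-sampling question above can be explored by rarefaction: repeatedly subsample each clone library at increasing depths and watch when the count of distinct ITS2 sequences plateaus. The two libraries below are invented to mimic a low-diversity colony and a high-diversity one; this is an illustrative sketch, not the paper's data.

```python
import random

# Invented clone libraries (20 clones each) mimicking a low- and a
# high-diversity colony.
libraries = {
    "low_diversity": ["C31"] * 18 + ["C31_variant"] * 2,
    "high_diversity": ["C31"] * 8
    + ["C3", "C3", "C17", "C17", "C21", "D1", "D1a",
       "novel1", "novel2", "novel3", "novel4", "novel5"],
}

def rarefy(clones, depth, n_draws=500, rng=random.Random(1)):
    """Mean number of distinct sequences seen in `depth` sampled clones."""
    total = 0
    for _ in range(n_draws):
        total += len(set(rng.sample(clones, depth)))
    return total / n_draws

for name, clones in libraries.items():
    curve = [round(rarefy(clones, d), 2) for d in (5, 10, 15, 20)]
    print(name, curve)  # richness plateaus early for the low-diversity colony
```

Plotting such curves per colony gives a principled stopping rule for how many clones to sequence, rather than a fixed number applied to every library.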
Conclusion
Symbiodinium ITS2 sequence assemblages found in M. capitata are variable among individual colonies. The driving force behind these differences is unknown, but likely reflects some combination of host-symbiont specificity, environmental, and stochastic processes. The multi-copy nature and known variability of ITS2 within individual Symbiodinium cells (intra-genomic) make it impossible to distinguish how many independent biological entities these sequence assemblages represent. However, the intricacy of this dataset highlights both the complexity of coral Symbiodinium associations, and innate problems in interpreting ITS2 sequence types that question the assumptions and validity of using the ITS2 to delineate Symbiodinium species.
Conceived and designed the experiments: MS RDG XP. Performed the experiments: MS XP MT GC LJC LC DL. Analyzed the data: MS CB GC. Contributed reagents/materials/analysis tools: RDG RT. Wrote the paper: MS CB GC MT RT RDG.
Endosymbiotic dinoflagellates in the genus Symbiodinium are fundamentally important to the biology of scleractinian corals, as well as to a variety of other marine organisms. The genus Symbiodinium is genetically and functionally diverse, and the taxonomic nature of the union between Symbiodinium and corals is implicated as a key trait determining the environmental tolerance of the symbiosis. Surprisingly, the question of how Symbiodinium diversity partitions within a species across spatial scales of meters to kilometers has received little attention, but is important to understanding the intrinsic biological scope of a given coral population and adaptations to the local environment. Here we address this gap by describing the Symbiodinium ITS2 sequence assemblages recovered from colonies of the reef-building coral Montipora capitata sampled across Kāne'ohe Bay, Hawai'i. A total of 52 corals were sampled in a nested design of Coral Colony(Site(Region)) reflecting spatial scales of meters to kilometers. A diversity of Symbiodinium ITS2 sequences was recovered, with the majority of variance partitioning at the level of the Coral Colony. To confirm this result, the Symbiodinium ITS2 sequence diversity in six M. capitata colonies was analyzed in much greater depth, with 35 to 55 clones per colony. The ITS2 sequences and quantitative composition recovered from these colonies varied significantly, indicating that each coral hosted a different assemblage of Symbiodinium. The diversity of Symbiodinium ITS2 sequence assemblages retrieved from individual colonies of M. capitata here highlights the problems inherent in interpreting multi-copy and intra-genomically variable molecular markers, and serves as a context for discussing the utility and biological relevance of assigning species names based on Symbiodinium ITS2 genotyping.
This is Hawai'i Institute of Marine Biology contribution number 1426, publication number UNIHI-SEAGRANT-JC-08-41, and 2007 Pauley Summer Program Contribution number 5. Citation: PLoS One. 2011 Jan 5; 6(1):e15854.
PMC3016400 | 21172035 | Background
Hepatocellular carcinoma (HCC) is linked to the interaction between genetic, immunologic, environmental, dietary, and lifestyle factors. Its incidence and distribution vary widely by ethnic group, sex, and geographic region. Hepatocellular carcinoma is the third most common cause of cancer-related death worldwide, with about 600,000 patients dying from the disease annually [1]. Asian countries account for nearly 78% of the roughly 600,000 cases of HCC reported globally each year [2], and China alone accounts for more than 50% of the world's cases [3]. HBV and HCV infection, liver cirrhosis, male gender, and old age are important risk factors for HCC. The clustering of HCC within families raises the possibility that genetic factors are also involved in susceptibility to HCC.
The Major Histocompatibility Complex (MHC) plays a key role in antiviral and anti-tumor defense. Human leukocyte antigens (HLA) function in the regulation of the immune response to foreign antigens and in the discrimination of self from non-self antigens; they are encoded by a series of closely linked genetic loci on chromosome 6 [4, 5]. HLA polymorphism has been implicated in conferring genetic susceptibility to a large number of immune-mediated diseases, including some cancers. Given the pivotal role of HLA molecules in the immune system, several studies have investigated the association between specific HLA alleles and HCC. However, the reported associations between HLA-DRB1 alleles and HCC in different ethnic populations are controversial. Because many conflicting reports have been published to date, we performed a systematic review of all relevant studies in the literature to evaluate the association between HLA-DRB1 alleles and HCC. Our principal objective was to clarify which specific HLA-DRB1 alleles confer susceptibility to, or protect against, HCC.

Methods
Search strategy
Electronic databases (PubMed, EMBASE, the Cochrane Library and the China National Knowledge Infrastructure) were searched for all genetic association studies evaluating HLA-DRB1 polymorphism and HCC in humans, in all languages, up to January 2010. The search strategy was based on combinations of the terms: HLA-DRB1 AND (Hepatocellular carcinoma or HCC) AND (variants or polymorphism or alleles). We also performed a full manual search of the bibliographies of selected papers, and we contacted the authors of studies containing relevant information who did not report the results necessary for this analysis. Unpublished data were also accepted if an abstract was available and further information could be obtained from the author.
Selection criteria
In the meta-analysis, the following inclusion criteria were set and reviewed by two independent investigators: (1) each study is an independent case-control study; (2) the purpose and statistical methods of all studies are similar; (3) the study supplied enough information to calculate the odds ratio (OR); (4) HLA-DRB1 alleles were molecularly typed (at high or low resolution); (5) patients were included according to the diagnostic standard for HCC defined in 2002, based on at least one of the following criteria: classical histological characteristics, or a serum α-fetoprotein (AFP) level higher than 400 ng/ml together with radiological findings (ultrasound and/or CT) consistent with HCC [6]. A single study (Donaldson et al.), conducted before 2002, was included in the meta-analysis because its patient inclusion criteria were similar to this diagnostic standard. The following exclusion criteria were set: (1) incomplete raw data; (2) repetitive reports (if more than one version of the same study was retrieved, only the most recent was used); (3) materials and methods not well described or unreliable.
Although assessment of study quality is considered important for systematic reviews and meta-analyses, scoring methods have been considered problematic [ 7 ] and may not accurately assess the quality measures of interest [ 8 ]. Therefore, we used reliability of patient selection, molecular typing method, and statistical analysis method as quality variables.
The frequency of HLA-DRB1 alleles varies according to ethnic and racial background, with some alleles being extremely rare. Therefore, articles were not required to identify all alleles for inclusion.
Data extraction
The studies were independently evaluated by two researchers, and discrepancies in the evaluation of some studies were resolved by discussion between the reviewers. The following data were collected from each study: authors, publication year, journal, publication type and language, HLA genotyping method, alleles genotyped, allele frequencies, numbers of cases and controls, diagnostic criteria used for HCC, and descriptions of the HCC and control samples. Allelic frequency was calculated as the number of cases or controls harboring at least one allele type (HLA-DRB1) divided by the total number of chromosomes included in each of the corresponding groups.
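A minimal sketch of the allelic-frequency definition above (the counts are invented; each diploid subject contributes two chromosomes):

```python
def allele_frequency(carriers, n_subjects):
    """Subjects carrying at least one copy of the allele, divided by
    the total number of chromosomes (2 per diploid subject)."""
    return carriers / (2 * n_subjects)

print(allele_frequency(30, 100))  # 30 carriers among 100 subjects -> 0.15
```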
Statistical analysis
The literature review conformed to PRISMA statement standards, and our research met the minimum set of items for reporting in systematic reviews and meta-analyses (Additional file 1). Heterogeneity was assessed by means of Cochran's Q test (α = 0.05) and Higgins's I² statistic; I² values of 25%, 50% and 75% were taken as low, moderate, and high, respectively. If the Q test indicated no significant heterogeneity, the Mantel-Haenszel fixed-effect model (Peto method) was used to combine the data; if significant heterogeneity was present, the DerSimonian-Laird random-effects model (DL method) was used [9]. A pooled OR was presented as a standard plot with 95% confidence intervals (95% CIs). Meta-analysis was performed using fixed-effect or random-effect methods, depending on the absence or presence of significant heterogeneity. As a measure of association between HCC and HLA-DRB1 alleles, we combined odds ratios (ORs) with 95% confidence intervals (95% CIs), stratified by the gene subtype of patients and controls in a study. Funnel plots and Egger's regression asymmetry test were used to evaluate publication bias [10]. All P values presented are two-tailed. To reduce heterogeneity and to evaluate whether the genotype effect differed in predefined subgroups of studies, we performed subgroup analysis according to ethnicity. We performed a sensitivity analysis to assess the stability of the results by sequential omission of individual studies. The analyses were performed using RevMan 5.0, provided by the Cochrane Collaboration.

Results
Literature assessment
Figure 1 shows the flow chart of publications identified by the literature search. The search strategy allowed us to identify 84 studies for potential inclusion in the meta-analysis. Finally, 8 case-control studies relating to HLA-DRB1 allele polymorphism and susceptibility to HCC qualified on the basis of our selection criteria [11-18]. A total of 957 subjects were studied (341 patients and 616 controls). The main features of the studies included in the meta-analysis are shown in Table 1. Among the eight studies, five were conducted in Asian countries, two in European countries, and one in an African country. Mean or median age was not stated in 4/8 reports, and sex in 4/8. HIV status was determined in only one report [12].
HLA-DRB1 alleles were molecularly typed at high or low resolution. Five studies used low-resolution molecular typing for HLA, while three used high-resolution typing. Low-resolution molecular typing methods cannot identify specific alleles; accurate methods for HLA class II typing should involve a combination of PCR-SSOP, PCR-SSP, and PCR-SSCP [19].
Meta-analysis: Association between HLA-DRB1 alleles with HCC
A total of 13 HLA-DRB1 alleles were studied in the 8 case-control studies, but only 3 alleles (DRB1*07, DRB1*12, DRB1*15) were extracted from the studies to investigate their association with HCC, as these were reported in at least six of the eight case-control studies. Statistics calculated for each study are shown in the forest plots (Figures 2, 3 and 4).
In the meta-analysis, the overall frequency of the HLA-DRB1*07 allele was 20.1% (59 of 281) in HCC patients and 15.2% (71 of 466) in controls. The heterogeneity test indicated that the variation in trial-specific ORs was not statistically significant (χ² = 10.79, I² = 54%, P = 0.06). Under the fixed-effect model, the combined OR for the association of the HLA-DRB1*07 allele with the risk of HCC in the whole population was 1.65 (95% CI: 1.08-2.51; P = 0.02); under the random-effects model it was 1.77 (95% CI: 0.88-3.56; P = 0.11). In sensitivity analysis, the exclusion of individual studies did not change this significant result, except for exclusion of the studies by Pan et al. and El-Chennawi et al., which produced a non-significant association. Subgroup analysis by ethnicity showed that the HLA-DRB1*07 allele significantly increased the risk of hepatocellular carcinoma in Asians under the fixed-effect model (OR = 2.10, 95% CI: 1.06-4.14, P = 0.03).
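As an illustration of the pooling described in Methods, the sketch below uses inverse-variance weighting of log odds ratios — a common stand-in for the Mantel-Haenszel computation that RevMan performs — together with Cochran's Q and I². The 2×2 counts are invented, not the study data.

```python
import math

# (allele+ cases, allele- cases, allele+ controls, allele- controls);
# invented counts with no zero cells, so no continuity correction needed.
studies = [(12, 30, 10, 60), (20, 45, 15, 70), (8, 25, 9, 40)]

log_ors, weights = [], []
for a, b, c, d in studies:
    lor = math.log((a * d) / (b * c))   # per-study log odds ratio
    var = 1/a + 1/b + 1/c + 1/d         # approximate variance of the log OR
    log_ors.append(lor)
    weights.append(1 / var)             # inverse-variance weight

pooled = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Cochran's Q and Higgins's I^2 for heterogeneity
q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, log_ors))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print("pooled OR %.2f (95%% CI %.2f-%.2f), Q = %.2f, I2 = %.0f%%"
      % (math.exp(pooled), ci[0], ci[1], q, i2))
```

A random-effects (DerSimonian-Laird) analysis would additionally inflate each study's variance by a between-study component estimated from Q before re-weighting.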
The overall frequency of the HLA-DRB1*12 allele was 26.3% (74 of 281) in HCC patients and 16.1% (83 of 516) in controls. The heterogeneity test indicated that the variation in trial-specific ORs was not statistically significant (χ² = 9.00, I² = 44%, P = 0.11), so the fixed-effect method was used to combine the results. The combined OR for the association of the HLA-DRB1*12 allele with the risk of HCC in the whole population was 1.59 (95% CI: 1.09-2.32) and was statistically significant (P = 0.02). In sensitivity analysis, the exclusion of individual studies did not change this significant result, except for exclusion of the studies by Donaldson et al. and Sun et al., which produced a non-significant association. Subgroup analysis by ethnicity showed that the HLA-DRB1*12 allele significantly increased the risk of hepatocellular carcinoma in Asians (OR = 1.73, 95% CI: 1.17-2.57, P = 0.006).
A meta-analysis for the HLA-DRB1*15 allele was also carried out, but it showed no statistically significant association in the whole population. Subgroup analysis by ethnicity showed that the HLA-DRB1*15 allele significantly increased the risk of hepatocellular carcinoma in Asians under the fixed-effect model (OR = 2.88, 95% CI: 1.77-4.69, P < 0.001).
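The Egger's regression asymmetry test mentioned in Methods regresses the standardized effect (log OR divided by its standard error) on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry. A hedged sketch with invented effect sizes:

```python
# (log OR, standard error) per study -- illustrative values only
effects = [(0.50, 0.42), (0.46, 0.35), (0.90, 0.55),
           (0.35, 0.28), (1.20, 0.70)]

x = [1 / se for _, se in effects]       # precision
y = [lor / se for lor, se in effects]   # standardized effect

# Ordinary least-squares fit: y = intercept + slope * x
n = len(effects)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
print("Egger intercept = %.2f (values far from 0 suggest asymmetry)" % intercept)
```

In practice a t-test on the intercept (using its standard error from the regression) gives the P value reported alongside the funnel plot.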
Figures 5, 6 and 7 show the funnel plots used to detect publication bias for DRB1*07, DRB1*12 and DRB1*15, respectively. The shape of the funnel plots appeared asymmetrical, suggesting that publication bias might affect the findings of our meta-analysis.

Discussion
The aim of the present study was to ascertain whether specific HLA-DRB1 alleles are associated with the development of, or protection against, HCC. We analyzed the published studies investigating the association between HLA-DRB1 alleles and HCC; studies concerning this possible association have been undertaken since 1996 [20].
To our knowledge, this is the first published meta-analysis investigating this association. Our meta-analysis of eight studies revealed that DRB1*07 and DRB1*12 were significantly associated with the risk of HCC in the whole population (OR = 1.65, 95% CI: 1.08-2.51, P = 0.02 and OR = 1.59, 95% CI: 1.09-2.32, P = 0.02, respectively). Subgroup analysis by ethnicity showed that the DRB1*07 and DRB1*12 alleles also significantly increased the risk of hepatocellular carcinoma in Asians (OR = 2.10, 95% CI: 1.06-4.14, P = 0.03 and OR = 1.73, 95% CI: 1.17-2.57, P = 0.006, respectively). No significant association was established between the DRB1*15 allele and HCC in the whole population, but subgroup analysis by ethnicity showed that the DRB1*15 allele significantly increased the risk of hepatocellular carcinoma in Asians under the fixed-effect model (OR = 2.88, 95% CI: 1.77-4.69, P < 0.001).
Epidemiological surveys show that Asian countries account for nearly 78% of the hepatocellular carcinoma (HCC) cases reported globally each year, and hepatitis B virus (HBV) is the major etiology of HCC in these areas. Although HBV infection plays an important role in HCC, HBV infection alone is not sufficient for progression to HCC. Several lines of evidence suggest that cellular immune surveillance is important in the control of HBV infection and the development of HCC. In 2007, Yang and colleagues found that the frequency of HLA-DRB1*07 was markedly higher in the HBV-infected group among people in northwestern China (17.6% of HBV-infected patients vs 9.3% of spontaneously cleared controls, OR = 2.09, P < 0.05) [21]. In 2006, Zhang and colleagues found that the frequency of HLA-DRB1*12 was significantly higher in the HBV-persistent group than in the recovered group among Chinese subjects (0.230 versus 0.063, P = 0.004, OR = 2.09) [22]. In 2003, Amarapurpar and colleagues found a positive association of HLA-DRB1*15 with persistence of HBV among Indians (57.6% vs. 25%) [23]. Clearance of acute HBV infection is known to be associated with a vigorous CD4+ T-cell response focusing on the core protein; HLA class II glycoproteins present viral peptides to CD4+ T cells and thereby influence immune responses. Binding affinities of overlapping peptides covering the core and envelope proteins of HBV to glycoproteins encoded by several HLA-DRB1 alleles have been measured and compared with published peptide-specific CD4+ T-cell responses [24]. Based on our finding that the HLA-DRB1*07, DRB1*12 and DRB1*15 alleles significantly increased the risk of hepatocellular carcinoma in Asians, we hypothesize that these alleles may be key host factors determining the progression from HBV infection to HCC in Asians.
Furthermore, the importance of environmental factors and gene-environment interactions in the development of HCC should not be ignored and is only beginning to be delineated.
Moreover, HLA-DRB1 allele polymorphisms have been reported to be associated with other cancers and autoimmune diseases, including cervical squamous cell carcinoma, rheumatoid arthritis, systemic lupus erythematosus, autoimmune hepatitis, inflammatory bowel disease, multiple sclerosis and type 1 diabetes, and meta-analyses have been performed for these diseases [25-31].
Additionally, because the information used in our research was based on data from observational studies, some limitations of this meta-analysis should be discussed, and the results should be interpreted with caution. A primary cause of the differences in results among authors may be the great variability in the frequency of HLA alleles across populations; it is quite possible that one ethnic group carries specific alleles involved in the development of, or protection against, HCC that other ethnic groups do not. Many studies were conducted on relatively small samples, and an insufficient number of individuals might decrease the power to detect a difference in the distribution of the DRB1*07, DRB1*12 and DRB1*15 alleles between HCC patients and controls even though a true difference exists; a lack of association therefore does not necessarily mean that no association exists. Many studies did not control for the matching variables in the analysis, and these possible confounders could be among the potential causes of variation in the studies' estimates, as could the use of different types of control groups. Effects of interactions with other environmental/behavioral and/or viral factors may be inevitable, and a complex interplay between various genes, rather than a single allele, is likely to modulate the development of HCC. HLA genotyping techniques must also be taken into consideration because these methodologies have different sensitivities: HLA genotypes were mainly determined by PCR with probe hybridization, but targets were not always verified by sequencing, which could hinder an effective comparison between the studies and influence the combined results. Finally, the shape of the funnel plots appeared asymmetrical, suggesting that publication bias might affect the findings of our meta-analysis.
Furthermore, although we made every effort to identify all relevant studies published in peer-reviewed journals, it is possible that some escaped our attention.
In spite of these limitations, our meta-analysis also had some advantages. First, a substantial number of cases and controls were pooled from different studies, which significantly increased the statistical power of the analysis. Second, the quality of the case-control studies included in the current meta-analysis was satisfactory based on our selection criteria. Third, the patients were phenotypically homogeneous subjects, diagnosed either by the gold standard or by combined biochemical and imaging methods, which could reduce heterogeneity to some extent.

Conclusion
Our meta-analysis suggests that the HLA-DRB1*07 and DRB1*12 alleles are risk factors for HCC in the overall population, and especially in Asians; the DRB1*15 allele is associated with an increased risk of HCC only in Asians (under the fixed-effect model). However, it is necessary to conduct large trials using standardized, unbiased methods, homogeneous HCC patients and well matched controls, with the assessors blinded to the data. Moreover, gene-gene and gene-environment interactions should also be considered in the analysis. Studies taking these factors into account may eventually lead to a better, more comprehensive understanding of the association between HLA-DRB1 polymorphism and HCC.

Background
HLA-DRB1 allele polymorphisms have been reported to be associated with hepatocellular carcinoma susceptibility, but the results of these previous studies have been inconsistent. The purpose of the present study was to explore whether specific HLA-DRB1 alleles (DRB1*07, DRB1*12, DRB1*15) confer susceptibility to hepatocellular carcinoma.
Methods
Case-control studies on the association of HLA-DRB1 alleles with HCC published up to January 2010 were identified through a systematic review of the literature. The odds ratios (ORs) of HLA-DRB1 allele distributions in patients with hepatocellular carcinoma were analyzed against healthy controls. The meta-analysis software REVMAN 5.0 was used to investigate heterogeneity among individual studies and to summarize results across all studies. Meta-analysis was performed using fixed-effect or random-effect methods, depending on the absence or presence of significant heterogeneity.
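The fixed-effect arm of that procedure is straightforward to sketch: per-study log odds ratios are pooled by inverse-variance weighting, and Cochran's Q serves as the heterogeneity check that decides between the fixed- and random-effect models. The 2x2 allele counts below are hypothetical, not the counts from the eight included studies.

```python
import math

def log_or_and_se(a, b, c, d):
    """Per-study log odds ratio and its standard error (Woolf method).
    a/b = allele-positive/negative cases, c/d = allele-positive/negative controls."""
    return math.log((a * d) / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

def fixed_effect_pool(studies):
    """Inverse-variance fixed-effect pooling of log odds ratios,
    plus Cochran's Q statistic as a heterogeneity check."""
    lors, ses = zip(*(log_or_and_se(*s) for s in studies))
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, lors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    q = sum(w * (l - pooled)**2 for w, l in zip(weights, lors))
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci, q

# Hypothetical allele counts per study: (cases+, cases-, controls+, controls-)
studies = [(30, 70, 20, 80), (25, 75, 15, 85), (40, 60, 28, 72)]
or_pooled, (ci_lo, ci_hi), q = fixed_effect_pool(studies)
print(f"pooled OR = {or_pooled:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}, Q = {q:.2f}")
```

When Q is significant relative to a chi-squared distribution with (number of studies - 1) degrees of freedom, a random-effect model would be used instead.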
Results
Eight case-control studies were included in the final analysis. Among the 3 HLA-DRB1 alleles studied, DRB1*07 and DRB1*12 were significantly associated with the risk of HCC in the overall population (OR = 1.65, 95% CI: 1.08-2.51, P = 0.02 and OR = 1.59, 95% CI: 1.09-2.32, P = 0.02, respectively). No significant association was established for the DRB1*15 allele with HCC in the overall population. Subgroup analysis by ethnicity showed that the DRB1*07, DRB1*12 and DRB1*15 alleles significantly increased the risk of hepatocellular carcinoma in Asians (OR = 2.10, 95% CI: 1.06-4.14, P = 0.03; OR = 1.73, 95% CI: 1.17-2.57, P = 0.006 and OR = 2.88, 95% CI: 1.77-4.69, P < 0.001, respectively).
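Results reported in this form are internally checkable: on the log scale, a 95% CI implies a standard error, which together with the OR yields a z statistic and a two-sided P value. A sketch of that back-calculation, applied to the figures quoted above:

```python
import math

def p_from_or_ci(odds_ratio, ci_low, ci_high):
    """Recover the two-sided P value implied by an odds ratio and its 95% CI,
    using the normal approximation on the log scale."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = math.log(odds_ratio) / se
    # two-sided P from the standard normal: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# DRB1*07, overall population: OR = 1.65, 95% CI 1.08-2.51 -> reported P = 0.02
print(round(p_from_or_ci(1.65, 1.08, 2.51), 3))
```

The back-calculated values agree with the reported P values within rounding, which is a useful consistency check when reading a meta-analysis.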
Conclusion
These results support the hypothesis that specific HLA-DRB1 alleles might influence susceptibility to hepatocellular carcinoma. Large, well designed, multi-ethnic confirmatory studies are needed to determine the host genetic determinants of hepatocellular carcinoma.

Abbreviations
HLA: human leukocyte antigens; HCC: hepatocellular carcinoma; HBV: hepatitis B virus; OR: odds ratio; CI: confidence interval.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
ZHL and YNX carried out the design of this meta-analysis, conducted a searching, extracted data, analyzed the data and drafted the manuscript. SYX participated in study design and the critical revision of the manuscript. QJD, QW, SHZ, XJJ and YS participated in the critical revision of the manuscript. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-230X/10/145/prepub
Supplementary Material

Acknowledgements
This work was partially supported by grants from the Natural Science Foundation of Shandong Province, China (No. ZR2009CQ031).

Citation: BMC Gastroenterol. 2010 Dec 21; 10:145 (license: CC BY)
PMC3016401 (PMID: 21246045)

Introduction
Insertional mutagenesis is a modification of target DNA via incorporation of additional bases. Insertion of long DNA fragments naturally happens during retroviral infection and transposition of mobile elements. It is also a byproduct of some common techniques of genetic engineering and gene therapy. In the latter case, it has been implicated as the cause of therapy-associated malignancies [1] , and serendipitous activation of growth-promoting genes by insertion of a retroviral vector may be responsible for successful expansion of genetically engineered cells in gene therapy patients [2] .
Importantly, insertional mutagenesis could be used to generate pools of randomly genetically-altered cells or organisms for forward genetics applications. In this case, the mutants with a phenotype of interest are selected, and the genetic loci tagged by inserts in such mutants are further investigated as candidate regulators of the mutant phenotype. For example, the genes at the sites of retroviral insertions in mouse tumors are treated as likely oncogenes or tumor suppressors [3]. The yield of phenotypically-detectable mutants is greatly increased when the inserted fragment carries a strong promoter, which drives transcription of the adjacent host DNA, and such mutants could be distinguished from the spontaneous ones by virtue of their dependence on the promoter function [4]. Overall, insertional mutagenesis provides an efficient, cost-effective, unbiased and broadly applicable functional approach to the identification of regulators of various biological processes (discussed elsewhere [4], [5]).
In our prior work [4] , we relied on retroviral vectors for efficient delivery of a mutagenic regulated promoter cassette as a means of generating insertional mutants for gene discovery studies. This was achieved by placing the regulated promoter internally in a self-inactivating virus backbone. While the LTRs in such a vector are transcriptionally inactive following integration, they still retain the original polyadenylation site, which plays an important role in preserving the defined structure of the viral transcript. The presence of the polyadenylation site in the retroviral LTR creates an apparent problem for the production of fusion transcripts: one may expect the outbound transcript to be cut and polyadenylated prior to reaching the host DNA. Possible solutions to this problem include orienting the internal outbound promoter opposite to the retroviral LTRs [4] , which may create a problem during production of the virus; removing the polyadenylation signal from the LTRs [6] , which requires very extensive modification of the vector backbone and may facilitate additional structural changes during viral replication; or giving up the use of retroviral backbones in favor of other vectors, such as DNA-based transposons [7] , [8] .
However, there are some indications that the problem may be less serious. First, there is evidence that at least some retroviral LTRs permit a considerable amount of read-through transcription [9]. Second, there are reports that polyadenylation signals within introns can be co-transcriptionally removed, and thus rendered inactive, by the splicing machinery [10], [11]. Therefore, if transcription continues through the LTR, and the splice donor site ends up matched with an appropriate splice acceptor, then the polyadenylation signal of the LTR could be lost during splicing, and a stable hybrid transcript encompassing both the vector- and host-derived sequences could be formed. In the present report, we describe our experience with a vector design that relies on this phenomenon, and the effect that the elements of this construct have on genes at a considerable distance from the integration site.
Cell culture, drug treatment and viral transduction
All cells were cultured in Dulbecco's modified Eagle's medium with 10% FBS, and 1% penicillin and streptomycin at 37°C in the presence of 5% CO 2 . Zeocin and ganciclovir were used as described earlier [4] . Hygromycin (Roche) was used at 100 μg/ml.
All viruses were packaged in 293T cells (originally known as 293tsA1609neo [27]). Virus-containing media were collected at 48 h, filtered, supplemented with 5 μg/ml Polybrene, and applied to the target cells for an overnight incubation.
Prior to mutagenesis, HEK293ZeoTK cells were cultured in the presence of ganciclovir for about a week to remove any pre-existing mutants with activated NF-κB, and then infected with virus on a 6 cm plate. The medium was changed after 24 h of infection. The next day, the cells were replated onto a 15 cm plate, where they were treated with Zeocin for 12 days to select for the mutant clones.
Plasmids
The full-length RIPK1 expression construct and the plasmids expressing shRNA against RIPK1 and shRNA against p53 were as described [7]. pBabeHygro [28] was used as a vector backbone for expression of Cre recombinase as described [3]. The cDNA for the tetracycline-controlled transcription silencer (tTS), as well as the vector used for its expression (pLPCX), were from Clontech, Inc. All other constructs for full-length cDNA expression were purchased from Origene.
Reporter assay and Western Blotting
The NF-κB-dependent reporter plasmid (pE-selectin-luciferase) was transiently transfected using the Lipofectamine Plus method (Invitrogen), and luciferase activity was measured and normalized to that of a co-transfected constitutively active β-galactosidase expression vector, as described earlier [4].
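The normalization step amounts to dividing each well's luciferase signal by its β-galactosidase signal to correct for transfection efficiency, then expressing the result relative to a control well. The readings below are invented for illustration, not actual data from this study.

```python
def normalize_reporter(luciferase, beta_gal):
    """Divide each well's luciferase counts by its beta-galactosidase signal
    to correct for differences in transfection efficiency."""
    return [luc / bg for luc, bg in zip(luciferase, beta_gal)]

# Hypothetical raw readings for a control well and a mutant well
norm = normalize_reporter([12000, 46000], [800, 790])
fold_activation = norm[1] / norm[0]   # mutant NF-κB activity relative to control
print(f"fold activation: {fold_activation:.2f}")
```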
For the detection of RIPK1 protein, total cell lysates were prepared from 95% confluent plates and analyzed by Western blotting using anti-RIPK1 primary antibodies (BD Transduction Laboratories), goat anti-mouse secondary antibodies (Sigma) and SuperSignal chemiluminescent substrate (Thermo Scientific). Chemiluminescence was recorded using a FluoroChem HD2 camera (Alpha Innotech).
PCR
The hybrid mRNA fragments were revealed using nested ligation-mediated PCR. The cDNA prepared from individual infected clones was digested with MboI and ligated to an MboI adapter (constructed by annealing of GATCACCTGCCC and CTAATACGACTCACTATAGGGCTCGAGCGGCCGCCCGGGCAGGT to enable the PCR suppression effect [29] ). Subsequently, nested PCR was done using primers GTFwd1 ( ACTCGGATAATACGACGCACGAGA ) and AP1 ( CCATCCTAATACGACTCACTATAGGGC ) for the first stage, followed by the second stage using GTFwd2 ( AAGGCTCAGGAGAGGGCAGAGGAA ) and AP2 ( ACTCACTATAGGGCTCGAGCGGC ). The PCR products were separated on a gel, excised, and sequenced.
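The adapter design in this protocol can be sanity-checked in a few lines: MboI leaves 5′-GATC overhangs, so the short oligo should anneal to the 3′ end of the long oligo while leaving a GATC overhang available for ligation to the digested cDNA ends. The check below is a sketch using the two oligo sequences given above.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp[b] for b in reversed(seq))

short_oligo = "GATCACCTGCCC"
long_oligo = "CTAATACGACTCACTATAGGGCTCGAGCGGCCGCCCGGGCAGGT"

duplex = revcomp(short_oligo)   # region of the short oligo read in revcomp form
paired = duplex[:8]             # the 8 bp that anneal to the long oligo's 3' end
overhang = short_oligo[:4]      # 5' overhang left unpaired after annealing

assert long_oligo.endswith(paired)   # the two oligos can anneal as described
assert overhang == "GATC"            # complementary to the MboI 5'-GATC overhang
print("duplex:", paired, "overhang:", overhang)
```

The asymmetry of the adapter (long strand far longer than the annealed duplex) is what produces the PCR suppression effect cited in the text: fragments flanked by the adapter on both ends form self-annealing hairpins that amplify poorly.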
For insert mapping by inverse PCR, MspI-digested genomic DNA from the mutant clones was circularized by self-ligation, followed by nested PCR. The first round was conducted with the HTOmspS1 ( ACAGTGCAGGGAAAGAATAG ) and HTOmspAS1 ( ATGGTGAATTGATCCCATCTTG ) primers; the second with primers HTOmspS2 ( TTCGTTGGGAGTGAATTAG ) and HTOmspAS2 ( CAGGGGAAAGAATAGTAGAC ). The final PCR product was cloned into pCR8/GW/TOPOTA (Invitrogen) and sequenced.
RIPex1S ( CTTCCTGGAGAGTGCAGAAC ) and RIPex2AS2 ( CTCCATCACCAGGGAGTACTTC ) primers were used to assay RIPK1 mRNA expression by RT-PCR. A housekeeping gene (GAPDH) was used as an internal control (Forward: GGCTCTCCAGAACATCATCCCTGC , Reverse: GGGTGTCGCTGTTGAAGTCAGAGG ). | Results
HIV LTR is inefficient in preventing splicing of vector- and host-derived sequences
To test the prediction that successful pairing of an internal splice donor site with a host splice acceptor may result in removal of the LTR-encoded polyadenylation signal, we constructed a vector (designated pAIM) based on an HIV-1 backbone ( Figure 1 ) that should be suitable for reversible insertional mutagenesis. In the plasmid form, the 5′-LTR is promoter-competent, while the 3′-LTR is promoter-deficient. During the natural replication cycle, both LTRs become identical, carrying the inactivating deletion in the promoter. The presence of Cre recombinase recognition sites (loxP) in the LTR permits excision of the promoter cassette. An internal CMV promoter is collinear with the viral genome. It is followed by the coding region of Pontellina plumata GFP (copGFP) [12] and an unpaired splice donor site. The region of copGFP ends with a sequence that codes for the picornaviral 2A peptide. When encountering this sequence, the ribosome is known to proceed without forming a peptide bond; consequently, two physically separate peptides can be created from a single open reading frame [13]. If splicing is successful, host exons and, eventually, polyadenylation signals will be linked to the vector-encoded fragment, and a host-encoded peptide may be produced. CopGFP, expressed either from a vector-encoded or a hybrid transcript, is expected to mark all transduced cells whenever the CMV promoter is functional. The CMV promoter is also preceded by an array of tetracycline operators. This feature can be used to reversibly target the promoter for inactivation using tTS, a tetracycline-controlled transcription silencer constructed by fusing the tetracycline repressor protein (TetR) with a KRAB silencing domain (SD Kid-1 ) [14].
The vector was used to infect a culture of immortalized mouse embryonic fibroblasts at a low infection rate (<10%; judged by the incidence of copGFP-expressing cells), and individual clones were established. Ten clones that displayed green fluorescence were chosen for further analysis. Quantitative PCR on their genomic DNA confirmed single-copy integration (data not shown). We relied on nested ligation-mediated PCR on the cDNA from these clones to look for the presence of hybrid transcripts. Prior to adapter ligation, the cDNA was digested with frequently cutting enzyme MboI. Due to the presence of an MboI site in the vector between the splice donor and the 3′-LTR, the unspliced transcript is expected to produce a distinct band, while bands of variable sizes are expected from the spliced fusion transcripts. Importantly, out of the ten clones, at least four demonstrated prominent expression of putative fusion products.
The PCR products corresponding to the four putative fusion transcripts were sequenced, confirming the presence of both the host- and vector-derived sequences. Two of the fragments were unambiguously mapped to unique sequences in mouse genome and contained precise fusion of the vector sequences to exons of mouse TSC22D2 and LRBA genes ( Figure 2 ). The precise maintenance of the exon boundaries confirms splicing as the mechanism responsible for the production of the hybrid RNAs. We concluded that in our construct the combination of read-through transcription and splicing may alleviate the potential hurdle of polyadenylation site in the modified HIV-1 LTR.
Long-distance transactivation of RIPK1 gene by insertional mutagenesis
HEK293ZeoTK is a cell line derived from HEK293 that carries a Zeocin-resistance marker and the HSV-1 thymidine kinase gene as two transgenes, each under the control of an NF-κB-dependent E-selectin promoter [15]. The cells represent a convenient system for screening mutants with alterations in the NF-κB pathway: cells with active NF-κB can be selected in the presence of Zeocin, or can be selectively eliminated in the presence of ganciclovir. In this manner, numerous mutants with either increased or decreased activity of this pathway have been selected [6], [7], [8], [15], [16], [17]. Infection of these cells with the construct described in Figure 1 resulted in multiple Zeocin-resistant clones. As expected, the mutants had constitutively elevated activity of NF-κB, and this activity, as well as the pattern of Zeocin and ganciclovir resistance, was readily reverted when the inserted cassettes were removed by site-specific recombination or shut down by expression of the tet-repressor protein (see Figure 3A and Figure 3B for examples). These observations allowed us to attribute the mutant phenotypes to the activity of the inserted fragment.
Three of the mutant clones (2B2.1, 2B1.4 and 2B3) attracted our interest: in each of them, a single insert was located in the p25 region of chromosome 6. The three mutants behaved identically in all the tests. Interestingly, the RIPK1 gene, which codes for a known modulator of NF-κB signaling, also maps to 6p25, and mutations affecting RIPK1 were previously identified in similar screens [7]. However, the inserts in the current study were located considerably further upstream of any known RIPK1 exon. In fact, the current annotation of the human genome places several other genes in the close vicinity of the inserts ( Figure 4 ). Nevertheless, RIPK1 expression was elevated in the mutant clones at the level of RNA and protein ( Figure 5 ). In contrast, expression of the products of the neighboring genes, including those closest to the inserts, was undetectable or unchanged, and their forced overexpression in naïve cells failed to affect NF-κB activity ( Figure 6 and data not shown). Importantly, RIPK1 expression was reduced in the mutants upon physical removal or transcriptional shut-down of the inserted cassette. The latter observation indicates that the effect is due to the activity of the inserted fragment, rather than to the destruction of some negative control elements. Interestingly, the insert orientation differed between the mutants. Thus, it is highly unlikely that the mutant phenotype could be accounted for by a vector-derived transcript fusing with the RIPK1 sequences, and our attempts at detecting such a fusion transcript were unsuccessful (data not shown).
In order to confirm that NF-κB activation in these cells is indeed dependent on the overexpression of RIPK1, we measured the activity of NF-κB-dependent luciferase reporter in the presence of shRNA directed against RIPK1. As before [7] , in order to control for non-specific effects of RNA interference, we used an shRNA against p53, which is present, but inactivated in these cells. The example in Figure 7 demonstrates that interference with RIPK1, but not an unrelated protein (p53), reduced the activity of NF-κB in the mutants. Therefore, we concluded that the activity of the inserted cassette caused constitutive activation of NF-κB through up-regulation of RIPK1 expression without formation of a fusion transcript. | Discussion
Various structural and biological features of retroviruses make retroviral vectors useful for studying the properties of mammalian transcription (e.g. [18]). In the described experiments, we observed that, despite the presence of a polyadenylation signal, the modified LTR of HIV-1 fails to efficiently prevent read-through transcription and accumulation of the ensuing fusion transcripts. This is consistent with observations from other models, where splicing takes precedence over polyadenylation [10], [11]. It is worth noting that our estimate of the efficiency of this process (4 out of 10 cases) is a conservative one: detection of the hybrid product was contingent upon the proximity of an MboI site and the absence of secondary structures, which could potentially interfere with RT-PCR, in the hybrid RNA. Also, accumulation of the hybrid transcript, in addition to the removal of the LTR, might require trapping of a functional polyadenylation site in the downstream sequence.
Formation of hybrid RNAs via read-through is considered one of the major mechanisms that generate transducing retroviruses [19]. The classical scheme [19] proposes formation of a read-through transcript, which originates at the 5′-LTR, incorporates an entire viral genome, and terminates in the adjacent host sequences. Such a transcript may be incorporated into the viral particle (albeit inefficiently, because of its size) but, due to the presence of an LTR in the middle of the RNA, has to undergo extensive rearrangements during reverse transcription in order for a transducing virus to be formed. Our observations suggest that if internal sequences of a virus or a viral vector can act as splice donors, the 3′-LTR could be removed from such a transcript during splicing. The resulting RNA may retain the packaging signal, and could be converted into a transducing virus upon a single recombinational event during reverse transcription. This may be especially important for MLV-based vectors, which are often used without inactivation of the LTRs, and which may regain replication potential in the presence of a natural human retrovirus [20].
The fusion transcripts generated via splicing have an important difference from the ones generated via read-through alone: the host sequences are positioned much closer to the 5′-end of the RNA and, hence, are much more likely to be expressed, which could lead to undesirable consequences for experimental or therapeutic transduction. Consequently, the risk of a combination of read-through transcription and splicing has to be seriously considered when vectors are designed for such applications.
At this time we can only speculate about the molecular mechanism underlying the long-distance effect of the inserts on RIPK1 expression. Since the effect requires the function of the CMV fragment and is orientation-independent, it most resembles that of a classical enhancer. The question is how it can act at such distances without significantly affecting the genes between the insertion site and RIPK1 . One explanation is that it generates a widespread continuous signal (e.g. via chromatin modification) over hundreds of kilobases, but the promoters of the intermediate genes are for some reason unresponsive to that particular stimulus (e.g. they lack recognition sites for the factors that could sense it). Another explanation, which we find more plausible, is that the three-dimensional organization of chromosome 6 is such that the sites of integration are physically close to the beginning of the RIPK1 gene. This could occur via looping of a chromosome, a phenomenon that has been characterized in great detail, for example, in regard to the regulation of the proto-oncogene Myc [21], [22]. Interestingly, structural aberrations or retroviral insertion sites that are located tens to hundreds of kilobases away from the Myc gene are associated with elevated expression of this gene and are functionally equivalent to those occurring in direct proximity to that gene, but the possibility of their direct effect on Myc used to be dismissed based on the assumption that they are positioned too far away and that the effects of enhancers cannot “leapfrog” over long distances [23]. Of note, we failed to detect in the mutant cells expression of the putative shorter RIPK1 product (data not shown), which may originate from an internal promoter [7]. This is consistent with the notion that the upstream and the internal promoters are subject to distinct regulatory mechanisms.
Besides providing new insights into the transcriptional regulation of RIPK1 , our observations have important implications for the use of insertional mutagenesis in gene discovery. Insertional mutagenesis is typically conducted by insertion of a DNA fragment that carries a complete, potent viral promoter. A common approach is to map a large number of insertion sites in the DNA from a pool of cells enriched for the phenotype of interest, and then argue that certain genes are significant for this phenotype because they are more commonly targeted in this pool than would be predicted based on random integration. In most cases, publicly available lists of such genes are quite extensive and do not undergo individual validation [3]. An obvious drawback of this approach is that integration is known to be non-random, but there is no model to account for this bias and, especially, for possible variation in this bias as a function of the transcriptional state of individual loci and the physiological state of the cell. On top of this, our experience suggests that even bona fide commonly targeted sites may be deceptive: the genes truly responsible for the phenotype could be located at a considerable distance away from the nominal integration target. Unlike Myc, which is positioned in a “gene desert” and may be considered the closest known gene even to relatively distant inserts, the phenomenon presented in this report takes place in a segment of a chromosome that is packed with known or predicted genes. Thus, a purely bioinformatic approach would have misidentified the relevant gene. In view of this, we would like to argue that the veracity of predictions unsupported by experimental validation cannot be taken for granted.
We have previously constructed insertional mutagens [4] that relied on the use of a minimal promoter controlled by binding sites of the tetracycline transactivator protein (tTA), rather than an enhancer-containing fragment of CMV. Interestingly, the mutants generated by those constructs were always attributable to the production of the fusion transcript, rather than to an enhancer-like effect ( [4] and unpublished data). This observation gives hope that one may produce a more accurate gene-discovery system by carefully selecting the promoter that is included in the insertional mutagen.
Similar concerns are valid for inadvertent insertional mutagenesis during experimental and therapeutic transduction. Indeed, although some gene therapy patients have developed cancer following insertion of the therapeutic construct into or very close to LMO2 gene [24] , another case was associated with an insertion 35 kbps upstream of (and in an opposite orientation to) the same proto-oncogene [25] . In this regard, vectors devoid of enhancer elements [26] may offer an attractive avenue for the design of safer gene-delivery vehicles. | Conceived and designed the experiments: ESK AAC. Performed the experiments: RS XD AAC ESK. Analyzed the data: RS XD AAC ESK. Contributed reagents/materials/analysis tools: RS XD AAC ESK. Wrote the paper: RS ESK.
Background
Most common systems of genetic engineering of mammalian cells are associated with insertional mutagenesis of the modified cells. Insertional mutagenesis is also a popular approach to generate random alterations for gene discovery projects. A better understanding of the interaction of the structural elements within an insertional mutagen and the ability of such elements to influence host genes at various distances away from the insertion site is a matter of considerable practical importance.
Methodology/Principal Findings
We observed that, in the context of a lentiviral construct, a transcript, which is initiated at an internal CMV promoter/enhancer region and incorporates a splice donor site, is able to extend past a collinear viral LTR and trap exons of host genes, while the polyadenylation signal, which is naturally present in the LTR, is spliced out. Unexpectedly, when a vector, which utilizes this phenomenon, was used to produce mutants with elevated activity of NF-κB, we found mutants, which owed their phenotype to the effect of the insert on a gene located tens or even hundreds of kilobases away from the insertion site. This effect did not result from a CMV-driven transcript, but was sensitive to functional suppression of the insert. Interestingly, despite the long-distance effect, expression of loci most closely positioned to the insert appeared unaffected.
Conclusions/Significance
We concluded that a polyadenylation signal in a retroviral LTR, when occurring within an intron, is an inefficient barrier against the formation of a hybrid transcript, and that a vector containing a strong enhancer may selectively affect the function of genes far away from its insertion site. These phenomena have to be considered when experimental or therapeutic transduction is performed. In particular, the long-distance effects of insertional mutagenesis bring into question the relevance of the lists of disease-associated retroviral integration targets, which did not undergo functional validation.

Acknowledgements

The authors would like to thank Dr. George Stark and the members of the Stark laboratory for the discussions of issues related to insertional mutagenesis.

Citation: PLoS One. 2011 Jan 5; 6(1):e15832 (license: CC BY)
PMC3016402 (PMID: 21246046)

Introduction
Information theory has proven to be a useful tool in the analysis and measurement of network complexity [1]. In particular, many researchers have investigated the application of entropy measures to graphs, see [1]–[4]. A variety of entropy-based measures have been used to characterize networks associated with biological or chemical systems [5], [6]; a recent application in computational biology uses an entropy measure to analyze metabolic networks [7], [8]. In addition to the use of measures on graphs to analyze biological or chemical systems, information theory has been employed in network physics, see [1], [9], [10]. Arnand et al. [1] provide a comprehensive review of Shannon entropy measures applied to network ensembles. The measures discussed in this review are based on probabilistic aspects of networks, such as the expected number of links, the expected community structure, or the expected degree sequences, that have been used to define probabilities. In addition, Arnand et al. [1] compared Shannon entropies on network ensembles with Gibbs and von Neumann entropies by plotting numerical values of the respective measures. By contrast, we will establish connections between different definitions of entropy by means of inequalities. Sanchirico et al. [10] explored another problem in network theory, namely, characterizing complex networks based on degree distributions. In particular, entropy functions have been used to investigate scale-free networks, see [10]. Finally, Krawitz et al. [9] applied the so-called Basin entropy to specific components of boolean networks, which have been shown to be of great value in analyzing biological [7] and related systems [9]. In these applications, entropy provides a measure of network connectivity. It is noteworthy that Krawitz et al. [9] were able to estimate the Basin entropy from time-series data, which makes the measure applicable to erroneous networks (i.e., graphs affected by measurement errors), which are of great importance in biology.
The information measures we want to consider in this paper represent the structural information content of a network [5], [11]–[13]. In particular, they have been applied to special classes of graphs and have figured prominently in research on topological aspects of biological and chemical systems, e.g., see [5], [11]–[18]. Common to all such research is the use of Shannon's [19] classical measure to derive entropies of the underlying graph topology, interpreted as the structural information content of a graph [5], [11]–[13]. Measures of this kind are functions of probability values that derive, in the classical case [5], [12], from a graph invariant and an equivalence relation [11], [12], [20]. Thus far, a number of specialized measures have been developed that are used primarily to characterize the structural complexity of chemical graphs [11], [21], [22]. That is to say, these measures can be viewed as indexes of complexity based on certain structural features of a graph. In the classical cases, special graph invariants (e.g., number of vertices, edges, degrees, distances etc.) and equivalence relations have given rise to special measures of information content [11], [12], [15].
Another class of graph entropies, not based on a graph invariant associated with an equivalence relation, has also been explored. These alternative measures are based on information functions [23] that assign a probability value to each vertex of a graph [23] . An interesting feature of these measures is that they are parametric, see, e.g., [2] , [15] , thus allowing the formulation of optimization problems involving the parameters for given data sets. This approach to measurement is applicable to research problems in graph complexity, data analysis, and machine learning. Furthermore, the measures are computable in polynomial time because they depend on determining metrical properties of graphs [24] . In view of the large number of existing quantitative measures of network structure [22] , [25] , methods are needed for comparing the different indexes and investigating their interrelations. Such research on interrelations can be expected to yield new insights into complex systems that can be represented by graphs [22] , [26] .
One promising direction is to infer inequalities between such indices describing network information contents. Relatively little work on this problem has appeared in the literature, see, e.g., [27] , [28] . Thus far, we have studied so-called implicit information inequalities involving two parametric entropies based on different information functions [2] . General as well as special graphs have been considered [2] . The present paper deals mainly with inequalities between classical and parametric entropies. On the one hand, this gives rise to general information inequalities between measures; on the other hand, bounds for special classes of graphs can be obtained.
The paper is organized as follows: In Section ‘Methods and Results’, we describe the classes of information measures to be examined, and detail relevant properties. Also, we prove inequalities between classical and parametric entropies. The last section provides a summary and conclusion.

Methods and Results
Classical Measures and Parametric Graph Entropies
In this section, we sketch briefly known graph entropy measures for determining the information content of networks. As a preliminary remark, denotes the cardinality of a given set . Now, let be a graph and let . The existing graph entropy measures can be divided into two main classes: (i) Classical measures [14] and (ii) parametric measures [23] . Classical measures are defined relative to a partition of a set of graph elements induced by an equivalence relation on . More precisely, let be a set of graph elements (typically vertices), and let for , be a partition of induced by . Suppose further that . Then
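The display formula for the classical measure (Equation (1)) did not survive extraction in this copy, but the construction described above — Shannon entropy over the relative sizes |Xi|/|X| of the equivalence classes — can be sketched as follows. This is a minimal illustration, not the authors' code, and the partitions fed to it are hypothetical examples:

```python
import math

def classical_entropy(partition_sizes):
    """Shannon entropy of a partition X = X_1 ∪ ... ∪ X_k.

    Each class X_i contributes p_i = |X_i| / |X|; the measure is
    -sum(p_i * log2(p_i)), read as structural information content.
    """
    total = sum(partition_sizes)
    return -sum((s / total) * math.log2(s / total) for s in partition_sizes)

# Vertex orbits of a star on 5 vertices: one hub orbit, one orbit of 4 leaves.
print(round(classical_entropy([1, 4]), 4))        # 0.7219
# All-singleton classes on 5 vertices: entropy equals log2(5).
print(round(classical_entropy([1, 1, 1, 1, 1]), 4))  # 2.3219
```

A single class covering the whole set gives entropy 0, matching the intuition that a maximally "symmetric" partition carries no structural information.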
Parametric measures are defined on graphs relative to information functions. Such functions are not identically zero and map graph elements (typically vertices) to the non-negative reals. For simplicity of description, we consider information functions defined on . Let for . Clearly, the form a probability distribution over the vertices. Then
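With the display equation (Equation (2)) likewise missing here, a sketch of the parametric construction may help: an information function f induces vertex probabilities p(v) = f(v) / Σw f(w), and the measure is the Shannon entropy of that distribution. The degree function below is only an illustrative choice of f, not the specific functions used in the paper:

```python
import math

def parametric_entropy(graph, f):
    """Entropy of the vertex distribution p(v) = f(v) / sum_w f(w),
    where f maps vertices to positive reals (an information function)."""
    values = [f(v, graph) for v in graph]
    total = sum(values)
    return -sum((x / total) * math.log2(x / total) for x in values if x > 0)

# Vertex degree, used here purely as an illustrative information function.
def degree(v, g):
    return len(g[v])

# Path P_4 (1-2-3-4): degrees 1, 2, 2, 1 give p = (1/6, 1/3, 1/3, 1/6).
p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(round(parametric_entropy(p4, degree), 4))  # 1.9183
```

Because f only has to be not identically zero and non-negative, whole families of measures arise by varying f — which is what makes these measures "parametric."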
In general, a detailed overview of graph entropy measures can be found in [2] , [11] , [14] . Note that various other graph entropies have also been developed, see, e.g., [3] , [4] , [29] – [32] but these won't be discussed here.
The structural feature of a network captured by a classical information measure depends on the graph invariant and the equivalence criterion . This is clear from Equation (1). The relationship between the quantitative measure and graph structure for classical measures is examined further by Nikolić [33] .
For more general measures (Equation (2)), the structural feature depends on the information function used to define the measure. Examples are given by
The are positive coefficients used to weight structural differences in a graph [23] and are the -sphere cardinalities. denotes the degree and the eccentricity of the vertex . stands for the diameter of . Such functions are used to obtain the vertex probabilities as explained in [23]
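The exact functional forms and coefficients of Equations (3)–(5) are not recoverable in this copy; as one concrete possibility (an assumption for illustration only), an information function built as a weighted sum of the j-sphere cardinalities |S_j(v)| can be computed with a breadth-first search:

```python
from collections import deque

def sphere_sizes(graph, v):
    """|S_j(v)| for j = 1..ecc(v): counts of vertices at distance exactly j
    from v, obtained by breadth-first search."""
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    ecc = max(dist.values())
    return [sum(1 for d in dist.values() if d == j) for j in range(1, ecc + 1)]

def f_spheres(v, graph, c=(3.0, 2.0, 1.0)):
    """Hypothetical linear combination f(v) = sum_j c_j * |S_j(v)|;
    the coefficients c_j are illustrative weights only."""
    return sum(cj * sj for cj, sj in zip(c, sphere_sizes(graph, v)))

p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sphere_sizes(p4, 1))   # [1, 1, 1]
print(f_spheres(1, p4))      # 3*1 + 2*1 + 1*1 = 6.0
print(f_spheres(2, p4))      # spheres [2, 1]: 3*2 + 2*1 = 8.0
```

Note that the eccentricity of each vertex bounds how many sphere terms contribute, which is why such measures are computable in polynomial time from metrical properties of the graph.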
The family of graph entropies resulting from different probability distributions is represented by Equation (2). In the following, we provide examples of such an information function (choosing as a special case) as well as of the resulting entropy measure. Furthermore, we compare this measure with a classical one using an identity graph as an example. Note that the information function has already been used to characterize chemical structures [15] . But first consider the graphs in Figure 1 to explain the graph entropy measure in more detail. For calculating this structural information content explicitly, we set
The structural feature captured by this measure is linked to the following observation: The more the vertices differ with respect to their spherical neighborhoods, the smaller is the value and conversely. Hence, detects a kind of inner symmetry of an underlying graph. By using in Equation (2), regular graphs have a constant information content equal to the maximum entropy (for every information function). For example, the graph gives rise to (see Figure 2 ) and finally . To compare the parametric with one of the classical measures, we consider a special case of Equation (1) in which the probabilities are determined by the respective sizes of the (vertex) orbits of the automorphism group, see [12] . The resulting graph entropy measure is denoted by . Because is vertex-transitive, there is only one orbit containing all vertices and therefore we obtain
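The two facts used in this paragraph — every vertex of a vertex-transitive graph has the same sphere profile, so any such information function yields the uniform distribution and hence maximal parametric entropy log2(n), while the orbit-based classical measure sees a single orbit and gives 0 — can be checked numerically. The cycle C6 below is an illustrative stand-in, since the specific graph of Figure 2 is not reproducible here:

```python
import math
from collections import deque

def sphere_profile(graph, v):
    # Distances from v by BFS; the profile lists |S_j(v)| for j = 1..ecc(v).
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    ecc = max(dist.values())
    return tuple(sum(1 for d in dist.values() if d == j) for j in range(1, ecc + 1))

c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
profiles = {v: sphere_profile(c6, v) for v in c6}
assert len(set(profiles.values())) == 1   # every vertex sees spheres (2, 2, 1)

# Equal f-values give the uniform distribution, hence maximal entropy log2(6):
print(round(math.log2(6), 4))             # 2.585
# One orbit containing all 6 vertices means p_1 = 1, so the
# symmetry-based classical entropy is -1 * log2(1) = 0.
```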
Now, we evaluate the two different graph entropy measures and for the identity graph depicted in Figure 3 . This graph has a trivial automorphism group (i.e., the identity graph) and, hence, all orbits are singleton sets. This implies
But when calculating , we get where . Finally, we find that . In contrast, note that represents a symmetry-based complexity measure [12] . Other structural features could be chosen to provide other or more comprehensive measures of complexity. For example, Bertz [34] modified the total information content discussed by Bonchev [11] to obtain a different measure. Other approaches to tackle this challenging problem have been outlined by Nikolić [33] . To better understand the measure and to get an intuitive sense of its complexity, we perform a parameter study. More precisely, we show the entropy represented by Equation (11) for different parameters. We plotted the entropy for constant values of (0, 0.5 - first row, and 1, 3 - second row) independent of the other variables and , see Figure 4 . Clearly, the positions of maximum entropy are shifted for different values of ; and for higher values of the location of the maximum approaches that of .
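The exact function swept in the parameter study of Figure 4 is not recoverable from this copy. As a hypothetical stand-in for such a study, the sketch below varies one weight c1 in a linear sphere-based information function f(v) = c1·|S1(v)| + c2·|S2(v)| on a star graph and reports how the entropy moves; it peaks where the f-values coincide (the uniform case), illustrating how the location of the maximum shifts with the parameters:

```python
import math
from collections import deque

def spheres(g, v):
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    ecc = max(dist.values())
    return [sum(1 for d in dist.values() if d == j) for j in range(1, ecc + 1)]

def entropy(g, c1, c2):
    f = {}
    for v in g:
        s = spheres(g, v) + [0, 0]          # pad so s[0] and s[1] always exist
        f[v] = c1 * s[0] + c2 * s[1]
    t = sum(f.values())
    return -sum((x / t) * math.log2(x / t) for x in f.values() if x > 0)

star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
for c1 in (0.5, 1.0, 2.0, 4.0):
    print(c1, round(entropy(star, c1, c2=1.0), 4))
# At c1 = 1.0 all f-values coincide (hub: 4, leaf: 1 + 3), so the entropy
# attains its maximum log2(5) ≈ 2.3219 and falls off on either side.
```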
Inequalities for Parametric Graph Entropies and Classical Measures
Most of the graph entropy measures developed thus far have been applied in mathematical chemistry and biology [2] , [11] , [14] . These measures have been used to quantify the complexity of chemical and biological systems that can be represented as graphs. Given the profusion of such measures, it is useful, for instance, to prove bounds for special graph classes or to study interrelations among them. Such results might be useful to investigate network-based systems as well as to design new network measures more adequately. In terms of information-theoretic measures for graphs, relatively little attention has been paid to this effort. An early attempt in this direction was undertaken by Bonchev [27] when investigating inequalities between entropy-based network measures by considering special graph classes. In particular, Bonchev [27] used such inequalities to investigate the concept of branching [35] in molecules. A topic within this general framework which seems to be completely unexplored is an analysis (using inequalities) of formal relations between complexity measures. On the one hand, this could be done by starting from special graph classes which are interesting for practical applications. But, on the other hand, one can also infer more general interrelations between non-information-theoretic and information-theoretic measures (e.g., see Theorem (1)).
In [28] , we have investigated so-called implicit information inequalities for graphs. Such information inequalities describe general interrelations between parametric measures based on arbitrary information functions. In this section, we demonstrate inequalities between classical graph entropies and the entropy families given by Equation (2). As mentioned earlier, numerous network information measures [11] , [14] , [22] have been developed, but their mathematical properties have yet to be studied in detail. Therefore, the results of this section can be considered as a first attempt to detail these properties. Some of the interrelations represent bounds which hold for special graph classes (with no assumptions about the parameters involved) when using a special information function.
We start with a more general statement expressing an interrelation between the parametric entropy and a classical entropy measure that is based on certain equivalence classes associated with an arbitrary equivalence relation. In particular, this interrelation can be stated as an upper bound of the parametric entropy depending on the classical entropy measure.
Theorem 1
Let be an arbitrary graph, and let , be the equivalence classes associated with an arbitrary equivalence relation on . Suppose further that is an information function with for , and . Then,
Proof
Note that we use the simplified notation instead of because it is clear (by definition) that a vertex probability value depends on the information function . Now, suppose . Then, and Assuming and making use of Inequality (14) we derive or
Adding up these inequalities (i.e., by adding across the vertices), we obtain
But this is Inequality (12).
In the following, we apply the assumption for to some special graph classes and using the proof technique of the previous theorem. The set is taken to be , and thus the equivalence relation induces a partition of into equivalence classes of vertices. These assumptions allow for obtaining upper bounds on which can be stated as corollaries of Theorem (1).
Corollary 2
Let be a star graph having vertices and suppose is the vertex with degree . The remaining non-hub vertices are labeled arbitrarily. stands for a non-hub vertex. Let be an information function satisfying the conditions of Theorem (1). Let and denote the orbits of the automorphism group of forming a partition of . Then
Proof
By hypothesis and so that
The information functions given by Equation (3), (4), (5) can be seen to satisfy the above conditions by choosing appropriate . Again, with , the Inequalities (20) yield
Now, applying the proof technique of Theorem (1) and performing some elementary transformations, we obtain Inequality (19).
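The exact form of Inequality (19) is lost in this copy, but the two quantities it relates can be computed for a concrete star on 5 vertices. The orbit partition {hub} ∪ {leaves} is as stated in the corollary; the degree function is only an illustrative choice of information function:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 5  # star on 5 vertices: hub of degree n-1, four leaves
# Classical orbit-based entropy: orbits {hub} and {the n-1 leaves}.
orbit_entropy = H([1 / n, (n - 1) / n])
# Parametric entropy with the (hypothetical) degree information function.
deg = [n - 1] + [1] * (n - 1)              # hub degree 4, leaf degrees 1
total = sum(deg)
param_entropy = H([d / total for d in deg])
print(round(orbit_entropy, 4), round(param_entropy, 4))  # 0.7219 2.0
```

The corollary's bound ties the parametric value to the orbit sizes 1 and n−1; the computation above supplies the ingredients without asserting the (unreproduced) inequality itself.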
Corollary 3
Let be an identity graph having vertices. has only the identity automorphism and therefore each orbit is a singleton set, i.e., . Let be an information function satisfying the conditions of Theorem (1). Then,
Proof
By hypothesis
Clearly,
Once again applying the proof technique of Theorem (1) and performing some elementary transformations, we obtain Inequality (23).
Corollary (3) leads immediately to
Corollary 4
Let be an identity graph having vertices and satisfying the conditions of Corollary (3). Then, where .
Corollary 5
Let be a path graph having vertices and let be an information function satisfying the conditions of Theorem (1). If is even, possesses equivalence classes and each contains 2 vertices. Then,
If is odd, then there exist equivalence classes that have 2 elements and only one class containing a single element. This implies,
Proof
By hypothesis
Then, it is easy to see that
When is odd, we have and
Multiplying these inequality systems by -1 and performing the addition step (of the proof technique of Theorem (1)) gives Inequality (31) and Inequality (32).
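The class counts used in Corollary 5 can be checked directly: under the end-to-end reflection of the path P_n, vertices pair up into n/2 two-element classes when n is even (orbit entropy log2(n/2)), with one additional singleton middle vertex when n is odd. A quick numerical check, for illustration only:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def path_orbit_entropy(n):
    """Orbit partition of the path P_n under its end-to-end reflection:
    floor(n/2) pairs, plus one singleton middle vertex when n is odd."""
    sizes = [2] * (n // 2) + ([1] if n % 2 else [])
    return H([s / n for s in sizes])

print(round(path_orbit_entropy(6), 4))   # 1.585  (= log2(3) = log2(6/2))
print(round(path_orbit_entropy(7), 4))   # 1.9502 (3 pairs and one singleton)
```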
Assuming different initial conditions, we can derive additional inequalities between classical and parametric measures. We state the following theorems without proofs because the underlying technique is similar to the proofs of the previous assertions.
Theorem 6
Let be an arbitrary graph and . Then,
Theorem 7
Let be an arbitrary graph with being the probabilities satisfying Equation (1) such that . Then,
For identity graphs, we can obtain a general upper bound for the parametric entropy.
Corollary 8
Let be an identity graph having vertices. Then,
Discussion
Quantitative measures of network structure have been defined and applied in many different settings, see, e.g., [2] , [22] , [25] . For example, chemists represent molecules as graphs as an aid in distinguishing molecules and cataloguing their properties [36] , [37] ; biologists model natural phenomena as complex networks in the study of brains and genetic information systems [38] , [39] ; epidemiologists and computational biologists investigate the spread of diseases in populations modeled as networks of individuals [40] , [41] ; computer scientists design and build networks of artificial systems that support message exchange and distributed computation [42] , [43] . In each of these different settings, questions about the structure of networks arise, leading to the definition of mathematical functions designed to measure structural complexity. As a result of all these relatively independent scholarly efforts, many different measures [2] , [22] , [25] , [33] have been defined whose interrelations remain to be determined. This paper is intended as a contribution to the classification of these diverse measures of network structure. In particular, we have singled out two different classes of measures, namely classical and parametric entropy measures defined on graphs, and have examined their interrelations.
The approach taken in this paper is to establish inequalities between measures. As already mentioned, an early attempt in this direction has been undertaken by Bonchev [27] who proved inequalities in the course of investigating branching structures in molecules. Our aim here is somewhat broader, namely to establish general, formal relations between complexity measures defined on arbitrary graphs. Since complexity measures typically assign real numbers to networks, inequalities provide the foundation for constructing partial orders on sets of measures. Knowledge of such order relations enables us to use inequalities to infer bounds on specific measures of the structural complexity of graphs and networks. Knowing that measure is bounded above by measure whose maximum value has been established tells us that measure has a maximum value less than that of measure . Relatively little work on the problem of ordering entropy measures has appeared in the literature, see, e.g., [44] , [45] .
The main focus of the paper has been on establishing inequalities between entropy-based measures, i.e., measures that make use of Shannon's classical entropy function. In particular, we examined inequalities involving classical and parametric information measures. Such measures have been used extensively to quantify the information content of systems represented by graphs [2] , [12] , [21] , [22] . For this reason, we believe that such inequalities are critical for a proper understanding of entropy-based measures.
The inequalities presented in this paper show interrelations between entropy-based measures applied to special classes of graphs. Establishing such inequalities for arbitrary graphs is a daunting task. The work reported here is thus a first step in that the methods employed can in principle be generalized to establish inequalities between information measures on arbitrary graphs. More research is clearly needed to extend the results to graphs in general, and ultimately to build a foundation for a unified interpretation of network complexity as measured by entropy-based functions.

Author contributions: Wrote the paper: MD AM FES.
Conclusion

This paper explores relationships between classical and parametric measures of graph (or network) complexity. Classical measures are based on vertex decompositions induced by equivalence relations. Parametric measures, on the other hand, are constructed by using information functions to assign probabilities to the vertices. The inequalities established in this paper relating classical and parametric measures lay a foundation for systematic classification of entropy-based measures of graph complexity.

License: CC BY. Citation: PLoS One. 2011 Jan 5; 6(1):e15733.
PMC3016403 (PMID: 21246047)

Introduction
The neurotransmitter serotonin (5-HT) exerts a wide spectrum of actions in a variety of behaviors, such as pain sensation, locomotion, circadian rhythm, food intake and emotional behaviors [1] , [2] . Extensive efforts have been made to characterize the molecular pathways that control the specification, differentiation and survival of 5-HTergic neurons during brain development [3] , because this line of research is very helpful for understanding the genetic basis of central 5-HT deficiency which leads to many mental disorders [4] , [5] . Sonic hedgehog secreted from the floor plate triggers the expression of Mash1 and GATA2 in progenitor cells in the ventricular zone of hindbrain [6] , and both genes are essential for the development of 5-HTergic neurons [7] , [8] . 5-HTergic neurons are classified into two groups based on their anatomical location: a rostral group located in the pons and a caudal group located in the medulla oblongata. Although Nkx2.2 is expressed in the progenitors of all 5-HTergic neurons in hindbrain, evidence from null mutant mice show that it is only required for the generation of 5-HTergic neurons in the dorsal raphe nucleus, one cluster neurons in pons group [9] , and GATA3 is thought to be required for the differentiation of the medulla oblongata group [10] . Both Lmx1b and Pet1 are expressed in postmitotic 5-HTergic neurons and essential for the differentiation and survival of 5-HTergic neurons during embryonic development [4] , [11] , [12] .
Our previous study has shown that Lmx1b is persistently expressed in central 5-HTergic neurons during postnatal development and throughout adulthood, suggesting that Lmx1b may be involved in regulating normal expression of 5-HT in the adult brain. To test this hypothesis, we used a tamoxifen-inducible Cre-LoxP system [13] to selectively inactivate Lmx1b expression in central 5-HTergic neurons of adult mice. Our data showed that the 5-HT level in Lmx1b iCKO mice was reduced to 60% of that in control mice, probably due to down-regulation of Tph2 . In addition, Sert and Vmat2 , which are implicated in maintaining normal functions of 5-HTergic neurons, were greatly reduced in Lmx1b iCKO mice. Thus, Lmx1b , an essential gene for the development of central 5-HTergic neurons, is also required for the normal biosynthesis of 5-HT in the adult brain and possibly for regulating normal functions of central 5-HTergic neurons.

Materials and Methods
Genetic crossings, genotyping and animal maintenance
Lmx1b flox/flox mice [14] and Rosa26-LacZ reporter (Rosa26R) mice [15] were generated and genotyped as previously described. In Lmx1b flox/flox mice, exons 4-6 of Lmx1b were flanked by two LoxP sites and can be deleted in the presence of Cre in vivo [14] . To specifically inactivate Lmx1b expression in 5-HTergic neurons in the adult mouse brain, Lmx1b flox/flox mice were crossed with Pet1-CreER T2 mice (see below) and their offspring Pet1-CreER T2 ; Lmx1b flox/+ mice were then crossed with one another to obtain Pet1-CreER T2 ; Lmx1b flox/flox ( Lmx1b iCKO) mice. Animal care practices and all experiments were reviewed and approved by the Animal Committee of Tongji University School of Medicine, Shanghai, China (TJmed-010-10).
Generation of Pet1-CreER T2 mice
Pet1-CreER T2 BAC construct was obtained by inserting CreER T2 coding sequence downstream of the Pet1 start codon within the RP23-165D11 BAC (BACPAC Resources Center at Children's Hospital Oakland Research Institute) via homologous recombination in EL250 bacteria [16] . Sepharose-4B (Sigma)-purified BAC DNA was then introduced into FVB/N fertilized mouse eggs by pronuclear injection using standard methods. Transgenic mice were genotyped by PCR with primers against Cre (forward: TCG ATG CAA CGA GTG ATG AG ; reverse: TCC ATG AGT GAA CGA ACC TG ) resulting in a ∼400 base-pair product. All progeny carrying this transgene were found to be viable and fertile without any obvious abnormalities.
To determine the spatial pattern of Cre activity, Pet1-CreER T2 mice were crossed with Rosa26R mice [15] and Cre activity was examined by administering tamoxifen to Pet1-CreER T2 ; Rosa26R progeny. Tamoxifen (20 mg/ml; Sigma) diluted in corn oil (Sigma) was administered by oral gavage in once-daily doses of 8 mg/40 g of body weight on the following schedule: days 1, 8, 9, 11 and 12. Mice were sacrificed 2-3 weeks after the last dose and brains were removed and fixed in 4% paraformaldehyde (Sigma) in 0.01 M phosphate buffered saline (PBS; pH 7.4) for 3 hours. After cryoprotection with 30% sucrose in PBS, 40 μm-thick sections were cut on a cryostat (CM1900, Leica) and immediately subjected to X-gal staining as described previously [5] .
In situ hybridization and immunohistochemistry
In situ hybridization probes against Tph2 , Sert , Aadc and Dopamine β-hydroxylase ( Dbh ), were constructed according to the description on the website of Allen Brain Atlas ( http://www.brain-map.org ). The Lmx1b [17] and Pet1 in situ probes encompassed the complete Lmx1b and Pet1 coding sequence, respectively. We also generated an in situ probe against exons 4–6 of Lmx1b only. All probes were cloned into pGEM-T vector (Promega). Eight mice (4 wild-type and 4 Lmx1b iCKO) were used for in situ hybridization. For in situ hybridization, brains were fixed in 4% PFA in PBS for 24 hours, cryoprotected with 30% sucrose in PBS, and 30 μm-thick transverse sections were cut on a cryostat and mounted onto glass slides (Fisher Scientific). RNA probes labeled by digoxygenin-UTP (Roche) were generated by in vitro transcription and hybridization signals were visualized upon nitro blue tetrazolium chloride (Fermentas) and 5-bromo-4-chloro-3-indolyl phosphate (Fermentas) staining.
Twelve mice (6 wild-type and 6 Lmx1b iCKO) were used in immunochemistry. Thirty μm-thick brain sections were incubated with primary antibody at 4°C overnight. After washing in PBS, sections were incubated with appropriate secondary antibody for 3 hours at room temperature, washed in PBS, and incubated with Cy3-conjugated streptavidin (1∶1000; Jackson ImmunoResearch) for 1 hour. The following primary antibodies were used: goat anti-β-galactosidase (β-gal; 1∶1000; AbD Serotec), rabbit anti-Lmx1b (1∶2000) [17] , rabbit anti-Tph2 (1∶4000) [18] , [19] , rabbit anti-Vmat2 (1∶1000; Chemicon), mouse anti-Tyrosine hydroxylase (TH; 1∶4000; Sigma). For Tph2 and β-gal double staining, sections were incubated with a mix of the anti-Tph2 and anti-β-gal antibodies overnight, then for 3 hours with a mix of Cy3-labeled donkey anti-rabbit (1∶400; Jackson ImmunoResearch) and biotinylated horse anti-goat IgG (1∶400; Vector Laboratories), and finally with Cy2-conjugated streptavidin (1∶1000; Jackson ImmunoResearch) for 1 hour. There were no immunostaining signals when primary antibodies were omitted or replaced with normal IgG. Stained sections were observed and scanned under a fluorescence or confocal microscope.
Cell count
We counted positive cells in every sixth section. Positive cells in the dorsal raphe nucleus (around −4.72 mm relative to Bregma) [20] and the raphe magnus nucleus (around −4.84 mm relative to Bregma) were counted for statistical comparison between wild-type and Lmx1b iCKO mice ( n = 4 for each). Statistical significance was determined by the Mann-Whitney test. All data are expressed as mean ± SEM, and error bars represent SEM. P values less than 0.05 were considered statistically significant.
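The Mann-Whitney comparison described above can be computed exactly for samples this small ( n = 4 per group) by enumerating all relabelings of the pooled values. The following sketch is illustrative only; the cell counts are hypothetical placeholders, not the study's data.

```python
from itertools import combinations

# Hypothetical Tph2-positive cell counts per mouse (n = 4 per genotype);
# these numbers are placeholders, not the study's data.
wild_type = [412, 398, 430, 405]
icko = [151, 139, 160, 170]

def u_statistic(a, b):
    """Mann-Whitney U for sample `a`: count of pairs (x, y) with x > y,
    counting ties as 1/2."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

def exact_two_sided_p(a, b):
    """Exact two-sided p-value: enumerate every relabeling of the pooled
    observations and count those at least as extreme as the observed U."""
    pooled = a + b
    mean_u = len(a) * len(b) / 2.0
    observed = abs(u_statistic(a, b) - mean_u)
    extreme = total = 0
    for idx in combinations(range(len(pooled)), len(a)):
        grp_a = [pooled[i] for i in idx]
        grp_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        total += 1
        if abs(u_statistic(grp_a, grp_b) - mean_u) >= observed:
            extreme += 1
    return extreme / total

p = exact_two_sided_p(wild_type, icko)
print(f"U = {u_statistic(wild_type, icko)}, exact two-sided p = {p:.4f}")
```

With four mice per group and no ties, the most extreme attainable two-sided exact p-value is 2/70 ≈ 0.029, so a design of this size can only just resolve significance at the 0.05 level.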
High performance liquid chromatography (HPLC)
Adult (3-month-old) tamoxifen-induced wild-type and Lmx1b iCKO mice were used for HPLC ( n = 6 for each). Two to three weeks after completion of tamoxifen treatment, whole brains were dissected out immediately after anesthesia with sodium pentobarbital (0.07 mg/g body weight), and HPLC samples were prepared according to methods described previously [5] . 5-HT and its metabolite 5-hydroxyindoleacetic acid (5-HIAA), dopamine and its metabolites dihydroxyphenylacetic acid and homovanillic acid, and norepinephrine were measured using HPLC with electrochemical detection as described previously [5] . Statistical significance was determined by Student's t -test. All data are expressed as mean ± SEM, and error bars represent SEM. P values less than 0.05 were considered statistically significant.

Results
Generation and characterization of Pet1-CreER T2 transgenic mice
To delete Lmx1b in the adult mouse brain, we crossed mice carrying two floxed Lmx1b alleles with mice harboring a tamoxifen-inducible form of Cre recombinase (CreER T2 ) [13] under the control of the Pet1 promoter, generating Lmx1b iCKO mice ( Figure 1A ). Genotypes were confirmed by PCR. Pet1 is expressed specifically in central 5-HTergic neurons [4] , and the distribution of Cre-recombination activity was determined by crossing to Rosa26R mice [15] . Pet1-CreER T2 ; Rosa26R mice were administered a regimen of tamoxifen (see Materials and methods ) beginning at P90 and analyzed by X-gal staining 2–3 weeks after completing the induction regimen. The tamoxifen administration procedure is shown in Figure 1B : the first day of administration was termed D1, and tamoxifen was administered in once-daily doses of 8 mg/40 g body weight on D1, D8, D9, D11 and D12. X-gal-positive cells were found in the raphe nuclei and the ventrolateral reticular formation of the medulla oblongata ( Figure 1C–G ), showing that Pet1 -driven Cre-recombinase activity can be induced in adulthood. Note that X-gal labeling was not observed in brain regions outside the raphe nuclei ( Figure 1H–J ), and no Cre activity was present in control-treated or untreated Pet1-CreER T2 ; Rosa26R mice (data not shown).
To determine whether Cre-recombinase activity in Pet1-CreER T2 ; Rosa26R mice was exclusive to 5-HTergic neurons, we performed Tph2/β-gal double immunostaining. Tph2 is the key enzyme responsible for 5-HT synthesis in the brain [21] . All β-gal-positive neurons expressed Tph2, and approximately 85% of Tph2-labeled neurons were β-gal-positive ( Figure 2 ). Thus, Cre activity in Pet1-CreER T2 mice is specific to 5-HTergic neurons, and is capable of inducing recombination in the majority of 5-HTergic neurons in the adult brain.
Deleting Lmx1b in the adult brain leads to 5-HT insufficiency
Previous studies have demonstrated that Lmx1b is required for the differentiation and survival of central 5-HTergic neurons during embryonic development [11] , [12] . We set out to examine whether Lmx1b also plays a role in adult central 5-HTergic neurons. Lmx1b iCKO and wild-type control mice were administered tamoxifen beginning at P90, and examinations were performed 2–3 weeks after completion. We first used an Lmx1b antibody [17] to examine whether Lmx1b is deleted in Lmx1b iCKO mice. As shown in Figure S1 , similar Lmx1b immunostaining was found in the dorsal raphe nucleus of wild-type and Lmx1b iCKO mice, indicating that the antibody also recognizes the truncated Lmx1b protein lacking the region encoded by exons 4–6; in this case, the Lmx1b antibody can still be used to trace Lmx1b mutant cells, but not to confirm the deletion. Because the full-length in situ hybridization probe for Lmx1b [17] was likewise unable to distinguish the truncated mRNA from normal Lmx1b mRNA (data not shown), we generated an in situ probe against exons 4–6 of Lmx1b only. However, the sensitivity of this probe was too low to detect Lmx1b mRNA in adult wild-type mice, although it did detect Lmx1b mRNA in embryos (data not shown). Because the floxed Lmx1b alleles were deleted after crossing Lmx1b flox/flox mice with Pet1 - Cre or Wnt1 - Cre mice [5] , [14] , and because Cre activity in Pet1-CreER T2 mice was functional, as shown by X-gal staining of 5-HTergic neurons in Pet1-CreER T2 ; Rosa26R mice ( Figures 1 , 2 ), we speculate that exons 4–6 of Lmx1b are deleted in the majority of 5-HTergic neurons in adult Lmx1b iCKO mice (also see the phenotypes described below).
We next examined 5-HT expression in Lmx1b iCKO mice, and found that the intensity of 5-HT immunofluorescence in individual neurons was only slightly reduced in the raphe nuclei of Lmx1b iCKO mice compared with wild-type mice ( Figure 3A–D ). To further investigate whether the 5-HT content of the brain was altered, we used HPLC to measure the levels of 5-HT and its metabolite 5-HIAA, and found that they were decreased in Lmx1b iCKO mice to about 60% and 30% of control levels, respectively ( Figure 3E ). We speculate that the discrepancy between the modest reduction in 5-HT immunofluorescence and the 40% reduction in 5-HT revealed by HPLC in Lmx1b iCKO mice is probably due to the limited sensitivity of the 5-HT antibody, which is unable to detect a reduction of this magnitude. Taken together, we conclude that Lmx1b is required for normal expression of 5-HT in the adult brain.
Deleting Lmx1b in the adult brain results in down-regulation of 5-HTergic neuron-associated genes
To explore the mechanisms underlying decreased 5-HT level in Lmx1b iCKO mice, we examined the expression of Tph2 , which is a specific enzyme for synthesis of 5-HT in the brain [21] . The number of neurons with intense Tph2 immunofluorescence in the raphe nuclei of Lmx1b iCKO mice was dramatically reduced compared with control mice ( Figure 4 ). Correspondingly, many weakly-labeled neurons were seen in the Lmx1b iCKO raphe nuclei (arrowheads in Figure 4D ), whereas they were not observed in wild-type mice. These observations were further confirmed by in situ hybridization for Tph2 ( Figure 5A–D ). The number of cells with intense in situ signals was significantly decreased in Lmx1b iCKO relative to wild-type mice ( Figure 6 ). Since approximately 15% of 5-HTergic neurons in Pet1-CreER T2 mice did not exhibit Cre activity ( Figure 2 ), the strong Tph2 labeling retained in Lmx1b iCKO mice may correspond to 5-HTergic neurons in which Lmx1b was not deleted. Nevertheless, these results indicate that deleting Lmx1b in the adult brain impairs Tph2 expression, leading to a deficiency of central 5-HT.
To further investigate the function of Lmx1b , we examined the expression of several genes essential for maintaining the normal function of 5-HTergic neurons. Sert is required for the re-uptake of 5-HT at axonal terminals [22] , and its expression was greatly reduced in the raphe nuclei of Lmx1b iCKO mice ( Figure 5E–H ). Cell counts showed a significant difference in the number of Sert -expressing cells between wild-type and Lmx1b iCKO mice ( Figure 6 ). Vmat2, which is required for packaging 5-HT into synaptic vesicles [23] , was also down-regulated in Lmx1b iCKO mice ( Figure 5I–L ). These results indicate that the expression of genes required for maintaining 5-HTergic neuron function is impaired. To test whether the phenotypes described above instead result from a decrease in the number of 5-HTergic neurons, we examined the expression of Aadc and Pet1 . We found that the number of Aadc -expressing neurons was unchanged in Lmx1b iCKO mice compared with controls ( Figures 5M–P , 6 ), consistent with the finding that similar Lmx1b immunostaining was present in both Lmx1b iCKO and wild-type mice ( Figure S1 ). The dorsal raphe nucleus contains the largest population of 5-HTergic neurons among the raphe nuclei, and in Nissl-stained sections these neurons are larger and more intensely stained than non-5-HTergic neurons. Nissl-stained sections from wild-type and Lmx1b iCKO mice showed no obvious difference in cell density or distribution ( Figure S2 ). Furthermore, although Pet1 expression in 5-HTergic neurons requires Lmx1b during embryonic development [12] , its expression in the raphe nuclei of Lmx1b iCKO mice showed no difference from wild-type controls ( Figure 5Q–T ). These results suggest that the overall number of 5-HTergic neurons is not affected by deleting Lmx1b in adulthood, and that Pet1 expression in the adult brain is independent of Lmx1b .
Expression of dopamine and norepinephrine is unchanged in Lmx1b iCKO mice
Previous studies have shown that central 5-HT deficiency may affect the expression of other monoamines in the brain [24] . To explore this possibility, we examined the expression of TH, the essential enzyme for the synthesis of both dopamine and norepinephrine, and Dbh , the enzyme that converts dopamine into norepinephrine [25] , in Lmx1b iCKO mice. TH immunostaining in the substantia nigra and ventral tegmental area (dopaminergic neurons), and in the locus coeruleus (norepinephrinergic neurons), was similar in Lmx1b iCKO and control mice ( Figure 7A, B, D, E ). Dbh in situ hybridization in the locus coeruleus of Lmx1b iCKO mice was also similar to that of wild-type controls ( Figure 7C, F ). In addition, levels of norepinephrine, dopamine and its metabolites (dihydroxyphenylacetic acid and homovanillic acid) in Lmx1b iCKO mice did not differ from those of controls, as determined by HPLC analysis ( Figure 7D ). Thus, the expression of dopamine and norepinephrine in Lmx1b iCKO mice appeared normal.

Discussion
In the present study, we took advantage of an inducible Cre-LoxP system to inactivate Lmx1b expression in adult 5-HTergic neurons. We found that the level of central 5-HT in Lmx1b iCKO mice is reduced to 60% of that in controls, and that the expression of 5-HT neuron-associated genes such as Tph2 , Sert and Vmat2 is down-regulated in Lmx1b iCKO mice.
We generated Pet1-CreER T2 mice, and X-gal staining data from Pet1-CreER T2 ; Rosa26R mice treated with tamoxifen in adulthood showed that Cre was functional and restricted to central 5-HTergic neurons. Our previous studies showed that the floxed Lmx1b allele was deleted in Wnt1-Cre; Lmx1b flox/− and Pet1-Cre; Lmx1b flox/− mice [5] , [14] . Although we were unable to provide direct morphological evidence of Lmx1b deletion, based on these data and the phenotypes observed in Lmx1b iCKO mice, it is reasonable to infer that Lmx1b is inactivated in the 5-HTergic neurons of Lmx1b iCKO mice. Pet1-CreER T2 mice are therefore a useful tool for the time-controlled deletion of genes of interest in central 5-HTergic neurons, particularly in adulthood.
The function of Lmx1b in the development of 5-HTergic neurons has been studied extensively [5] , [11] , [12] , [26] . In Lmx1b null mice, postmitotic 5-HTergic neurons fail to express 5-HT and several genes (e.g. Pet1 ) critical for 5-HT neuron development [11] , [12] . When Lmx1b is conditionally deleted after 5-HTergic neuron development has been initiated (around embryonic day 12.5), 5-HTergic neurons differentiate normally but die at later embryonic stages [5] , [26] . Thus, Lmx1b is required for both the differentiation and the survival of 5-HTergic neurons during embryonic development. As we showed in the present study, the inactivation of Lmx1b in adulthood led to a reduction in central 5-HT levels, probably as a consequence of Tph2 down-regulation. In addition, Sert, the protein responsible for the re-uptake of 5-HT into axonal terminals, and Vmat2, which is involved in packaging 5-HT into synaptic vesicles [22] , [23] , were both greatly reduced in the raphe nuclei of Lmx1b iCKO mice. In contrast, the expression of both Pet1 and Aadc appeared unchanged in Lmx1b iCKO mice relative to control mice, indicating that there was no loss of 5-HTergic neurons in the raphe nuclei. It has been shown that Pet1 is required for the terminal differentiation of 5-HTergic neurons during embryonic development [4] , and its expression is lost in Lmx1b null mice [12] . Recently, it was reported that Pet1 is also required for maintaining the serotonergic neurotransmitter system in adulthood [27] : loss of Pet1 in 5-HTergic neurons leads to a decrease in Tph2 expression but no change in Lmx1b expression. In the present study, normal Pet1 expression was found in Lmx1b iCKO mice. It is therefore likely that Lmx1b and Pet1 act in parallel to regulate the central 5-HTergic system, that the expression of Pet1 in the adult brain is independent of Lmx1b , and that Pet1 is not involved in the alterations of gene expression observed in Lmx1b iCKO mice.
Taken together, these results indicate that in the adult brain Lmx1b is required for 5-HT biosynthesis and for the expression of several key genes associated with 5-HTergic neuron function, but not for the survival of these neurons.
Central 5-HT deficiency has been associated with mental disorders such as depression and posttraumatic stress disorder [28] , [29] , [30] . We previously generated mice in which Lmx1b is deleted specifically in 5-HTergic neurons at embryonic stages using Pet1-Cre , and found that the 5-HT level in the brain is less than 10% of that in wild-type mice; interestingly, these mice showed enhanced contextual fear memory [5] . On the other hand, 5-HT plays important roles in the development of the nervous system at embryonic stages and during early postnatal development, such as in axonal growth [31] , spine formation [32] and barrel formation in the somatosensory cortex [33] . It is therefore likely that abrogating 5-HT biosynthesis or 5-HT neuronal development with traditional genetic ablation techniques has uncontrolled pleiotropic effects through interference with the development of other brain systems. The use of Lmx1b iCKO mice circumvents these complications by allowing the brain to develop normally through to adulthood, making them a new mouse model for studying mental disorders associated with central 5-HT deficiency.

Author Contributions

Conceived and designed the experiments: NNS HL YQD. Performed the experiments: NNS JBX YH JYC LZ. Analyzed the data: NNS JBX. Contributed reagents/materials/analysis tools: LG KPL. Wrote the paper: NNS YQD.
Abstract

The transcription factor Lmx1b is essential for the differentiation and survival of central serotonergic (5-HTergic) neurons during embryonic development. However, the role of Lmx1b in adult 5-HTergic neurons is unknown. We used an inducible Cre-LoxP system to selectively inactivate Lmx1b expression in the raphe nuclei of adult mice. Pet1-CreER T2 mice were generated and crossed with Lmx1b flox/flox mice to obtain Pet1-CreER T2 ; Lmx1b flox/flox mice (termed Lmx1b iCKO). After administration of tamoxifen, the level of 5-HT in the brain of Lmx1b iCKO mice was reduced to 60% of that in control mice, and the expression of tryptophan hydroxylase 2 ( Tph2 ), serotonin transporter ( Sert ) and vesicular monoamine transporter 2 ( Vmat2 ) was greatly down-regulated. In contrast, the expression of dopamine and norepinephrine, as well as of aromatic L-amino acid decarboxylase ( Aadc ) and Pet1 , was unchanged. Our results reveal that Lmx1b is required for the biosynthesis of 5-HT in the adult mouse brain, and that it may be involved in maintaining normal functions of central 5-HTergic neurons by regulating the expression of Tph2 , Sert and Vmat2 .

PLoS One. 2011 Jan 5; 6(1):e15998 (CC BY)
PMC3016404 (PMID 21246048)

Introduction
Phenotypic plasticity is a widespread phenomenon: individuals alter their phenotypes in response to environmental cues, often as an adaptation to variable environments. This multidisciplinary concept has recently been of increasing interest to biologists as a feature of both normal and abnormal individual development that is not only shaped by evolution, but also one that influences the function of individuals, the structure of ecological communities, and evolutionary trajectories [1] , [2] , [3] , [4] . Despite its apparent adaptive superiority, phenotypic plasticity is neither universal nor infinite in expression. To arrive at a better understanding of its evolution, we must identify its costs and limits, and illuminate the functional relationships between them [4] , [5] , [6] . Auld et al [7] suggested that many of these limits and costs may be alternative views of the same constraint, arguing that most of them are merely special cases or consequences of two fundamental costs, phenotype-environment mismatch (costs of phenotypes) and costs of the ability to be plastic (primarily maintenance and production costs).
Phenotype-environment (P-E) mismatch results in an ecological cost and is often caused by imperfect cue reliability and/or developmental lag times in the induced traits. Some phenotypes (e.g. morphological as opposed to behavioral) require substantial lengths of time to be expressed after their induction has been triggered [8] , and an early warning is necessary well before the anticipated condition occurs to avoid this cost. However, cue reliability tends to decline with the duration between the cue and the environmental condition it predicts. False early information can also induce the wrong phenotype for the eventual environmental condition [9] . Therefore, organisms are expected to rely on multiple cues [e.g. 10] , that may be available at different times before the anticipated condition and with variable degrees of reliability. Such continuous integration of multiple cues from the environment may be limited by the problem of processing contradictory cues. An additional limitation that has been frequently observed or assumed in models is the restriction of developmental flexibility to certain ontogenetic windows, beyond which developmental trajectories become canalized [e.g. 11] , [12] . However, general explanations for the occurrence of such windows are mostly lacking [but see 13] .
In the absence of ontogenetic windows that limit the expression of plasticity, or within such windows, the contradiction of early information by late but more reliable information is expected to induce compensatory development, requiring the organism to express an extreme degree of plasticity. Accelerated development, while avoiding the cost of P-E mismatch, may incur greater costs of producing the target phenotype in the form of reduced life expectancy [14] , a compromised immune system [15] , or impaired locomotor performance [16] . However, constraints on plasticity – both the onset of late ontogenetic canalization and the costs of compensatory development – may be asymmetrical for different developmental trajectories induced by false early information. In other words, compensating for a false alarm for a particular stress may be constrained to a different degree than compensating for the unexpected occurrence of the same stress.
Explaining patterns of size and age at life-history transitions such as metamorphosis has been an ongoing, central challenge in evolutionary biology. Larval development towards the completion of metamorphosis in amphibians involves two directional processes: growth in body size and the differentiation and remodeling of tissues and organs [17] . The rates of both of these processes generally respond to various environmental factors, ultimately determining size and age at metamorphosis, respectively, and have been the focus of extensive research involving analyses of phenotypic plasticity [e.g. 11] , [12] , [17] , [18] , [19] , [20] , [21] , [22] . Some of this work has focused on developmental responses to the risk of habitat termination, with the overwhelming majority showing that larvae accelerate development and metamorphose earlier [reviewed in 23] . This response tends to result in a smaller size at metamorphosis due to a shortened growth period. However, few studies have explicitly addressed other, endogenous costs [e.g. 15] , [24] , particularly such that lead to increased mortality [25] , [26] .
We studied the responses of fire salamander larvae to two cues for habitat termination that differ in their timing and reliability, to test the following hypotheses: 1) larvae sense recent desiccation of conspecifics from previous cohorts as an early cue of their habitat's duration, and respond adaptively by altering their developmental rates to increase their probability of metamorphosing before it desiccates; 2) if developmental plasticity is not ontogenetically limited, rates of development will be updated according to later cues that indicate the habitat's actual duration more reliably, to the point of complete compensation; 3) in the latter case, environmental demand for extreme plasticity will carry asymmetric developmental costs. The acceleration of a biological process that requires energy inputs is intuitively expected to demand increased effort and thus incur greater costs. Therefore, we specifically predicted that compensatory, hyper-accelerated development following a false negative early cue (i.e. an unexpected catastrophe) will be more costly to execute than a compensatory delay in development following a false positive early cue (i.e. a false alarm). Based on the results of our experiment, we suggest a simple model of costs of development rates. We use it to demonstrate how phenotype production costs of plasticity are influenced by the timing of information and its reliability, and how production costs are traded off with P-E mismatch costs to explain the frequently-observed ontogenetic loss of plasticity.

Methods
Ethics statement
Field collection of salamanders, experimentation and their return were conducted according to the Nature and Parks Authority permit 2009/36605 and the Animal Experimentation Ethics Committee permit 190/10.
Study organism
We studied the ovoviviparous fire salamander [ Salamandra infraimmaculata ; 27] , whose larvae are deposited during the winter into mostly temporary pools in northern Israel. These pools vary greatly in ecological characteristics, including their water holding capacity. Many temporary pools dry more than once within the same breeding season, particularly during early winter (October–December) and spring (March–April), when rains are infrequent, temperatures are high and the ground water level is low [28] . Pool desiccation is a very important factor contributing to salamander larval mortality and reproductive failure [29] . Many early-born larvae that die in early-winter events of pool desiccation dry in the sun (Sadeh, personal observations), with their flesh likely changing in chemical composition. Most of their decomposition occurs only after the pools are reflooded later in the season, possibly emitting unique chemicals that may be perceived by conspecifics. A pool's water holding ability depends on its floor structure, connectedness to the ground water table and exposure to solar radiation, and often does not change within a single breeding season. Thus, the presence or absence of such chemicals can convey moderately reliable information to the bulk of conspecifics that are deposited during mid-winter regarding the pool's liability to early-spring desiccation. Although previous experiments showed that S. infraimmaculata habitat selection behavior is finely tuned to changing ecological conditions [e.g. 30] , a preliminary mesocosm experiment indicated that gravid females may not be responding to this cue in their choice of larviposition pool (Sadeh, unpublished data). Thus the larvae, deposited indiscriminately with regard to formerly desiccated conspecifics, are predicted to perceive this cue and respond to it by adjusting their development rates.
Animal collection and return
To prepare the cue of recent desiccation, we collected larvae that had died either in desiccated natural pools or in other mesocosm experiments during previous years (up to 3 years), dried them under a light bulb until their mass stopped decreasing, and stored them in sealed plastic bags at −20°C. Since no larva was intentionally killed to prepare this manipulation, the experiment was limited in size by the number of available dried larvae. To hasten their physical decomposition, we ground the dried larvae to a heterogeneously fine powder using a manual pestle and mortar, after weighing 50 individuals to determine their mean individual dry mass. Thirty-six hours before the experiment, we mixed powder quantities of 0.445 g (equivalent to 9 larvae) into outdoor tubs, each filled with 40 L tap water. The 36-hour waiting time allowed chlorine to dissipate from the water, and the powder to partially dissolve and initiate the organic decomposition process. Salamandra females often larviposit into pools within the first few days after they fill (Sadeh, personal observations). After this waiting time, we stirred the tubs and took water from them to fill the experiment's "early cue" treatment tubs. We took water for the "no early cue" treatment tubs from similar outdoor tubs that held 36-hour-aged tap water and did not contain any conspecific powder. With this manipulation, we did not control for the possible perception of the dried conspecifics by the focal larvae as a risk of predation, either by sensing their death (but not drying) as an alarm cue, or by sensing their presence (but not death) as older, cannibalistic conspecifics. However, the risk of predation is known in this species to induce reduced foraging and delayed development [31] , [32; Sadeh, unpublished data] , the opposite of the pattern predicted for the perception of desiccation risk.
During November 2009, we collected gravid females from natural breeding sites and placed them in field enclosures containing mesocosms [identical to those described in 30] to larviposit. We weighed and photographed each larva ventrally, and after removing larvae that were at the extremes of the mass distribution, we paired individuals from the same mother and of similar mass. Up to 10 hours after birth, we randomly allocated each pair to an experimental tub. Mean individual mass of the larvae at the beginning of the experiment was 0.243 g (SD = 0.041 g), and their mean snout-vent length (SVL) was 1.7 cm (SD = 0.1 cm). After giving birth, we returned all the adult females and excess larvae to their sites of collection. After the experiment ended, we returned all the surviving metamorphs to the natural pools nearest to the location of their mother's collection.
Experimental design
We conducted a factorial design experiment combining two levels of early cue of desiccation (presence/absence of dried conspecific powder, hereafter “early cue”) with two levels of water regime (constant/reducing water volume according to Figure 1 ). Thus, the experiment tested the developmental responses (growth and larval period) of larvae to the early information embodied in the presence or absence of the cue, both in situations when this early information was either true or false. We replicated each treatment combination 8 times in indoor, cuboid tubs (floor dimensions 36 by 21 cm), initially filled with 12 liters of water and containing 2 sibling larvae. We prepared the early cue as described above and implemented it as a single pulse manipulation before day 1. We removed water from the reducing water regime throughout the experiment by periodically filtering out water through coffee filter paper, and returning the residues from the filter paper back into the tub. This was also done to the constant water tubs but both residues and water were returned. We compensated weekly for water loss from constant water tubs due to evaporation by adding deionized water. The resulting increasing difference in solutes between treatments is a natural effect of pools drying partially due to evaporation. We only decreased the water volume to 2 liters in the reducing water regime because we sought to measure the larval response until metamorphosis without killing them or imposing a limitation on their swimming and feeding behavior. We fed the larvae ad libitum throughout the experiment to ensure similar food intake among treatments. This feeding regime also eliminated any potential chemical effects or indirect trophic effects of the introduced dried conspecifics material. 
We fed the larvae either a mixture of field-collected zooplankton organisms [mainly Arctodiaptomus similis (Copepoda), various daphnids and mosquito larvae ( Culiseta longiareolata and Culex laticinctus )], or purchased chironomid ( Chironomus sp) larvae.
We followed natural daily photoperiod in the experimental room, beginning with 10∶14 (L∶D) on day 1, 10.5∶13.5 from day 68, 11∶13 from day 87, and 11.5∶12.5 from day 101 to the end of the experiment at day 103. Similarly, we kept temperatures at the region's long-term mean daily maximal temperature for each month, according to the Israel Meteorological Service website ( http://www.ims.gov.il/IMSEng/CLIMATE ). Temperatures were: 16–18 degrees during days 1–76, 17–19 degrees during days 77–99 and 19–21 degrees from day 100. However, on days 76 and 85, due to air conditioning system malfunction, air temperatures rose to 36°C and 26°C, respectively, for 1–2 days before the problem was corrected, equally affecting the water temperatures of all treatments. The first case of air conditioning failure marked the beginning of emergence of metamorphs for this experiment. While these sharp deviations in temperature were not planned, they occurred during springtime when severe heat waves occur naturally in Israel, raising air and water temperatures to yearly extrema of up to 40°C and affecting natural ponds in a similar way when not completely drying them.
Response variables and metamorphosis
We recorded larval mortality and growth prior to metamorphosis, as well as size at and time to metamorphosis. To determine growth, we weighed all the larvae to the nearest mg and photographed them on days 1, 5 (weight only), 22, 42, 58, 76, 81, 87, 95, 101 and 103. Once the first larvae started displaying progressive metamorphic morphology (dark skin color, a reduced tailfin and/or reduced gills), we checked the tubs every one to two days to collect emerging metamorphs. We photographed emerging metamorphs to determine size at metamorphosis, and recorded their times to metamorphosis. The photographs were used to determine SVL to the nearest mm using image processing software (ImageJ 1.40g). Body mass is a good index of an individual's immediate condition and short-term growth as it includes the mass gained by recent meals that may be stored in lipid reserves or quickly used up. Therefore it is quick to respond to environmental conditions and carries relatively high intra-individual variation (Sadeh, personal observations). In contrast, SVL is slower to respond and less sensitive to short-term conditions, but gives a better estimate of long-term growth, as it is the result only of the portion of energy that was allocated into skeletal development and growth in body size.
Statistical analyses
We used repeated measures ANOVA to test the effects of the early cue and water regime on larval growth trajectories during most of their growth period, both in mass (five dates) and in SVL (four dates), using tub means as independent data points. The repeated measures analysis was done up to day 58, before any larva metamorphosed, to prevent distortion of the test results by reductions in mean size due to the removal of the usually larger emerged larvae. However, considerable shifts in the response patterns occurred during the metamorphic period. The results of these shifts are captured in a two-way ANOVA used to test the effects of early cue and water regime on time to- and size (SVL) at metamorphosis. We removed from this analysis two tubs in which both larvae died. To test our hypotheses regarding the costs of the expression of plasticity, we used three orthogonal planned contrasts on mean larval mortality data [33] : the main effect of water reduction (the mean of the two reducing water treatments vs. the mean of the two constant water treatments), the effect of a false positive early cue (early cue + reducing water vs. no early cue + reducing water) and the effect of a false negative cue (no early cue + constant water vs. early cue + constant water).

Results
The overall mass growth during the first 58 days (prior to any metamorphosis) was positively affected by the early cue (p = 0.016). There was not yet a significant effect of the water regime during this period, nor a significant early cue × water regime interaction ( Table 1 ; Figure 2 ). The mass growth trajectory was positively affected by both the water volume regime (time × water regime interaction: p = 0.001) and the early cue (time × early cue interaction: p = 0.008), but not by their interaction (time × water regime × early cue interaction: p = 0.876; Table 1 ). Qualitatively similar results were obtained for the test of these factors' effects on larval SVL growth ( Table 2 ).
The larvae responded quickly to the early cue of desiccation risk by accelerating their mass growth rates ( Figure 2 ). An a posteriori t-test of larval mass on day 5 of the experiment revealed that the early cue had already produced a significant effect (t = 2.219, p = 0.034). By day 22, larvae in the treatment combinations that received a positive early cue had grown to a larger mean size than those in treatments without this cue (a 13% difference). Growth rates slowed considerably under the reducing water level regime after day 42, and came almost to a complete halt after day 58 ( Figure 2 ) due to the cessation of feeding during metamorphic climax. Under the constant water level regime this growth restriction was evident only after day 76. Thus, the water level regime exerted an increasing effect on growth that overwhelmed the effect of the early cue only near metamorphosis, after day 58 ( Figure 2 ).
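The kind of a posteriori two-sample comparison reported above can be sketched as a pooled two-sample t statistic on tub means. A minimal pure-Python version (the mass values below are hypothetical, not the study's data):

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    # Student's two-sample t statistic with pooled variance
    na, nb = len(sample_a), len(sample_b)
    sp2 = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# hypothetical tub-mean masses (mg) on day 5, early cue vs. no cue
cue    = [52.0, 55.5, 54.0, 57.0]
no_cue = [49.0, 51.0, 50.5, 52.5]
t_stat = pooled_t(cue, no_cue)   # positive: cue larvae heavier on average
```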
Time to emergence responded significantly only to the water volume regime ( Figure 3 ; two-way ANOVA: P<0.0005; Table 3 ), with a LS mean emergence time of 80.4 days for the reducing water volume treatments and 92.4 days for the constant water volume treatments. The early cue effect and the water regime × early cue interaction term were both non-significant, indicating that the larvae compensated for their initial response to early information according to prevailing hydroperiod conditions. Though the central tendencies of the early cue treatments did not differ significantly from those without the cue, their mean times to metamorphosis were slightly shorter under both water regimes. The temporal distribution of individual emergences revealed an initial surge of metamorphoses in both of the treatment combinations with the early cue compared to treatments without it ( Figure 4 ). In both water level regimes, the maximal difference between cumulative metamorph frequencies of the two early cue treatments occurred at the first quartile (day 76 and day 88 in the reducing water and the constant water regimes, respectively). A posteriori comparisons between the proportions of metamorphs at these times between early cue treatments showed that this initial surge was significant in both water level regimes (reducing water regime: Z = −1.750, p = 0.040; constant water regime: Z = −1.703, p = 0.044; one-sided normal approximation tests of equality of proportions). Thus, the initial developmental response to the early cue of desiccation risk may have skewed the distribution of emergences to the left, but was mostly compensated for when eventually contradicted by prevailing conditions.
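The one-sided normal-approximation test of equality of proportions used above can be sketched as follows (Python; the counts are illustrative, since the group sizes at those time points are not restated here):

```python
from math import sqrt, erf

def prop_z_test_one_sided(x1, n1, x2, n2):
    """One-sided normal-approximation (z) test that proportion 1 is
    smaller than proportion 2. Returns (z, p). Counts are illustrative,
    not the study's raw data."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    p_value = 0.5 * (1 + erf(z / sqrt(2)))   # P(Z <= z); small when p1 << p2
    return z, p_value

# e.g. 3/16 metamorphosed without the cue vs. 8/16 with the cue at the quartile
z, p = prop_z_test_one_sided(3, 16, 8, 16)
```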
Similar to the overall pattern of time to metamorphosis, final larval sizes at metamorphosis differed significantly only between water level regimes (P<0.0005; Figure 3 ; Table 3 ), with a LS mean SVL of 3.1 cm in the reducing water volume treatments and 3.4 cm in the constant water volume treatments. This pattern indicates that increased allocation of energy to hastened differentiation and/or a shorter growth period compromised total larval growth, whereas initial responses in development rates to false early information on risk of desiccation were compensated for by later responses to actual hydroperiod conditions.
Larval mortality was most pronounced under the reducing water regime without an early cue (43.75%). Under the reducing water regime with the early cue, the mortality rate was 12.5%, whereas in the other treatment combinations, mortality rates were 6.25% ( Figure 5 ). The reducing water regime significantly increased mortality compared to the constant water regime, regardless of the early cue (P = 0.029). However, when the water loss was unexpected, mortality increased significantly (by 31.25%; P = 0.028) compared to the same condition when preceded by the presence of the early cue. In contrast, mortality under the constant water regime was similar following a false early cue or its absence (6.25%; P = 1.000). See Table 4 for the statistical summary of these contrasts. No larval mortality occurred before day 75.
Discussion
Responses to and compensation for early cues
The results support the hypothesis that the recent desiccation of conspecifics serves as an early cue for risk of pool desiccation, accelerating development at least via early larval growth ( Figure 2 ). Since it is generally agreed that size thresholds limit differentiation rates [20] , [21] , [23] , an early cue indicating a potentially severe time constraint on development and requiring its acceleration is predicted to induce accelerated growth so that differentiation remains unconstrained. This response is opposite to that of larvae under risk of predation or cannibalism [31; Sadeh, unpublished data] , and therefore we rule out the larvae's perception of dried-up conspecifics as a cue for predation/cannibalism risks.
No staging system has been developed for this species to indicate its ontogenetic progress. However, gradual morphological changes (in skull shape, skin color and limb usage patterns) occur throughout larval ontogeny before the conspicuous final stage of metamorphosis (Sadeh, personal observations), indicating that differentiation is occurring throughout the larval period, along with growth in body size [consistent with 17] , [19] , [34] . Therefore, similar to growth rate, differentiation rate is likely also hastened by the early cue for desiccation. Our a posteriori analysis of the temporal distributions of metamorphoses ( Figure 4 ) suggests that the early cue had an accelerating effect on the rates of differentiation under both water regimes, but this effect did not persist long enough to significantly affect mean time to metamorphosis, as most individuals fully compensated developmentally for the false early information.
By the end of the larval period, the effects of the early cue had practically vanished, with only the water regime exerting a strong effect. This is not surprising since the dynamics of water depth, water volume and concentration of solutes are far more reliable cues for future desiccation than the scent of recent death by desiccation. Larvae markedly compensated for their initial response to false early information according to prevailing hydroperiod conditions, showing considerable developmental plasticity that is not limited to early phases of the larval period. This was evident from practically identical reaction norms for both size at- and time to metamorphosis for true and false early cues, under both water regimes ( Figure 3 ).
Costs of developmental rates
Compensatory development was mostly apparent between days 42 and 76 ( Figure 2 ), followed by increased mortality ( Figure 5 ), which occurred only after day 76 and continued to the end of the experiment. The water reduction regime induced a high average rate of development (larval period⁻¹), regardless of the presence of the early cue, associated with increased mortality compared to the constant water regime. However, a significantly greater contribution to this increased mortality came from the compensation demanded by the unexpected reduction in water, in the absence of the early cue, where the larvae had to hyper-accelerate their development. In contrast, we found no detectable costs of reducing the developmental rate to compensate for an early false alarm. This pattern of mortality suggests that it was the result of the maximum instantaneous rate of development performed by the larvae.
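This asymmetry between false positive and false negative information can be traced directly through the three orthogonal planned contrasts described in the statistical methods. A minimal sketch (Python; the contrast coding is our illustration, applied to the treatment-mean mortality rates reported in the results):

```python
# Treatment-combination means, ordered:
# [cue + reducing, no cue + reducing, cue + constant, no cue + constant]
mortality = [0.125, 0.4375, 0.0625, 0.0625]   # reported mortality rates

contrasts = {
    "water reduction":    [0.5, 0.5, -0.5, -0.5],   # reducing vs. constant water
    "false positive cue": [1.0, -1.0, 0.0, 0.0],    # cue vs. no cue, reducing water
    "false negative cue": [0.0, 0.0, -1.0, 1.0],    # no cue vs. cue, constant water
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the three contrast vectors are pairwise orthogonal
vectors = list(contrasts.values())
all_orthogonal = all(dot(vectors[i], vectors[j]) == 0
                     for i in range(3) for j in range(i + 1, 3))

water_effect = dot(contrasts["water reduction"], mortality)      # +0.21875
false_pos    = dot(contrasts["false positive cue"], mortality)   # -0.3125
false_neg    = dot(contrasts["false negative cue"], mortality)   # 0.0
```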
Recent studies have found that high growth rates and especially compensatory growth are traded off against other life-history traits and body functions over various time scales [35] , including lifespan [14] , reproductive output [36] and locomotor performance [37] . This possibly occurs through the accumulation of cellular damage caused by oxidative stress, or through increased allocation of resources to its repair [38] , [39] , [40] at the expense of other functions. For example, a study by Inness & Metcalfe [14] suggested that three-spined sticklebacks, which reproduce only once in their lifetime, cannot afford to divert resources away from reproduction in order to repair damage inflicted by compensatory growth. Therefore, fish under these conditions suffered increased rates of mortality. A similar tradeoff may underlie the pattern of mortality in our experiment; metamorphosis requires high inputs of energy [41] and cannot be delayed or compromised to repair the damage caused by compensatory development when the larval habitat approaches termination. In contrast to compensation in growth rates, little work has been done on compensatory differentiation rates, and their costs are less understood. Accelerated differentiation of stem cells may reduce the available pool of undifferentiated cells and limit other functions they may serve (Artyom Kopp, personal communication). At the tissue level, Arendt & Hoang [42] suggested that accelerated differentiation of muscle tissue results in numerous but smaller fibers and reduced performance of the tissue. At the whole-organism level, some tradeoffs for accelerated differentiation rates have been identified in amphibians, with the effects sometimes carried beyond metamorphosis [26] , such as decreased immune function [15] and locomotor performance [16] .
These costs may stem from adaptive allocation of limited resources to various body functions, or from compromised whole-organism coordination of different tissues, resulting in disruption of homeostasis and increased vulnerability to environmental stress. Studies on heat shock protein expression also indicate that stress resistance and development appear to be negatively related [reviewed in 39] . Indeed, mortality in our experiment may have resulted also from a compromised ability to cope with the accidental heat waves that occurred in our lab, similar to those that frequently occur in nature late in the larval period.
Costs and limits of plastic phenotype development
Callahan et al. [6] emphasized the need to distinguish costs of phenotypes from costs of plasticity per se , and to address their potential interactions. We agree with this important distinction, but see no reason why the fundamental production cost of phenotypes should differ between plastic and fixed development. Development is a cumulative process. Therefore, the total phenotype production cost for a certain trait value is the sum of costs incurred during each small time interval throughout its development. We suggest that this cost may accumulate differently throughout the development of a trait, incurring increased costs for plastic development. Based on our results and the growing body of recent literature on the costs of accelerated, compensatory development, we will show that the instantaneous production cost can be generally characterized as an increasing, strictly convex function of the instantaneous rate of development.
Consider an organism that must develop a certain trait, p, from an initial state of p = 0 to some required final state, p_req, within a limited time interval, 0 ≤ t ≤ T, where T is unknown and must be assessed. Furthermore, in order to minimize ecological phenotype-environment mismatch costs, the organism must reach p_req at time T exactly. For example, an aquatic larva that is born into an ephemeral pond must progress the differentiation of body systems for terrestrial function while growing, and complete metamorphosis before the pond dries to avoid death by desiccation. However, upon completion of metamorphosis it must emerge from the water, forgoing further opportunities for larval growth if metamorphosis occurs too early. Thus, the organism must continuously assess its time limit and adjust its rate of development, r = dp/dt, accordingly, such that
(1) ∫ from 0 to T of r(t) dt = p_req.
The following analysis can equivalently correspond to cases where the time for development is fixed, with the organism having to assess the expected environmental conditions at that time and match the target trait value accordingly (e.g. produce defenses against an uncertain predation level). Either way, the problem is that of adjusting developmental rates. However, development incurs costs. Defining the development rate-dependent instantaneous production cost, c(r), the cumulative cost of producing the required trait is
(2) C = ∫ from 0 to T of c(r(t)) dt.
The results of our experiment, as well as recent literature on compensatory development, imply that
(3a) c(r′) > c(r) for any r′ > r, and
(3b) C_compensatory > C_fixed for any compensatory and fixed-rate trajectories attaining the same p_req over the same interval [0, T].
Equation (3a) clearly implies that dc/dr > 0 for any value of r. We will now show that Eqn. (3b) implies that d²c/dr² > 0 for any value of r.
At t = 0, the organism perceives mildly reliable information on its future time limitation, and sets an initial rate of development, r1, accordingly (or this can be a genetically-determined default rate in the absence of any information). Unless additional information is perceived later by the organism, allowing it to reassess the remaining time available for development, this rate will remain constant. Assume that at time t_c (0 < t_c < T) the organism perceives perfectly reliable information on the time limitation, allowing it to readjust its developmental rate to r2, such that Eqn. (1) is satisfied. Thus,
(4) r̄ = p_req/T = w r1 + (1 − w) r2
is the required average rate of development over the entire time interval [0, T] to attain p_req, where w = t_c/T is the relative weight of the duration of development at the rate of r1. In reality, organisms continuously perceive and integrate multiple cues that bear imperfect information and readjust their developmental rates, resulting in more curved developmental trajectories than assumed in our analysis. However, these curved trajectories can be approximated as a sequence of many short linear intervals similar to those considered here.
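Rearranging the average-rate relation gives the compensatory rate the organism must adopt once the corrective cue arrives. A small numeric sketch (Python; all values illustrative):

```python
def required_r2(p_req, T, r1, t_c):
    """Rate needed after time t_c to finish developing p_req by time T,
    given an initial rate r1 held during [0, t_c]."""
    return (p_req - r1 * t_c) / (T - t_c)

p_req, T = 1.0, 100.0
r_bar = p_req / T                     # required average rate (0.01)
r1 = 0.015                            # accelerated initial rate after a (false) cue
t_c = 40.0                            # time the corrective cue is perceived
r2 = required_r2(p_req, T, r1, t_c)   # decelerated compensatory rate

w = t_c / T                           # relative weight of the first interval
# consistency with the average-rate relation: r_bar = w*r1 + (1 - w)*r2
```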
From Eqn. (3b) we know that
(5) w c(r1) + (1 − w) c(r2) > c(r̄).
Substituting (4) into (5) we get:
(6) w c(r1) + (1 − w) c(r2) > c(w r1 + (1 − w) r2).
Equation (6) is the mathematical definition of strict convexity for c(r). In other words, d²c/dr² > 0 for any value of r.
That c(r) is a strictly increasing and convex function has important implications for life-history tradeoffs of development. Specifically, as reliable information regarding the required developmental rate of a trait is perceived later in that trait's ontogeny, complete compensation demands increasing endogenous costs, to the point that they may exceed the ecological costs of phenotype-environment mismatch that compensation is aimed at minimizing. This would not have been the case if d²c/dr² = 0 (i.e. if c(r) were linear), since then (3b) would be an equality, and the cost for any given final phenotype would be constant regardless of the developmental trajectory leading to it. Thus, for any plastic trait our analysis predicts that at some point in its ontogeny, development will become canalized and cease to respond to environmental cues that otherwise induce its acceleration. The specific timing of the loss of plasticity depends on the specific forms of c(r) and of the phenotype-environment mismatch cost as a function of the deviation of the realized phenotype from the required phenotype, and can only be considered in a full, system-specific life-history model. Ontogenetic loss of plasticity has been documented frequently in various organisms [e.g. 11] , [12] , [13] , [43] , and we suggest that our analysis provides a potential general explanation for this phenomenon. In contrast, compensatory deceleration of development incurs little cost. In this case, the extra costs are incurred for the needlessly high initial rates of development before the perception of corrective information. Therefore, the deceleration of development is not predicted to be limited by endogenous costs throughout ontogeny.
To graphically illustrate the model's behavior and implications, we arbitrarily chose a function that upholds the general requirements of c(r), i.e. an increasing and convex function of r:
(7) c(r) = a r²,
where a is a scaling coefficient. In Figure 6A we simulate the developmental trajectories of a fixed slow developer, a fixed fast developer, as well as plastic developers that accelerate or decelerate their developmental rates following the reception of perfectly reliable cues at some point. Figure 6B demonstrates how production costs accumulate for these developmental trajectories, calculated according to equations (2) and (7), resulting in greater costs for fast developers, and increased costs for the expression of compensatory development. Figure 6C shows the increase in the extra production costs incurred by compensatory development (the production cost of plasticity), as reliable information is perceived later. While the extra production cost for accelerating compensation approaches infinity as reliable information is perceived later towards the time limit, it only approaches a finite value in the case of decelerating compensation.
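The cost accumulation behind these figures can be reproduced numerically. The sketch below (Python) assumes the quadratic form c(r) = a·r² purely as one convenient increasing, convex cost function — an assumption for this sketch, not necessarily the exact form used for the figure:

```python
def trajectory_cost(rates_and_durations, a=1.0):
    """Cumulative production cost of a piecewise-linear developmental
    trajectory, with instantaneous cost c(r) = a*r**2 (assumed form)."""
    return sum(a * r ** 2 * dt for r, dt in rates_and_durations)

p_req, T = 1.0, 100.0
r_bar = p_req / T

# fixed developer, holding the required average rate throughout
fixed_cost = trajectory_cost([(r_bar, T)])

# compensatory developer: too-slow start, corrective cue at t_c, then acceleration
t_c, r1 = 60.0, 0.5 * r_bar
r2 = (p_req - r1 * t_c) / (T - t_c)
comp_cost = trajectory_cost([(r1, t_c), (r2, T - t_c)])

def extra_cost(t_c, r1=0.5 * r_bar):
    # extra production cost of compensation when the cue arrives at t_c;
    # grows without bound as t_c approaches the time limit T
    r2 = (p_req - r1 * t_c) / (T - t_c)
    return trajectory_cost([(r1, t_c), (r2, T - t_c)]) - fixed_cost
```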
Our model predicts a lower cost of compensating for a “false alarm” than for an “unexpected catastrophe”. However, in our experiment, we found no evidence for any cost for the former. Detecting endogenous costs empirically is very difficult, since they could manifest in different body functions, through adaptive tradeoffs or physiological outcomes. Mortality is the ultimate cost, but it bears information only on the extreme cases of physiological compromise, below which costs may remain undetected. Thus, sub-lethal costs may have been incurred in our experiment, but remained below our detection threshold.
We suggest that phenotype production costs accumulate to greater costs for individuals expressing plastic development since they do not follow the most efficient trajectory towards their final phenotype. Therefore, early information is extremely valuable for reducing deviations from the most efficient trajectory, but it tends to be less reliable. Complex and more effective information acquisition strategies [e.g. plants extrapolating temporal dynamics into the future; 44] are expected to evolve to moderate this tradeoff, but these strategies and the maintenance of their underlying physiological mechanisms may themselves be costly [5] .
Analysis of constraints on phenotypic plasticity has mostly utilized final-phenotype reaction norms, comparing trait values at the end of their development over different environments, thus capturing the phenotype-environment relationship at the end-point of the underlying developmental process. Clearly, more may be learned about plasticity and its constraints by considering the development of the ultimate phenotype and the role of plasticity in it [13] , [45] , [46] . Our study motivates future work to manipulate the timing of perceived cues and their reliability throughout the ontogeny of the focal phenotype, as well as to determine the system-specific proximate mechanisms of costs of developmental rates at the cellular, tissue and whole-organism levels. Such combined ecological and developmental approaches, with system-specific life-history modeling, hold promise for advancing our understanding of developmental plasticity, its costs and evolution.
Understanding constraints on phenotypic plasticity is central to explaining its evolution and the evolution of phenotypes in general, yet there is an ongoing debate on the classification and relationships among types of constraints. Since plasticity is often a developmental process, studies that consider the ontogeny of traits and their developmental mechanisms are beneficial. We manipulated the timing and reliability of cues perceived by fire salamander larvae for the future desiccation of their ephemeral pools to determine whether flexibility in developmental rates is constrained to early ontogeny. We hypothesized that higher rates of development, and particularly compensation for contradictory cues, would incur greater endogenous costs. We found that larvae respond early in ontogeny to dried conspecifics as a cue for future desiccation, but can fully compensate for this response in case more reliable but contradictory cues are later perceived. Patterns of mortality suggested that endogenous costs may depend on instantaneous rates of development, and revealed asymmetrical costs of compensatory development between false positive and false negative early information. Based on the results, we suggest a simple model of costs of development that implies a tradeoff between production costs of plasticity and phenotype-environment mismatch costs, which may potentially underlie the phenomenon of ontogenetic windows constraining plasticity. | We thank Jeff Arendt, Neil Metcalfe, Ofer Ovadia, Alan Templeton and Brian Wisenden for providing help on several bibliographical, statistical and methodological queries. Noam Elron, Tamar Keasar, Arik Kershenbaum, Inon Scharf, and Hagai Shemesh provided insightful comments on the manuscript. | CC BY | no | 2022-01-13 08:14:22 | PLoS One. 2011 Jan 5; 6(1):e15602 | oa_package/03/cb/PMC3016404.tar.gz |
PMC3016405 | 21122124 | Background
Iron is an essential element for all living organisms, being part of many proteins participating in fundamental mechanisms such as DNA synthesis, respiration, photosynthesis and metabolism [ 1 ]. In plants, the main cause of Fe deficiency is its low availability in the soil solution due to the scarce solubility of its compounds in well aerated environments. To cope with this problem plants have developed efficient mechanisms to acquire Fe from the soil. Two main strategies are known: dicots and non-graminaceous monocots operate applying what is known as Strategy I, while graminaceous monocots operate with the so-called Strategy II [ 2 , 3 ]. In the last decade a great amount of biochemical and molecular data have been acquired, increasing the knowledge about the mechanisms adopted by Strategy I plants, especially when grown in the absence of Fe. In particular, three main events seem to assure iron uptake. First, the induction of the reducing activity of a Fe 3+ -chelate reductase (FC-R) located at the plasma membrane of epidermal root cells. The enzyme was first cloned in Arabidopsis ( AtFRO2 ) [ 4 ] and FRO2 homologues were found in other Strategy I plants [ 5 - 7 ]; second, the induction of a Fe 2+ transporter belonging to the ZIP family of proteins [ 8 ] and identified as IRTs in several plants [ 9 , 10 ]; third, the activation of a P-type H + -ATPase [ 11 - 13 ] necessary to decrease the apoplastic pH, thus favouring, on one hand, the solubilization of external Fe compounds and the activity of the FC-R [ 14 , 15 ] and, on the other hand, to establish an effective driving force for Fe uptake [ 11 , 16 , 17 ]. Since the maintenance of these activities requires the constant production of energetic substrates, changes in metabolism have also been studied under Fe deficiency conditions. 
It has been shown that the rate of glycolysis is increased [ 18 , 19 ]; the pentose phosphate pathway is increased as well, to produce both reducing equivalents and carbon skeletons [ 18 , 20 ]. Furthermore, the phosphoenolpyruvate carboxylase (PEPC) activity has been shown to increase several times under Fe deficiency [ 21 , 22 ]. This enzyme is very important in the economy of the cell, since it can accomplish several tasks: (i), by consuming PEP it increases the rate of glycolysis, releasing the negative allosteric control exerted on phosphofructokinase-1 (PFK-1) and aldolase by this phosphorylated compound [ 23 ]; (ii), it contributes to the intracellular pH-stat mechanisms [ 24 ] and (iii), it forms organic acids, in particular malate and citrate, that may play an important role in the transport of iron through the xylem to the leaf mesophyll [ 25 , 26 ]. Furthermore, PEPC activity sustains the anaplerotic production of carbon skeletons for biosynthetic pathways (in particular the synthesis of amino acids) and, along with the accumulation of di-tricarboxylic acid carrier (DTC), increases the communication between the cytosolic and mitochondrial pools of organic acids, helping to maintain a higher turnover of reducing equivalents [ 27 ]. Implication of metabolism has also been inferred from the microarray analysis performed on Fe-starved Arabidopsis plants [ 28 ], in which it was shown that the levels of several transcripts encoding enzymes of these metabolic pathways were increased. However, changes in transcript levels are not direct proof that the encoded proteins have changed, but rather that relevant metabolic pathways or biological processes have been affected. To study global changes in protein abundance, the new proteomic technologies can undoubtedly be of great help.
Concerning plant iron nutrition, two recent studies have used 2-DE to analyse the proteomes of wild-type tomato and its fer mutant [ 29 , 30 ] grown under Fe deficiency, to identify to what extent the transcription factor FER influences the accumulation of Fe-regulated proteins, while a third analysed the changes in proteomic and metabolic profiles occurring in sugar beet root tips in response to Fe deficiency and resupply [ 31 ].
Cucumber ( Cucumis sativus L.) plants develop rapid responses to Fe deficiency, and previous works by our and other groups have described very important changes, not only in the classical responses of Strategy I plants, i.e. FC-R and H + -ATPase activities, but also in the metabolic rearrangement induced by Fe starvation [ 7 , 18 , 19 , 32 , 33 ].
In this work we have carried out a proteomic analysis on proteins isolated from cucumber roots grown in the presence or in the absence of Fe for 5 and 8 d. Furthermore, we chose to analyse only the cytosolic soluble protein fraction without contaminations by organelles or membranes.
Methods
Plant material and growth conditions
Cucumber ( Cucumis sativus L. cv. Marketmore '76 from F.lli Ingegnoli, Milan) seeds were sown in agriperlite, watered with 0.1 mM CaSO 4 , allowed to germinate in the dark at 26 °C for 4 d. Thirty seedlings were transferred to a 10 L tank for hydroponic culture. The nutrient solution had the following composition: 2 mM Ca(NO 3 ) 2 , 0.75 mM K 2 SO 4 , 0.65 mM MgSO 4 , 0.5 mM KH 2 PO 4 , 10 μM H 3 BO 3 , 1 μM MnSO 4 , 0.5 μM CuSO 4 , 0.5 μM ZnSO 4 , 0.05 μM (NH 4 ) 6 Mo 7 O 24 and 0.1 mM Fe-EDTA (when added). The pH was adjusted to 6.2 with NaOH. Aerated hydroponic cultures were maintained in a growth chamber with a day/night regime of 16/8 h and a photosynthetic photon flux density (PPFD) of 200 μmol m -2 s -1 at the plant level. The temperature was 18 °C in the dark and 24 °C in the light. The effect of different treatments at the root level was determined after 5 and 8 d. A scheme of the growing condition is reported in Figure 1A .
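The salt masses to weigh for the 10 L tank follow from concentration × volume × molar mass. A sketch for two of the macronutrients (Python; the molar masses are our assumed values for the anhydrous salts, not taken from the original protocol):

```python
MOLAR_MASS = {        # g/mol, anhydrous salts (assumed values)
    "K2SO4": 174.26,
    "KH2PO4": 136.09,
}

def grams_needed(conc_mM, volume_L, molar_mass):
    # mass (g) = concentration (mol/L) * volume (L) * molar mass (g/mol)
    return conc_mM / 1000.0 * volume_L * molar_mass

k2so4_g  = grams_needed(0.75, 10, MOLAR_MASS["K2SO4"])    # ~1.31 g per 10 L
kh2po4_g = grams_needed(0.5, 10, MOLAR_MASS["KH2PO4"])    # ~0.68 g per 10 L
```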
Semiquantitative RT-PCR
Root tissues were ground in liquid nitrogen using mortar and pestle, and total RNA was extracted using Trizol ® reagent (Invitrogen, Milano, Italy). First-strand cDNA synthesis was carried out using the iScript™ cDNA Synthesis Kit (Bio-Rad, Milano, Italy) according to the manufacturer's instructions. Actin was used as the housekeeping gene. Semiquantitative RT-PCR was carried out on the first-strand cDNA and the identity of the amplified fragments was verified by sequencing both strands. To detect differences in the cDNA expression level for each sample set, a variable number of amplification cycles, between 20 and 24 depending on gene templates, was tested. The thermal cycle program was: one initial cycle at 94°C for 5 min, followed by cycles at 94°C for 30 sec, 56°-60°C for 1 min, 72°C for 1 min, with 20-24 cycles for TDFs selected for the RT-PCR analysis, all followed by a final 72°C elongation cycle for 5 min. The amplified products were run on a 1% agarose gel without ethidium bromide. The gels were incubated for 30 min in 1 mM Tris-HCl (pH 8), 0.1 mM EDTA with 1‰ Vistra Green Nucleic Acid Stain (GE Healthcare Life Sciences, USA) as a fluorescent stain. Then, gels were scanned and bands were detected with the Typhoon 9200 high performance laser scanning system (GE Healthcare Life Sciences, USA).
For the internal reference amplification profile, the constitutive expression level was compared for each reaction by using primers against the actin transcript of cucumber ( Csactin , Genbank accession no AB010922 ) according to Waters et al [ 7 ]. RT-PCR analysis was also performed for CsFRO1 , CsIRT1 and CsHA1 (Genbank accession nos. AY590765 , AY590764 and AJ703810 , respectively) using specific primers according to Santi et al. [ 50 ] and Waters et al., [ 7 ]. The validation of all the steps of the experiment was done with three independent biological replicates each of them with two technical replicates.
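Band quantification against the actin internal reference reduces to a simple ratio. The sketch below (Python) uses hypothetical intensity values — it is not the study's data, only an illustration of the normalization step in a semiquantitative comparison:

```python
def relative_expression(target_intensity, actin_intensity):
    """Normalize a target band intensity to the actin reference
    (hypothetical values; semiquantitative comparison only)."""
    return target_intensity / actin_intensity

# hypothetical band intensities for a target transcript in +Fe vs. -Fe roots
plus_fe  = relative_expression(1200.0, 4000.0)   # 0.30
minus_fe = relative_expression(3600.0, 4000.0)   # 0.90
fold_change = minus_fe / plus_fe                 # 3.0-fold higher without Fe
```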
Extraction of protein samples for 2-DE analysis
Roots of plants grown in the presence or absence of Fe were harvested, rinsed in distilled H 2 O and homogenized in a buffer containing 50 mM TRIS-HCl (pH 7.5), 10 mM MgCl 2 , 10% (v/v) glycerol and 1 mM EDTA; 14 mM β-mercaptoethanol, 1 mM phenylmethylsulphonyl fluoride (PMSF) and 10 μg ml -1 leupeptin were added to avoid or minimize proteolysis [according to 51]. A ratio of 3 ml of buffer per 1 g of roots was used. The homogenate was centrifuged at 13 000 g for 15 min and the supernatant was again centrifuged at 100 000 g for 30 min. Proteins were then precipitated by adding four volumes of pre-cooled 12.5% TCA in acetone and incubating them at -20°C overnight. Precipitated proteins were recovered by centrifuging at 13 000 g at 4 °C for 30 min and then washed twice with cold 80% (v/v) acetone. The final pellet was dried under vacuum and dissolved in IEF buffer [7 M urea, 2 M thiourea, 3% (w/v) CHAPS, 1% (v/v) NP-40, 50 mg mL -1 DTT and 2% (v/v) IPG Buffer pH 4-7 (GE Healthcare Life Sciences, USA)] by vortexing and incubating for 1 h at room temperature. Samples were centrifuged at 10 000 g for 10 min and the supernatants stored at -80°C until further use. The protein concentration was determined by 2-D Quant Kit (GE Healthcare Life Sciences, USA). For each condition, three biological replicates were obtained.
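The four-volume TCA/acetone addition fixes the final TCA concentration by simple dilution: one volume of extract plus four volumes of 12.5% stock gives 12.5 × 4/5 = 10% TCA. In code (Python; a trivial helper for checking such dilutions):

```python
def final_concentration(stock_pct, stock_volumes, sample_volumes=1):
    # C_final = C_stock * V_stock / V_total
    return stock_pct * stock_volumes / (stock_volumes + sample_volumes)

tca_final = final_concentration(12.5, 4)   # 10.0 % TCA after mixing 4:1
```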
2-DE analysis
Protein samples (400 μg) were loaded on pH 4-7, 24 cm IPG strips passively rehydrated overnight in 7 M urea, 2 M thiourea, 3% (w/v) CHAPS, 1% (v/v) NP-40, 10 mg mL -1 DTT and 0.5% (v/v) IPG Buffer pH 4-7. IEF was performed at 20 °C with a current limit of 50 μA/strip for about 50 kVh in an Ettan IPGphor (GE Healthcare Life Sciences, USA). After IEF, strips were equilibrated by gentle stirring for 15 min in equilibration buffer [100 mM Tris-HCl pH 6.8, 7 M urea, 2 M thiourea, 30% (w/v) glycerol, 2% (w/v) SDS] supplemented with 0.5% (w/v) DTT for disulfide bridge reduction and for an additional 15 min in the same equilibration buffer supplemented with 0.002% (w/v) bromophenol blue and 4.5% (w/v) iodoacetamide for cysteine alkylation. Second-dimension SDS-PAGE was run in 10% acrylamide gels using the ETTAN DALT six apparatus (GE Healthcare Life Sciences, USA). Running was first conducted at 5 W/gel for 30 min, followed by 15 W/gel until the bromophenol blue front ran off the gel. For each biological replicate, two technical replicates were performed (n = 6).
Protein visualization and data analysis
Gels were stained using the colloidal Coomassie Brilliant Blue G-250 (cCBB) procedure, as previously described by Neuhoff et al. [ 52 ]. The gels were scanned on an Epson Expression 1680 Pro scanner and analyzed with ImageMaster 2-D Platinum Software v6.0 (GE Healthcare Life Sciences, USA). Automatic matching was complemented by manual matching. Molecular weights of the spots were estimated using a wide-range migration standard (MW 6,500-205,000; GE Healthcare), while pI was determined according to the strip manufacturer's instructions (GE Healthcare Life Sciences, USA).
In this analysis, only spots showing at least a two-fold change in expression and having a relative spot volume average (%Vol) larger than 0.08 in at least one of the four treatments were considered for subsequent steps. To identify differentially expressed proteins, all values were log(z+1) transformed and a two-way ANOVA ( p < 0.001), with Fe and days of treatment as factors, was carried out. Significant differences linked to the factor Fe were analyzed through a two-way hierarchical clustering methodology, using the software PermutMatrix as previously described by Negri et al. [ 53 ].
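A minimal sketch of this selection and transformation step (assuming the two-fold change is taken between the highest and lowest %Vol across the four treatments, which the text does not state explicitly; the function names are illustrative):

```python
import math

def select_spot(vols, fold=2.0, min_vol=0.08):
    """Retain a spot only if it shows at least a `fold` change between
    two of the four treatments and its %Vol exceeds `min_vol` in at
    least one treatment."""
    hi, lo = max(vols), min(vols)
    fold_ok = (lo == 0 and hi > 0) or (lo > 0 and hi / lo >= fold)
    return fold_ok and hi > min_vol

def log_transform(vols):
    """log(z + 1) transform applied to %Vol values before the two-way ANOVA."""
    return [math.log(v + 1.0) for v in vols]
```

Spots passing the filter would then be tested with a two-way ANOVA (factors: Fe supply and days of treatment) on the transformed values.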
Protein in-gel digestion and LC-ESI-MS/MS analysis
Spots excised from the cCBB gels were digested as described by Prinsi et al. [ 54 ]. The LC-ESI-MS/MS experiments were conducted using a Surveyor (MS pump Plus) HPLC system directly connected to the ESI source of a Finnigan LCQ DECA XP MAX ion trap mass spectrometer (ThermoFisher Scientific Inc., Waltham, USA). Chromatographic separations were performed on a reverse-phase C18 column (200 μm I.D. × 150 mm length, 5 μm particle size), using a gradient from 5% to 80% solvent B [solvent A: 0.1% (v/v) formic acid; solvent B: ACN containing 0.1% (v/v) formic acid] at a flow rate of 2.0 μl/min. ESI was performed in positive ionization mode with the spray voltage and capillary temperature set at 2.5 kV and 220°C, respectively. Data were collected in full-scan and data-dependent MS/MS mode with a collision energy of 35% and a dynamic exclusion window of 3 min.
Spectra were searched using TurboSEQUEST® incorporated in BioworksBrowser 3.2 software (ThermoFisher Scientific Inc., Waltham, USA) against the Cucumis protein subset, the Cucumis sativus EST subset and the NCBI-nr protein database, all downloaded from the National Center for Biotechnology Information http://www.ncbi.nlm.nih.gov/ . The searches were carried out assuming parent ion and fragment ion mass tolerances of ± 2 Da and ± 1 Da, respectively, two possible missed cleavages per peptide, fixed carboxyamidomethylation of cysteine and variable methionine oxidation. Positive hits were filtered on the basis of peptide scores [Xcorr ≥ 1.5 (+1 charge state), ≥ 2.0 (+2 charge state), ≥ 2.5 (≥ 3 charge state), peptide probability < 1 × 10⁻³, ΔCn ≥ 0.1 and Sf ≥ 0.70]. If needed, identified peptides were subjected to a protein similarity search by alignment against the NCBI-nr database using the FASTS algorithm http://fasta.bioch.virginia.edu/fasta_www2/ [ 55 ]. Theoretical molecular masses and pIs of characterized proteins were calculated by processing sequence entries at http://www.expasy.org/tools/pi_tool.html . | Results
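The peptide-level acceptance criteria above can be written as a single predicate; this is a sketch of the listed thresholds, not TurboSEQUEST's actual output format (the argument names are illustrative):

```python
def passes_filter(charge, xcorr, probability, delta_cn, sf):
    """Apply the acceptance thresholds used for SEQUEST hits:
    Xcorr >= 1.5 (+1), 2.0 (+2) or 2.5 (>= +3 charge state),
    peptide probability < 1e-3, DeltaCn >= 0.1 and Sf >= 0.70."""
    xcorr_min = 1.5 if charge == 1 else 2.0 if charge == 2 else 2.5
    return (xcorr >= xcorr_min
            and probability < 1e-3
            and delta_cn >= 0.1
            and sf >= 0.70)
```

A hit failing any one criterion (e.g. a +3 peptide with ΔCn below 0.1) is rejected regardless of its other scores.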
Experimental planning and 2-DE analysis
In this study, the changes in the protein profile of cucumber roots in response to Fe deficiency were analyzed. The choice to collect proteins after 5 and 8 days of growth was based on a preliminary analysis in which we assessed the increases in transcript abundance related to the Strategy I adaptation responses occurring under Fe starvation (Figure 1A and 1B ) and on previous biochemical evidence obtained by our laboratory [ 18 , 19 , 34 ]. Figure 1B shows the rapid increase in the mRNAs encoding the three typical Strategy I proteins. The expression of CsFRO1 and CsIRT1 increased strongly at early stages, whereas the increase in CsHA1 occurred later after the induction of Fe deficiency. Plants grown for 8 d showed the highest response for all three transcripts. Soluble (cytosolic) proteins were obtained from roots of plants grown in the presence or in the absence of Fe, after centrifugation to eliminate any possible contamination by organelles and endomembranes. Proteins were subsequently separated by 2-DE. Figure 2 shows representative two-dimensional gel electrophoresis maps of soluble proteins isolated from roots of plants grown for 5 and 8 d in the presence or in the absence of Fe.
Hierarchical clustering analysis
The comparison between the control and the -Fe treatment showed that 57 protein spots were differentially expressed. These spots were subjected to two-way hierarchical clustering analysis using the PermutMatrix software [ 35 ]. Figure 3 presents the results and shows the pairwise comparison of protein levels for the two time points and the two Fe treatments. The protein spots were sorted into two main groups: those showing a decreased abundance in the presence of Fe and those accumulating in the presence of the ion. Focusing on the lower-level groupings, it is interesting to note that protein behavior at the two time points was quite similar but not identical: although most differences were more marked after 8 d, some others (e.g. spots 724, 1341, 1321) were essentially associated with the 5-d stage. These results underline that the cucumber root response can be slightly but significantly affected by traits peculiar to the stage of Fe deficiency considered.
Comparative analysis of the soluble proteins under Fe deficiency
The 57 spots of interest were analyzed by LC-ESI-MS/MS. Forty-four of them were identified; they are listed in Tables 1 and 2 and indicated by numbers in Figure 4 . Numbers in red in Figure 4 identify proteins whose amount increased, while numbers in green identify proteins whose amount decreased under Fe deficiency. Statistical information about the LC-ESI-MS/MS analysis is reported in Additional file 1 .
Some of the proteins were identified in more than one spot in the 2-DE gel. The variability in the level of proteins belonging to the same family suggests the presence of different isoforms, which can be subjected to different post-translational modifications.
Twenty-one protein spots out of 44 showed increased accumulation (Table 1 ) in the absence of Fe, with a further increase in the pairwise comparison after 8 d (Figure 3 ). The proteins increased under Fe deficiency were sorted into four different functional classes (Figure 5A ) on the basis of data available in the literature. All the identified proteins except one (spot number 724) were characterized as enzymes, and most of them (43%) belong to the glycolytic/gluconeogenetic pathways, confirming the proteomic [ 29 - 31 ] and biochemical data obtained by several groups [ 18 , 19 , 22 ] and the prediction from microarray analysis of Fe-deficient Arabidopsis [ 28 ]. We have also considered spot number 954 (pyrophosphate-fructose-6-phosphate 1-phosphotransferase) as belonging to this group, since under Fe deficiency it follows the increasing trend shown by the other glycolytic enzymes. In fact, after 8 d there is a substantial increase in the level of this protein notwithstanding an initial decrease. This increase is corroborated by an enzymatic assay showing that after 8 d of Fe deficiency the activity is increased two-fold (data not shown). A second group of proteins (19% of the total) was classified as belonging to general carbohydrate metabolism. In this group we have included the spot identified as malate dehydrogenase (number 1739) and two spots corresponding to alcohol dehydrogenase (numbers 1519 and 1593). Among them, one spot (number 2613) is of particular interest since it appears only after 8 d of Fe deficiency and was identified as a galactokinase. A third group (24%) belongs to nitrogen metabolism and includes alanine aminotransferase (spot number 1195), two spots corresponding to S-adenosylmethionine synthase (numbers 1321 and 1341), glutamine synthetase 1 (number 2607) and a spot identified as a C-N hydrolase (number 1760). The remaining 14% of the proteins belongs to cellular redox proteins and others.
One spot (number 724) corresponds to a heat shock protein 70, while the other two spots match a protein disulfide isomerase (PDI, number 858), which catalyses the formation, isomerization and reduction/oxidation of disulfide bonds [ 36 ], and an old yellow enzyme-like protein (OYE) (number 1515), the first enzyme shown to contain flavins as cofactors. Proteins of the OYE family can use NADPH, NADH or both, thus classifying them as NAD(P)H oxidoreductases [ 37 ].
Twenty-three of the 44 identified protein spots were decreased in quantity (Table 2 ) under Fe deficiency. Among these, 11 were characterized as enzymes and 13 as structural or stress-response proteins. The proteins decreased in quantity were also sorted into five functional classes according to the literature (Table 2 and Figure 5B ): some proteins (22%) are involved in the metabolism of sucrose and complex structural carbohydrates, such as invertase (spot numbers 586, 588, 596), 1,4-β-xylosidase (spot 712) and UDP-glucose dehydrogenase (spot 1169). A second group (39%) was identified as structural proteins (spot numbers 1113, 1176, 1217, 1433, 1438, 1442, 1454, 1637 and 1676) and a third one (9%) as stress-response proteins (spot numbers 757 and 758). The fourth group (13%) comprises Fe-containing proteins, such as aconitase (numbers 349 and 350) and peroxidase (number 1543). The last group (17%) contains a PDI-like protein (spot 871), the beta subunit of the mitochondrial ATPase (spot 1106), an S-adenosylmethionine synthase (spot 1340) and a wali7-like protein (spot 2186).
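The functional-class percentages quoted here follow directly from the per-class spot counts; as a check, the counts inferred from the spot numbers listed in this paragraph reproduce the stated values for the 23 decreased spots:

```python
def class_percentages(counts):
    """Convert per-class spot counts into rounded percentages of the total."""
    total = sum(counts.values())
    return {name: round(100 * n / total) for name, n in counts.items()}

# Counts inferred from the spot numbers listed in the text (23 spots in total)
decreased = {
    "sucrose/structural carbohydrates": 5,  # 586, 588, 596, 712, 1169
    "structural proteins": 9,               # 1113, 1176, 1217, 1433, 1438, 1442, 1454, 1637, 1676
    "stress response": 2,                   # 757, 758
    "Fe-containing": 3,                     # 349, 350, 1543
    "other": 4,                             # 871, 1106, 1340, 2186
}
```

Applied to these counts, `class_percentages` yields 22, 39, 9, 13 and 17%, matching the figures given above.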
Change in the protein level under Fe deficiency
Figure 6 reports the changes in the relative spot volumes of proteins that increased in quantity under Fe deficiency. For most of the proteins there was an increasing trend between the 5th and the 8th day of Fe starvation, indicating that the response lasts for several days after its induction. As stated before, most of these proteins belong to the glycolytic pathway, confirming previous biochemical results showing an increased activity of some of these enzymes. Three proteins decreased to the level of the control only after 8 d of Fe starvation (spot numbers 724, 1321 and 1341). The first is a heat shock protein with a molecular weight of 70 kDa (HSP70), and its early increase is not easily understood, since other proteins (spot numbers 757 and 758) identified as HSP70 decrease under Fe starvation (see Table 2 and Figure 7 ). The other two proteins (spot numbers 1321 and 1341) were identified as S-adenosylmethionine synthase. This enzyme is the starting point of the metabolic pathway for the biosynthesis of nicotianamine (NA) [ 38 ] and of the phytosiderophores of the mugineic acid family. Nicotianamine is considered a Fe transporter in Strategy I plants. From the phenotype of the NA-auxotroph tomato mutant chloronerva, a key role for nicotianamine in the transport of Fe taken up by the roots to the shoots was postulated [ 39 ].
Figure 7 reports the changes in the relative spot volumes of proteins that were reduced in quantity during Fe deficiency. As stated before, most of these proteins belong to the structural protein or stress-response protein groups. Interestingly, other decreases correspond to enzymes related to carbohydrate metabolism and linked to the biosynthesis of cell wall polysaccharides (spot numbers 586, 588, 596, 712 and 1169), in good agreement with the hypothesis that these carbohydrate units are recycled. Also, Fe-containing enzymes (aconitase, spot numbers 349 and 350, and peroxidase, spot number 1543) are decreased, consistent with a reduced level of Fe in the cell. | Discussion
In this work we have analyzed by 2-DE the soluble proteins extracted from roots of cucumber grown in the presence or in the absence of Fe at two time points, 5 d and 8 d. Recently, several proteomic studies on Fe deficiency responses have appeared in the literature [ 29 - 31 ]. The first two papers reported the differential expression of proteins in two tomato lines: the T3238-FER genotype and its Fe uptake-inefficient mutant T3238-fer. The former [ 29 ] aimed at identifying a diverse set of differentially accumulated proteins under the control of FER and/or Fe supply, while the latter [ 30 ] examined total root proteins extracted from these two tomato genotypes, with increases/decreases evaluated at a single time point after one week of treatment. The third paper [ 31 ] reports changes in the proteomic profiles of sugar beet root tips in response to Fe deficiency and resupply.
To correlate the metabolic evidence so far obtained in roots of Fe-deficient plants, we restricted our research to the soluble cytosolic proteins, in order to avoid any interference by other cellular systems. Furthermore, we applied a second restriction by characterizing only those spots that showed a two-fold increase or decrease. Under these experimental conditions, 44 proteins that changed their level of accumulation were identified. Twenty-one out of 44 increased their concentration under Fe deficiency. Among these, the majority (43% of the total) are enzymes belonging to the glycolytic pathway, confirming previous biochemical data suggesting the involvement of metabolism, and in particular of glycolysis, in the response to Fe deficiency. In fact, previous biochemical evidence had shown that under these growing conditions the activities of hexokinase (HK), ATP-dependent phosphofructokinase-1 (ATP-PFK1), glyceraldehyde 3-phosphate dehydrogenase (GAP-DH) and pyruvate kinase (PK) were increased [ 18 , 19 , 34 ]. Surprisingly, none of these enzymes was detected in this proteomic study, but other enzymes of this pathway, such as PP-dependent phosphofructokinase (PP-PFK), aldolase, phosphoglycerate kinase (PGK), phosphoglycerate mutase (PGM) and enolase, were detected and found to be enhanced by Fe deficiency. This discrepancy could be explained by several factors. First of all, it is always risky to strictly link protein levels to their activities: these glycolytic enzymes, in fact, are known to be highly regulated by allosteric mechanisms [ 23 ]. In our case, it is thus possible that such mechanisms act in concert with slight increases in the amount of protein, which might not pass the statistical threshold applied before MS analysis.
The incomplete match between the levels of some glycolytic enzymes and their activities is also supported by gene expression data: the microarray analysis conducted on Arabidopsis revealed that only ATP-PFK1, PGK, PGM and enolase transcripts increase in Fe-deficient roots after seven days of Fe starvation, while HK, GAP-DH and PK transcripts decrease, corroborating in some way our proteomic data [ 28 ]. Finally, the peculiarities of the electrophoretic approach must be taken into account. For instance, it is possible that some glycolytic enzymes were not considered in this analysis because of the pI or molecular weight ranges employed, comigration phenomena or staining saturation.
The same major discrepancy occurs for PEPC, whose increase in activity was around four-fold in cucumber roots but which was not detected in this proteomic study. The same discrepancy was also found in the proteomic study carried out on sugar beet root tips [ 31 ]. However, the amount of protein determined by immunochemical identification indicated a consistent increase after 10 d of Fe starvation, while at the time points used in this work the enhancement between the control and -Fe conditions was less evident [ 21 ] and perhaps below the two-fold increase threshold required for identification by mass spectrometry. Furthermore, the increase in the activity of PEPC could be related to the complex regulation of this enzyme exerted by the positive allosteric effector glucose-6-P, whose level has been shown to increase under Fe deficiency [ 19 ], and by post-translational regulation [ 40 ]. These data are in agreement with the microarray analysis [ 28 ] done in Arabidopsis , which shows that the PEPC transcript increases only at the 5th d of Fe deficiency, while at the 7th d the transcript is undetectable. While our data on the glycolytic enzymes are in good agreement with those obtained by Rellán-Álvarez et al. [ 31 ], they agree only in part with those of Li et al. [ 30 ], who found that only enolase and triose-P-isomerase increased their levels while, on the contrary, aldolase activity decreased; from this point of view, our data give a much more complete picture of the involvement of glycolytic enzymes.
The increase in the glycolytic pathway under Fe deficiency has been confirmed by biochemical data obtained by several groups [ 18 , 19 , 22 ] and by the proteomic data described in this work, and is in agreement with the greater demand for energy, reducing equivalents and carbon skeletons to sustain the larger energetic effort and the need for substrates for the synthesis of the large amounts of mRNAs and proteins related to this response [ 41 , 42 ]. Another interesting result is the increase in alcohol dehydrogenase (spot numbers 1519 and 1593), which confirms the involvement of anaerobic metabolism in the response to Fe deficiency [ 22 ]. This increase is also in agreement with the microarray study in Arabidopsis [ 28 ], in which the transcript for alcohol dehydrogenase was found to be increased.
The metabolic changes induced by Fe deficiency are not confined to glycolysis: other pathways also seem to be rearranged as a consequence of this stress, as occurs for instance in the mitochondria [ 27 , 33 ]. In fact, we found that enzymes related to carbohydrate metabolism might be either suppressed or increased. In particular, enzymes related to the biosynthesis of cell wall polysaccharides, such as invertase, 1,4-β-xylosidase and UDP-glucose dehydrogenase (UDP-Glc-DH), are decreased (Table 2 ). The decrease in the biosynthesis of cell wall polysaccharides in Fe-deficient roots would mean a decrease in carbon flux towards cell wall synthesis (likely less important under these conditions), favoring instead glycolysis and other biosynthetic pathways. Moreover, in conditions where the photosynthetic apparatus might be damaged or not working properly, the cell wall can be considered a temporary source of carbohydrates. Consistent with this change in metabolism, we found an increased concentration of galactokinase after 8 d of Fe deficiency, which would channel carbon skeletons originating from cell wall degradation to fuel glycolysis. This enzyme is involved in the metabolism of D-galactose-containing oligo- and polysaccharides and occurs in various plants. The raffinose family of oligosaccharides (RFOs) ranks next to sucrose in abundance in the plant kingdom [ 43 ]. The plant cell wall contains numerous polysaccharides consisting of a wide range of different sugar residues. An analysis of Arabidopsis identified glucose, rhamnose, galactose, xylose, arabinose and galacturonic and glucuronic acids as the major sugar constituents of the cell wall [ 44 ], while a study of the changes in metabolites occurring in sugar beet root tips under Fe deficiency showed a large increase in RFO sugars [ 31 ]. Galactokinase belongs to a sugar-1-P kinase family which accounts for the hydrolysis and recycling of pectic polymers.
RFOs might therefore be an important source of rapidly metabolisable carbon, in addition to functioning as antioxidants [ 31 ] (ROS detoxification has been observed in Fe-deficient roots [ 45 ]); thus, the increase in RFOs could help to alleviate the ROS damage produced under Fe deficiency. The simultaneous decrease in enzymes involved in cell wall synthesis might lead to the observed stunted growth of roots under Fe deficiency. Changes in cell wall metabolism have also been observed in Fe-deficient tomato roots [ 30 ], and the decrease in invertase activity could, as suggested by Li et al. [ 30 ], decrease the relative level of fructose and explain why a down-regulation of fructose metabolism was found in these roots.
Another important group of proteins that increase under Fe deficiency is related to nitrogen metabolism (24%). S-adenosylmethionine synthase, alanine aminotransferase, glutamine synthetase 1 (the root isoform of GS) and a C-N hydrolase family protein belong to this group. Of these, only S-adenosylmethionine synthase shows a temporal increase limited to the first time point of Fe deficiency (Figure 6 ). This enzyme is involved not only in the biosynthesis of nicotianamine and of the phytosiderophores of the mugineic acid family [ 38 ], but also in the biosynthesis of ethylene, which has been reported to influence the response of Strategy I plants to Fe deficiency [ 7 ]. The other three proteins increase at both time points considered. Among them, the most interesting is the C-N hydrolase family protein. In fact, this family includes several enzymes that are involved in nitrogen metabolism and that cleave nitriles as well as amides. Utilization of these nitrogen compounds usually involves several reduction steps, the final one being the assimilation of NH4+ or its transfer to various intermediates such as keto acids [ 46 ]. It is well known that Fe deficiency leads to an increase in organic acid levels, which play different roles, one of which is linked to the synthesis of amino acids [ 25 ]. Our study also shows a decrease in the cytoskeleton proteins actin and tubulin, along with the storage protein globulin (Table 2 and Figure 7 ). An intriguing hypothesis we can derive from these results is that all these proteins might be recycled under Fe deficiency and used as a source of amino acids, carbon skeletons and nitrogen. This would be in agreement with the increase in the C-N hydrolase family protein and, even if with contrasting results, with the changes in two spots identified as PDIs. PDIs catalyse the rearrangement of disulfide bridges of proteins [ 47 ], and in Arabidopsis this family of proteins is encoded by 12 genes [ 48 ].
While spot number 858 (Table 1 and Figure 6 ) increases, the other one, spot number 871 (Table 2 and Figure 7 ), decreases, especially after 8 d. Contrasting results were also found for spots identified as heat shock proteins: in one case (spot number 724) we found an increase, while in two cases (spot numbers 757 and 758), on the contrary, a decrease was observed. PDIs and HSP70 are involved in the mechanism(s) of protein folding as molecular chaperones (HSP70) and protein folding catalysts (PDIs), thus ensuring proper folding of nascent polypeptides into functional proteins. This variability could be associated with a change in the ratio between biosynthesis and degradation of proteins, which could lead to the release of amino acids that might serve as both nitrogen and carbon sources. We are aware that this hypothesis is speculative, but the data obtained in this proteomic study support it. Furthermore, other data obtained in our laboratory (manuscript in preparation) show a decrease in the activity of enzymes of the nitrogen assimilatory pathway, since some of them, such as nitrate reductase and nitrite reductase, are Fe-dependent. | Conclusions
In conclusion, the data obtained in this proteomic profiling study confirm some of the metabolic changes occurring in response to Fe deficiency. In particular, our data support the increase in glycolytic flux and in anaerobic metabolism to sustain the energetic effort Fe-deficient plants must undertake. In fact, Fe deficiency leads to an impairment of the mitochondrial respiratory chain, so the cell must overcome this problem by activating alternative pathways to sustain the energetic requirement and the NAD(P)H turnover [ 33 , 49 ]. We also found a decrease in the amount of enzymes linked to the biosynthesis of complex carbohydrates of the cell wall and, on the other hand, an increase in enzymes linked to the turnover of proteins. In a scenario in which the production of new carbon skeletons is strongly impaired by a less efficient photosynthetic apparatus, the plant must face the increased demand for energy and organic compounds. This "cellular effort" seems comparable to that occurring in mammalian muscle, in which a strong energetic effort, caused by enhanced muscular activity, stimulates the anaerobic pathway to produce energy [ 27 ]. In Fe-deficient plants the effort is much more complex, since the contribution of photosynthesis is poor and the plant must recover carbon skeletons from other sources to sustain metabolism. We are aware that more work is necessary to better understand what happens under Fe deficiency, but the data obtained in the present proteomic work, along with those on metabolic activities, could cast new light on the responses of Fe-deficient plants. | Background
Iron deficiency induces in Strategy I plants physiological, biochemical and molecular modifications capable of increasing iron uptake from the rhizosphere. This effort requires a reorganization of metabolic pathways to efficiently sustain the activities linked to the acquisition of iron; in fact, carbohydrate and energy metabolism have been shown to be involved in these responses. The aim of this work was both to confirm the expected changes in enzyme concentrations induced in cucumber root tissue in response to iron deficiency and to gain new insight into the involvement of other pathways.
Results
The proteome pattern of soluble cytosolic proteins extracted from roots was obtained by 2-DE. Of the approximately two thousand spots found, only those showing at least a two-fold increase or decrease in concentration were considered for subsequent identification by mass spectrometry. Fifty-seven proteins showed significant changes, and 44 of them were identified. Twenty-one of them increased in quantity, whereas 23 decreased. Most of the increased proteins belong to glycolysis and nitrogen metabolism, in agreement with the biochemical evidence. On the other hand, the decreased proteins belong to the metabolism of sucrose and complex structural carbohydrates and to structural proteins.
Conclusions
Newly available techniques allow us to cast new light on the mechanisms involved in the changes occurring in plants under iron deficiency. The data obtained from this proteomic study confirm the metabolic changes occurring in cucumber as a response to Fe deficiency. Two main conclusions may be drawn. The first is the confirmation of the increase in glycolytic flux and in anaerobic metabolism to sustain the energetic effort Fe-deficient plants must undertake. The second is, on the one hand, the decrease in the amount of enzymes linked to the biosynthesis of complex carbohydrates of the cell wall and, on the other hand, the increase in enzymes linked to the turnover of proteins. | Authors' contributions
SD carried out protein extraction, 2-DE gel analysis and statistical analysis, and drafted the manuscript. BP carried out protein characterization by LC-ESI-MS/MS and analysed the MS data. ASN carried out the clustering and statistical analysis. GV carried out the RT-PCR analysis. LE coordinated the 2-DE gel and LC-ESI-MS/MS analyses. GZ participated in the strategic planning of the work, data analysis and writing of the manuscript. All the authors contributed to the discussion of the results and took part in the critical revision of the manuscript. All authors read and approved the final manuscript.
Supplementary Material | Acknowledgements
This work was supported by grants from MIUR and the Università degli Studi di Milano (PUR) | CC BY | no | 2022-09-26 23:35:35 | BMC Plant Biol. 2010 Dec 1; 10:268 | oa_package/8d/7b/PMC3016405.tar.gz |
PMC3016406 | 21143977 | Background
Thaxtomin A (TA) is the main phytotoxin produced by the pathogen Streptomyces scabies , the most important causal agent of potato common scab [ 1 , 2 ]. Production of TA is required for the development of disease symptoms [ 1 , 3 - 5 ], and application of the purified toxin on immature potato tuber tissues induces the production of scab-like lesions [ 6 ]. A wide variety of plant species are sensitive to exogenous application of TA, inducing symptoms ranging from growth inhibition, root stunting, and cell hypertrophy to cell death [ 3 , 4 , 7 ]. TA can also activate a genetic program of cell death in Arabidopsis thaliana cell suspensions [ 8 ].
Previous reports have shown that TA inhibits crystalline cellulose biosynthesis [ 9 ]. Recent evidence indicates that addition of TA to Arabidopsis seedlings decreased the stability of cellulose synthase (CESA)-complexes, releasing them from the plasma membrane to be accumulated in small microtubule-associated compartments [ 10 ]. This is similar to what has been described in response to another inhibitor of cellulose synthesis, isoxaben (IXB) [ 11 ]. Moreover, changes in gene expression induced in response to TA or IXB treatment were very similar, indicating that the mode of action of TA closely resembles that of IXB [ 10 , 12 ]. While mutant analyses suggest that IXB targets CESA3 and CESA6 [ 13 , 14 ], the mode of action and specific target of TA have not yet been identified.
The plant cell wall is important to maintain cell shape and strength in response to the high turgor pressure applied by the vacuole. Cellulose, the main glycan component of the plant cell wall, is organized into microfibrils, which are bound by hemicelluloses to form a network embedded in a matrix of pectins [ 15 ]. This strong but flexible arrangement of complex polysaccharides is important not only for the control of plant cell structure, expansion and position, but is also involved in several cellular processes, including cell differentiation, intercellular communication and defense responses [ 15 , 16 ]. The composition and organization of the plant cell wall change during the plant cell cycle, growth and differentiation, and can be altered in response to biotic and abiotic stress [e.g., [ 17 - 23 ]]. Previous reports have demonstrated the possibility of adapting or "habituating" plant cells to grow and divide in the presence of inhibitors of cellulose synthesis, such as IXB and dichlobenil (DCB), by adding incremental concentrations of the inhibitors over several cell generations [ 24 - 32 ]. While some variations were noted between different plant species, habituation was generally associated with a decrease in cellulose that was compensated by changes in the composition or organization of the cell wall, where the xyloglucan-cellulose network was partly or almost completely replaced by pectins. Likewise, plant cell cultures habituated to water and salt stresses presented modified cell walls, with a decrease in cellulose content, increases in hemicellulose and proteins, and a general reorganization of the pectin network [ 18 , 19 ]. Gene expression analyses in hormone-habituated cells, which are capable of unlimited growth in the absence of cytokinins, also suggested that this type of habituation was associated with changes in cell wall biochemistry [ 33 ].
Reciprocally, mutations perturbing cellulose synthesis or cell adhesion, as in the mutants tsd1/KORRIGAN [ 34 , 35 ] and tsd2 [ 36 , 37 ] respectively, led to hormonal habituation. These data demonstrate that there is a reciprocal link between the physiological, developmental or metabolic state of the cell and the composition of its cell wall.
In this work, we show that while inhibition of cellulose synthesis by TA can activate cell death in hybrid poplar cells, it is also possible to habituate poplar cell suspensions to grow and divide in the presence of lethal levels of TA. Habituation to TA was associated with modifications in the cell wall composition, with a decrease in crystalline cellulose and an increase in pectins. Interestingly, we found that TA-habituated cells cultured in the absence of TA have remained resistant to TA for more than three years. Remarkably, these cells also exhibited enhanced resistance to two other inhibitors of cellulose synthesis, IXB and DCB, and this resistance has been sustained for more than three years. To investigate the genetic mechanisms that are involved in establishing and maintaining resistance to TA, we have performed a global transcriptional analysis in TA-habituated cells cultured in the absence of TA. | Methods
Plant material and treatments
Hybrid poplar cell suspensions ( Populus trichocarpa x Populus deltoides H11-11) maintained in the dark in Murashige and Skoog (MS) medium (pH 5.7) supplemented with B5 vitamins [ 80 ] were subcultured weekly at a 1:4 dilution, or biweekly for TA-habituated cell suspensions. TA was produced and purified from Streptomyces scabies using oat bran broth cultures as described previously [ 5 , 8 ]. To study morphological changes and cell death, TA was dissolved in methanol (10 mM stock) and added to cell suspensions at the indicated final concentrations 3 d after subculture, or at the time of transfer for the habituation process. IXB (Crescent Chemicals Co., Inc., Islandia, NY, USA) and DCB (Sigma-Aldrich), both dissolved in methanol (10 mM stocks), were added to cell suspensions 3 d after subculture at the indicated final concentrations. Control cells were treated with the same volume of methanol. The final concentration of methanol added to cell suspensions during habituation was always less than 0.014% and had no effect on cell survival or growth.
Detection of cell death
The number of dead cells was determined by staining the cell cultures 1:1 with 0.4% trypan blue diluted in 140 mM NaCl and 3.5 mM K 2 HPO 4 as described before [ 8 ].
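The cell death scoring described above reduces to a simple proportion of trypan blue-stained cells among all cells counted. As a minimal illustrative sketch (the counts below are hypothetical, not data from this study):

```python
def percent_dead(stained_blue, total_counted):
    """Per cent of trypan blue-stained (dead) cells among all cells counted.

    Counts are typically pooled over several microscope fields.
    """
    if total_counted == 0:
        raise ValueError("no cells counted")
    return 100.0 * stained_blue / total_counted

# Hypothetical counts pooled over several fields: 87 stained of 300 total
print(round(percent_dead(87, 300), 1))  # 29.0
```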
TA-habituation of poplar cell suspensions
Habituation of poplar cell suspensions to 1.3 μM TA was performed by adding increasing levels of TA at each subculture over a period of 12 months, beginning with 0.1 μM TA. Two to four subcultures were performed between each step-up in TA concentration. Non-habituated hybrid poplar cells used as controls received the same volume of methanol as that added to TA-habituated cells. From a concentration of 0.4 μM TA onward, TA-habituated cells were subcultured every other week because of their reduced growth rate. After 12 months, half of the cells adapted to 1.3 μM TA were transferred to culture medium without TA. These cells are referred to as "TA(-)hab" cells. These cell suspensions were subcultured every two weeks in the absence of TA for an additional period of at least 18 months (more than 40 subcultures) before analyses were performed.
Cell wall purification
Suspension-cultured poplar cells were harvested by filtration 24 h after adding TA or methanol (control), and 10 d after subculture for TA(-)hab cells and non-habituated cells. Cells were quickly frozen in liquid nitrogen and kept at -80°C until further use. Plant cell walls were extracted as described [ 31 ]. In brief, cells were washed with potassium phosphate buffer, homogenized and treated with 2.5 units ml -1 of α-amylase (Sigma-Aldrich) for 4 h at 37°C. After centrifugation, the pellets were sequentially washed with potassium phosphate buffer, distilled water, acetone, methanol:chloroform and diethyl ether, and air-dried.
Sugar quantification
Sugars were quantified from cell wall extracts or from the crystalline cellulose fraction purified with a modified protocol from Updegraff (1969) [ 81 ]. Briefly, dry cell wall samples were boiled for 1 h in acetic-nitric reagent. The acid-insoluble fraction (crystalline cellulose) was recovered on a glass filter (GF/C, 2.5 cm diameter, Whatman) and washed three times with distilled water. Hydrolysis of whole cell wall extracts or of the crystalline cellulose fraction was performed as described by Ruiz and Ehrman (1996) [ 82 ]. In summary, 3 mL of 72% sulfuric acid was added to dry cell wall extracts or to the acid-insoluble fraction on the glass filter. Samples were incubated for 2 h in a 30°C water bath, diluted to a final concentration of 4% sulfuric acid and autoclaved at 121°C for 1 h. Fucose was added as an internal control and samples were filtered through a 0.45 μm nylon filter. Monosaccharides were quantified using a Dionex DX 500 HPLC system equipped with an ED40 electrochemical detector and a CarboPac PA10 ion exchange column. Uronic acids were extracted following the technique described by Redgwell and Selvendran (1986) [ 83 ]. Briefly, dry cell walls were incubated at room temperature with 50 mM cyclohexane-trans-1,2-diaminetetra-acetic acid sodium salt (CDTA) at pH 6.5 for 6 h. After centrifugation, the supernatant (CDTA-1 fraction) was preserved and the residue was extracted with CDTA for 2 h (CDTA-2 fraction). The CDTA-1 and CDTA-2 fractions were combined, filtered through a glass fibre filter, dialyzed and lyophilized. The uronic acid content of the CDTA fractions was determined by the m-hydroxydiphenyl colorimetric assay of Filisetti-Cozzi and Carpita (1991) [ 84 ]. The lyophilized uronic acid samples were suspended in 0.4 mL water, to which were added 40 μL of 4 M sulfamic acid/potassium sulfamate solution and 2.4 mL sulfuric acid. This mixture was heated at 100°C for 20 min and cooled on ice.
Thereafter, 80 μL of 0.15% m-hydroxydiphenyl diluted in 0.5% sodium hydroxide was added and mixed vigorously. After a 10-min incubation, absorbance was measured at 525 nm and the uronic acid content was estimated by comparison with a standard calibration curve of galacturonic acid.
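The final calibration step above, converting a sample's A525 reading into a uronic acid amount via a galacturonic acid standard curve, can be sketched numerically. In this illustrative Python snippet the standard amounts and absorbances are invented values, not measurements from this study:

```python
import numpy as np

# Hypothetical galacturonic acid standards (μg per assay) and their A525
# readings; illustrative values only, not data from this study.
std_ug = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
std_a525 = np.array([0.010, 0.105, 0.200, 0.390, 0.770])

# Linear calibration: A525 = slope * amount + intercept
slope, intercept = np.polyfit(std_ug, std_a525, 1)

def uronic_acid_ug(a525):
    """Interpolate the uronic acid amount (μg) from a sample's A525 reading."""
    return (a525 - intercept) / slope

# A sample reading equal to the 40 μg standard maps back onto 40 μg
print(round(uronic_acid_ug(0.390), 1))
```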
Confocal and electron microscopy
Cells examined by confocal microscopy were stained 1:1 with 0.01% fluorescein diacetate (Sigma), a fluorescent marker for cell viability. Confocal laser-scanning microscopy was performed with an Olympus microscope 1X70 equipped with an argon Fluoview laser. Cells observed by electron microscopy were fixed with 2% glutaraldehyde and 4% paraformaldehyde (buffered with 10 mM cacodylate, pH 7.4) for 4 h and post-fixed with 1% osmium tetroxide for 3 h. The specimens were dehydrated through ethanol series (30-50-70-80-90-100%) and embedded with Epon. Samples were examined with a Philips model 201 electron microscope.
Sample preparation for microarray analysis, data collection and analyses
Each sample was taken from an individual flask of non-habituated poplar cells or TA(-)hab cells grown for 5 d after subculture. Total RNA was extracted as previously described [ 8 ]. RNA quality assessment, synthesis of cRNA, labeling and hybridizations to the Affymetrix GeneChip Poplar Genome Array were performed at the Genome Québec Innovation Center (McGill University, Montréal, Canada) following Affymetrix recommended protocols. Six arrays were hybridized, representing 3 arrays per cell type. Data and statistical analyses were carried out using FlexArray 1.3 [ 46 ]. Raw signal intensities were normalized using the Robust Multi-array Average methodology (RMA) [ 45 ] and significance analysis of microarrays (SAM) [ 85 ] was performed to determine the differentially expressed genes. Raw data obtained for IXB-habituated cells (GSE6181 or NASCARRAYS-27) were also analyzed using RMA and SAM with the FlexArray software. Probesets corresponding to genes in TA(-)hab cells that had > 2.5 FC in expression with a P value < 0.05 when compared to non-habituated levels were selected. All materials and procedures complied with the MIAME standards set for microarray data [ 86 ]. The full dataset has been submitted to the Gene Expression Omnibus (GEO) and is available through GEO Series accession number GSE17804.
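The probeset selection criterion described above (fold change greater than 2.5 with P < 0.05 after SAM) amounts to a simple filtering step on the per-probeset statistics. The following sketch uses invented probeset IDs and values for illustration only:

```python
import pandas as pd

# Hypothetical per-probeset statistics (RMA-normalized, SAM-tested);
# IDs, fold changes and P values are invented for illustration.
df = pd.DataFrame({
    "probeset": ["Ptp.1", "Ptp.2", "Ptp.3", "Ptp.4"],
    "fold_change": [3.1, -4.2, 1.8, -2.6],  # TA(-)hab vs non-habituated
    "p_value": [0.010, 0.003, 0.040, 0.200],
})

# Keep probesets with |FC| > 2.5 and P < 0.05, then split by direction
signif = df[(df["fold_change"].abs() > 2.5) & (df["p_value"] < 0.05)]
up = signif[signif["fold_change"] > 0]["probeset"].tolist()
down = signif[signif["fold_change"] < 0]["probeset"].tolist()
print(up, down)  # ['Ptp.1'] ['Ptp.2']
```

Note that Ptp.3 fails the fold-change cutoff and Ptp.4 fails the P-value cutoff, so neither is retained.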
Gene annotation was performed using the PLEXdb database [ 47 ] http://www.plexdb.org , the PopArray database http://aspendb.uga.edu/poparray [ 48 ] and the NetAffx analysis tool from the Affymetrix website http://www.affymetrix.com . Ambiguous annotations were further confirmed using BLAST similarity searches. For gene models associated with more than one probeset (Additional file 3 Table S3), only the probeset with the highest FC was selected for further analyses (Additional file 2 Table S2). Predicted gene ontology (GO) annotations for each probeset were obtained using the AgriGO analysis tool http://bioinfo.cau.edu.cn/agriGO/ [ 49 ]. The MapMan 2.2 software [ 50 ] was used to visualize changes in expression in different metabolic pathways. A custom-made mapping file based on differentially expressed genes in TA(-)hab cells was assembled using information from the poplar Ptrich_AFFY_09 mapping file, which we updated with the most recent annotations.
qPCR validation of microarray results
Five upregulated and five downregulated genes in TA(-)hab cells according to the microarray analyses were randomly selected for quantitative PCR (qPCR) validation of the microarray results. Total RNA (from the same samples used for microarray analysis) was treated with Turbo DNA-free (Ambion, Austin, TX) to degrade genomic DNA. Reverse transcription was performed on 2 μg RNA as follows. RNA was mixed with 1 μg oligo(dT) 15 primer (Promega, Madison, WI) in a volume of 10 μL and incubated at 70°C for 5 min. Reverse transcription was achieved in a total volume of 25 μL after addition of dNTP (final conc. 1 mM), 15 U AMV reverse transcriptase (Promega) and 40 U RNasin (Promega), followed by incubation at 42°C for 1 h. Real-time PCR was performed with 2 μL (in a final volume of 20 μL) of 1:9 diluted cDNA. PCR conditions were 95°C for 3 min followed by 30 cycles of 95°C for 20 s, 55°C for 45 s and 72°C for 20 s. Primers used for validation are listed in Additional file 4 Table S4. Relative gene expression was calculated according to Pfaffl (2001) [ 87 ] using act11 as the reference gene [ 88 ]. Average FC values in TA(-)hab cells for the genes under study were log 2 -transformed and plotted against the log 2 -transformed FC from the Affymetrix GeneChip [ 58 ].
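The relative expression calculation of Pfaffl (2001) used above can be expressed compactly. This sketch assumes perfect amplification efficiencies (E = 2.0) for both the target gene and the act11 reference; the Ct values are hypothetical:

```python
def pfaffl_ratio(e_target, ct_target_ctrl, ct_target_sample,
                 e_ref, ct_ref_ctrl, ct_ref_sample):
    """Relative expression ratio after Pfaffl (2001):
    ratio = E_target**dCt_target / E_ref**dCt_ref,
    where dCt = Ct(control) - Ct(sample) for each gene."""
    d_ct_target = ct_target_ctrl - ct_target_sample
    d_ct_ref = ct_ref_ctrl - ct_ref_sample
    return (e_target ** d_ct_target) / (e_ref ** d_ct_ref)

# Hypothetical Ct values: the target amplifies 2 cycles earlier in the
# sample while the act11 reference is unchanged, i.e. 4-fold upregulation.
ratio = pfaffl_ratio(2.0, 25.0, 23.0, 2.0, 20.0, 20.0)
print(ratio)  # 4.0
```

Normalizing by the reference gene cancels differences in cDNA input between the control and habituated samples, which is why only the Ct shifts enter the ratio.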
Effects of TA on hybrid poplar cell suspensions
It was shown previously that TA induced an increase in cell volume in tobacco suspension cultures [ 7 ] and in Arabidopsis cells [ 8 ]. Similarly, some of the hybrid poplar suspension-cultured cells treated with 1.0 μM TA for 24 h were hypertrophied when compared to control cells treated with methanol (Figure 1A-B ). However, the increase in cell volume was less pronounced in poplar cells than in Arabidopsis cells. Similar changes were also observed when adding IXB (5.0 μM) or DCB (5.0 μM) (data not shown). As reported for Arabidopsis cell suspensions [ 8 ], TA induced cell death in poplar suspension cultures; 73% of the cells were dead 48 h after adding 1.0 μM TA (Figure 2 ). Cell death in poplar cells was also associated with nuclear DNA fragmentation, a typical hallmark of programmed cell death (PCD), as detected by the TUNEL assay (Additional file 1 Fig. S1).
TA has been shown to inhibit the incorporation of radioactive glucose in the acid-insoluble fraction of the cell wall, which corresponds to crystalline cellulose [ 9 ]. The effects of TA on the level of crystalline cellulose in poplar cells were analyzed by quantifying glucose in the acid-insoluble fraction of the cell walls. As indicated in Table 1 , cells in contact with TA for 24 h contained 12% less crystalline cellulose than control cells. These results indicated that TA rapidly inhibited the synthesis or incorporation of cellulose in the poplar cell walls, demonstrating that TA can also alter cellulose synthesis in a tree species.
Habituation of poplar cell suspensions to TA is associated with changes in cell wall composition
Plant cell habituation to inhibitors of cellulose synthesis such as DCB and IXB has been reported [ 24 , 26 , 28 , 30 , 32 ]. To habituate hybrid poplar cell suspensions to TA, we initially cultured them with a low level of TA (0.1 μM) that was gradually increased up to 1.3 μM over a period of 12 months. These cells became resistant to lethal TA concentrations. During the process of habituation, changes in cell morphology and growth rate were observed. When compared to non-habituated cells, TA-habituated cells were wider, rounder, twisted and formed aggregates (Figure 1C ). Their growth rate was also greatly reduced. In order to obtain a volume of cell inoculum similar to that of control cells, subculture of TA-habituated cells had to be performed every other week instead of weekly. TA-habituated cells were then subcultured in the absence of TA for at least 18 months before further characterization. This procedure had been termed "dehabituation" in previous work [ 31 ], but TA-dehabituated cells will hereafter be referred to as "TA(-)hab" cells. As observed in other habituated cells, TA(-)hab cells had a modified cell volume and a reduced growth rate, but they progressively became more elongated and did not form aggregates (Figure 1D ; Additional file 1 Table S1). Electron microscopy also revealed the accumulation of electron-dense material in fragmented vacuoles (Figure 1F-H ) and, in some cases, close to cell walls (Figure 1L ). Cell walls of TA(-)hab cells appeared as thick as those of control cells but were more opaque (Figure 1J-K ).
Habituation was associated with changes in cell wall composition. The proportion of the various monosaccharides evaluated in this work, including glucose, relative to total sugars (Table 2 ) did not differ significantly between the two cell types (Additional file 1 Fig. S2). However, TA(-)hab cell walls contained about 25% less glucose in the crystalline cellulose fraction (acid-insoluble fraction) than non-habituated cell walls. In addition, the overall level of glucose in the cell wall material was significantly reduced in TA(-)hab cells, while the estimated level of glucose remaining in the acid-soluble fraction was increased. This fraction is mainly composed of xyloglucans, non-crystalline β-1,4-glucans and pectins [ 9 ], thus supporting a general reorganization of the cell wall to compensate for the reduction in cellulose. The level of uronic acids was determined in the CDTA-soluble pectin fraction of dry cell walls. This value increased from 17.1 to 31.4 μg mg -1 cell wall in TA(-)hab cells, representing 1.8 times more CDTA-soluble pectins than in non-habituated cell walls. Microscopic analysis using ruthenium red for staining of pectic polysaccharides revealed a more intense staining in the cell walls of TA(-)hab cells compared to a very faint staining in control cells, further suggesting the accumulation of pectins in the cell walls of TA(-)hab cells (Additional file 1 Fig. S3).
Habituation to inhibitors of cellulose synthesis has frequently been associated with changes in the composition and organization of the cell wall, characterized by a decrease in cellulose content and an increase in the pectin network [ 24 , 26 , 28 , 30 , 38 , 39 ]. However, the extent to which the cell wall was modified varied widely between habituated cells depending on the species and inhibitor used. In TA(-)hab cells, the decrease in crystalline cellulose was much less substantial than that reported in bean cells habituated to IXB [ 28 ] or tomato cells habituated to DCB [ 24 ], where reductions of close to 72% and 97%, respectively, were observed. This may be because each inhibitor blocks cellulose synthesis through a different mode of action. It was also proposed that variations in the initial composition of the cell wall in different species could influence cell wall adaptations during the habituation process [ 25 , 26 ].
TA(-)hab cells are more resistant to TA, DCB and IXB
Resistance to TA was tested in TA(-)hab cells. Even after being subcultured in the absence of TA for more than three years, TA(-)hab cells still tolerated high levels of TA (Figure 2 ). Cell death was below 14% in the presence of 2.0 μM TA for 48 h, compared to 78% for non-habituated cells. In the presence of 20 μM TA, the level of cell death reached 38% for TA(-)hab cells, while 87% of non-habituated cells were dead. TUNEL assays performed on TA(-)hab cells treated with TA also indicated that DNA fragmentation was increasing in dying cells, suggesting that PCD was still activated in response to TA (Additional file 1 Fig. S1 G-I). These results suggest that a sub-population of TA(-)hab cells remained susceptible to TA. Because the modified composition of the cell walls of TA(-)hab cells was reminiscent of that of DCB- and IXB-habituated cells, TA(-)hab cells were tested for resistance to these inhibitors. A concentration of 5.0 μM was used for IXB because poplar cells were more tolerant to this inhibitor than other species: cell death remained below 40% after a 48-h treatment with 5.0 μM IXB, whereas about 45% of Arabidopsis thaliana cells died after a 48-h treatment with only 100 nM IXB [ 12 ]. Induction of cell death after treatment with DCB or IXB was always less pronounced in TA(-)hab cells than in non-habituated cells in all four assays performed over a three-year period. As shown in Figure 3 , more than 72% of hybrid poplar cells were killed by DCB after 48 h, compared to 37% of TA(-)hab cells. IXB treatment induced 32% cell death in hybrid poplar cells compared to 19% in TA(-)hab cells. Hence, habituation to TA not only provided specific resistance to the TA toxin itself but also enhanced cell survival in response to two other molecules known to inhibit cellulose synthesis.
Therefore, it is unlikely that resistance to TA is simply due to a detoxification mechanism that would transform TA into less toxic metabolites, as was reported for the fungus Aspergillus niger [ 40 ]. Such a specific mechanism could not operate on structurally different molecules such as DCB and IXB. It is also unlikely that enhanced resistance in TA(-)hab cells would be due to a modification of the inhibitors' targets, since each inhibitor is thought to perturb cellulose synthesis by targeting specific molecules, with IXB possibly targeting CESA subunits 3 and 6 [ 13 , 14 ], and DCB proposed to target either a small protein of 12-18 kD [ 41 ] or the microtubule-associated protein MAP20 [ 42 ]. In any case, habituation to TA most probably activated a mechanism that enhanced resistance to inhibition of cellulose synthesis per se rather than to the inhibitory molecules themselves.
Since TA-, DCB- and IXB-habituated cells all presented a modified cell wall composition in which pectins accumulated to compensate for the reduced cellulose level, it is tempting to speculate that enhanced resistance to inhibition of cellulose synthesis was due to cell wall adaptations that occurred during habituation. As found for TA(-)hab cells, it was reported that DCB-habituated bean cells cultured in the absence of DCB for several months (DCB-dehabituated cells) were still resistant to lethal levels of DCB [ 38 , 43 ]. The fact that dehabituated cells retained a high level of resistance even when cultured in the absence of the inhibitor supports previous reports suggesting that a durable mechanism is activated during the habituation process [ 26 , 30 , 38 ]. However, while DCB-dehabituated cells were still resistant to DCB, the composition of their cell walls was progressively restored close to control levels after being cultured in the absence of DCB for more than 6 months, retaining a higher proportion of pectins with a lower degree of methyl-esterification than in habituated cells [ 31 , 38 , 44 ]. This contrasts with TA(-)hab cells, which had a reduced cellulose content even when cultured for more than 18 months in the absence of TA. The DCB results suggest that major changes in cell wall composition, such as reduced cellulose and increased pectins, were not required for resistance to DCB. Garcia-Angulo et al. (2009) [ 43 ] proposed that the cellulose synthesis machinery in DCB-dehabituated cells would be less effective but more resistant to DCB. Mutations affecting the cellulose biosynthesis machinery could be responsible for the enhanced and durable resistance to DCB in those cells [ 43 ]. It is possible that mutations in components of the cellulose synthesis machinery could lead to defective cellulose synthesis in TA(-)hab cells. However, it is less likely that such mutations would lead to increased tolerance to different inhibitors of cellulose synthesis.
Further investigations will be required to determine whether reduced cellulose synthesis in TA(-)hab cells is caused by mutations affecting the cellulose synthesis machinery or due to the activation of a mechanism of adaptation to inhibition of cellulose synthesis.
Habituation to TA is associated with important transcriptional changes
To study the genetic mechanisms that may be involved in TA resistance and in maintaining this resistance in TA(-)hab cells, we performed a global transcriptional analysis in TA(-)hab cells. While transcriptional changes do not directly represent the overall physiological or metabolic state of plant cells, modifications in gene expression provide good indications of how plant cells respond to changing environments and how these responses are sustained at the gene expression level. Microarray analysis was carried out using the Affymetrix GeneChip Poplar Genome Array. Data were normalized and analyzed by Robust Multi-Array Average (RMA) [ 45 ] using the FlexArray software [ 46 ]. Probesets with a more than 2.5-fold change (FC) in expression in TA(-)hab cells when compared to non-habituated cells and a P value ≤ 0.05 following significance analysis of microarrays (SAM) were selected as being up- or downregulated (Additional file 2 Table S2 and Additional file 3 Table S3). Overall, 404 probesets corresponding to 346 predicted genes were upregulated in TA(-)hab cells and 880 probesets associated with 764 predicted genes were downregulated. Validation of the microarray results was performed using qPCR for five genes upregulated and five genes downregulated in TA(-)hab cells. As shown in Figure 4 and Additional file 4 Table S4, qPCR results were strongly correlated with the microarray data. Regression analysis of log 2 -transformed FC yielded the line y = 1.022x - 0.0027 with R 2 = 0.9542 ( P < 0.0001), demonstrating the high precision of the GeneChip Poplar Genome Array data.
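The qPCR-microarray agreement reported above can be reproduced in outline: fit a least-squares line through paired log2 fold changes and compute the coefficient of determination. The fold-change values below are hypothetical, not those of the ten validated genes:

```python
import numpy as np

# Hypothetical paired log2 fold changes for ten genes (not the study's data)
microarray = np.array([2.1, 3.4, -1.8, 4.0, -2.5, 1.6, -3.1, 2.8, -1.2, 3.7])
qpcr       = np.array([2.0, 3.5, -1.9, 4.2, -2.4, 1.7, -3.0, 2.9, -1.1, 3.6])

# Least-squares line y = a*x + b through the paired values
a, b = np.polyfit(microarray, qpcr, 1)

# Coefficient of determination R^2 of the fitted line
pred = a * microarray + b
r2 = 1.0 - np.sum((qpcr - pred) ** 2) / np.sum((qpcr - qpcr.mean()) ** 2)
print(round(a, 3), round(r2, 3))
```

A slope near 1 with R^2 near 1, as in the regression reported in the text, indicates that the two platforms rank and scale the fold changes consistently.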
Candidate gene annotations were performed using PLEXdb [ 47 ], the PopArray database [ 48 ] and the NetAffx analysis tool from the Affymetrix website http://www.affymetrix.com as described in Methods. Gene products and functions were mainly predicted based on sequence homology. The names of predicted poplar genes were indicated when available. Otherwise, the putative function of the closest Arabidopsis homologous gene was indicated to facilitate comparison (Additional file 2 Table S2). Because the actual function of most poplar genes remains to be shown, some of the predicted functions may be incorrect, as similar sequences may have different functions in different species. Gene ontology analysis was performed using the AgriGO analysis toolkit and database (Figure 5 ) [ 49 ]. Predicted genes that had no GO annotations (258 downregulated genes, 128 upregulated genes) were classified in the "unknown biological process" category. Among downregulated genes (Figure 5 ), the most frequent annotations were related to metabolic process (24.1%, including 3.3% in secondary metabolic process), cellular process (22.0%), response to stimulus (11.1%, including 6.4% in the stress category), localization and transport (7.3%) and biological regulation (7.2%). These same categories were also highly represented among upregulated genes, with 24.0% of annotations in metabolic process, 26.6% in cellular process (including 5.8% for transcription), 8.1% in response to stimulus (including 5.5% for response to stress) and 6.7% in localization and transport. Moreover, upregulated genes included 5 GO annotations (1.4%) for chromatin assembly or disassembly, in a reference group that contains only 79 genes.
Comparison with other habituation experiments
Manfield et al. (2004) characterized global gene expression using the Affymetrix ATH1 GeneChip in Arabidopsis cells that were habituated to IXB [ 32 ]. These cells contained less glucose and more pectins in their cell walls. IXB-habituated (hereafter referred to as "IXBhab") cells were still grown in the presence of IXB, in contrast to TA(-)hab cells, which were subcultured in the absence of TA. As mentioned earlier, there is experimental evidence suggesting that the mode of action of TA resembles that of IXB, although each molecule individually activates a few distinctive responses [ 10 , 12 ]. Hence, the identification of conserved patterns of gene expression in both experiments could help identify the mechanisms that are involved in providing resistance to inhibitors of cellulose synthesis. However, it is essential to keep in mind the important differences in species, growth conditions, method of habituation and type of microarray analyses when examining these results. In order to compare gene expression data in IXBhab cells with those of TA(-)hab cells, raw microarray data (CEL file) from IXBhab cells available at GEO (GSE6181) or NASC (NASCARRAYS-27) were analyzed using RMA and SAM with the FlexArray software. Genes that displayed a more than 2-fold change in expression and a P value ≤ 0.05 following SAM were selected for comparison (Additional file 5 Table S5). With this method, more genes were considered to be significantly up- or downregulated in IXBhab cells than previously reported, but the expression of genes already reported to be upregulated or downregulated followed the same trend [ 32 ]. Gene expression in TA(-)hab cells was first compared with data from IXBhab cells using the closest AGI predicted for each poplar probeset (Additional file 2 Table S2).
However, since matching AGIs are predicted on the basis of sequence homology, it is possible that similar sequences may encode proteins with different functions and, conversely, that divergent sequences encode proteins with similar functions. To overcome some of the difficulties in comparing gene expression between different species, we chose to use the MapMan software [ 50 , 51 ] to evaluate globally how different cellular processes and metabolic pathways are affected in TA(-)hab cells when compared to IXBhab cells. We assembled a MapMan mapping file based on expression data from TA(-)hab cells using the poplar Ptrich_AFFY_09 mapping file, which was updated with information from the most recent annotation. MapMan results for "Metabolism overview" are presented in Figure 6 for TA(-)hab cells and in Additional file 1 Fig. S4 for IXBhab cells. Results for "Regulation overview" and "Cellular response" are presented in Additional file 1 Fig. S5 and S6. Differential gene expression was observed in cell wall synthesis and modification pathways as well as in secondary metabolism, with more genes downregulated in TA(-)hab cells than in IXBhab cells. A notable difference was in the photosynthesis process, where several genes were upregulated in IXBhab cells with little change in gene expression in TA(-)hab cells. We speculate that different growth conditions may explain this difference, as TA(-)hab cells were grown in the dark, and we suspect that IXBhab cells were grown in the light, although this was not stated. To facilitate comparison, we also used MapMan to generate a list of differentially expressed genes in IXBhab cells classified according to the major BinCode functional categories (Additional file 5 Table S5).
Expression of cell wall-related genes
TA(-)hab cells have a modified cell wall, with less cellulose and more pectins. To help determine how TA(-)hab cells adjusted their cell wall composition, we looked more closely at the expression of genes involved in cell wall synthesis, modification or degradation, corresponding to BinCode category 10 (Additional file 2 Table S2). Most predicted genes belonging to this category were downregulated. Cellulose is synthesized by large membrane complexes constituted of CESAs [ 52 ]. No CESA gene showed a change in expression of more than 2.5 FC in TA(-)hab cells. Only one predicted CESA -like gene (predicted ortholog of CSLG3 ) was downregulated. Hence, the reduced cellulose content was not associated with differential expression of cellulose synthase genes, as was reported for IXBhab cells [ 32 ]. However, since there is increasing evidence that CESA complexes are associated with other proteins that aid microfibril formation and that link the complexes to nearby microtubules for guidance along the membrane [ 15 ], it is possible that expression of genes encoding some of these unidentified proteins could be altered in TA(-)hab cells. Other downregulated genes included genes encoding proteins involved in cell wall degradation (glycosyl hydrolases, xyloglucan endotransglucosylases/hydrolases (XTH), polygalacturonases), cell wall modification (polygalacturonases, pectin(acetyl)esterases, XTHs) and cell wall proteins (fasciclin-like arabinogalactan proteins and extensins). Only a few genes were upregulated, such as genes predicted to encode beta-xylosidases, a beta-mannan endohydrolase, a polygalacturonase, a pectinesterase, two expansins and a lyase.
Expression data in TA(-)hab cells were compared to those of Arabidopsis IXBhab cells [ 32 ] using matching AGIs (Additional file 2 Table S2 and Additional file 5 Table S5). Several genes encoding predicted orthologs had a similar pattern of expression in both cell types, except for two XTHs ( XTH9 and XTR7 ), one pectinacetylesterase and one polygalacturonase-inhibiting protein gene ( PGIP1 ) that were upregulated in IXBhab cells. Moreover, a callose synthase gene ( CALS1 ) downregulated in IXBhab cells was upregulated in TA(-)hab cells. However, two other callose synthase genes ( AtGSL09 and AtGSL12 ) were upregulated in IXBhab cells. Several predicted cell wall-related poplar genes differentially expressed in TA(-)hab cells did not have a matching Arabidopsis gene differentially regulated in IXBhab cells. However, these poplar genes had a predicted function that was similar to that of at least one of the genes that were differentially expressed in IXBhab cells. For instance, a proline-rich extensin-like gene downregulated in TA(-)hab cells was also downregulated in IXBhab cells. Therefore, TA(-)hab and IXBhab cells exhibited similar changes in the expression of a large overlapping set of genes involved in cell wall modifications, even though TA(-)hab cells were no longer cultured in the presence of TA. Moreover, this analysis shows that despite species differences, it is possible to correlate expression data in TA(-)hab poplar cells with those of IXBhab Arabidopsis cells, at least at the level of cell wall-related genes. It would certainly be of interest to determine whether similar transcriptional changes also occurred in DCB-habituated cells. This could eventually help pinpoint a potential conserved mechanism of adaptation to inhibition of cellulose synthesis. On the other hand, we suspect that most of these changes would be lost during the DCB-dehabituation process, since the cell wall composition was then restored close to initial levels [ 31 , 38 ].
Nonetheless, some modifications were retained in DCB-dehabituated cells, such as a reduced level of arabinogalactan proteins and the accumulation of modified pectins [ 31 , 38 ]. We found that some genes predicted to encode arabinogalactan proteins and pectin-modifying enzymes were downregulated by more than 2.5 FC in TA(-)hab cells, suggesting that fewer arabinogalactan proteins and pectin modifications were present in the TA(-)hab cell walls. The implication of these modifications for the establishment of durable resistance to inhibitors of cellulose synthesis remains to be shown.
Genes involved in the phenylpropanoid pathway
The phenylpropanoid pathway leads to the synthesis of a wide range of natural products in plants, including lignans, lignin, flavonoids and anthocyanins, several of which are induced by stress [ 53 ]. In poplar, genes involved in the synthesis of phenylpropanoids are part of expanded families that contain genes with conserved functions as well as new members whose biochemical functions may be distinct [ 54 - 56 ]. Several genes predicted to belong to these large gene families were downregulated in TA(-)hab cells. These include genes predicted to encode one cinnamyl-alcohol dehydrogenase ( CAD14 ), one caffeic acid/5-hydroxyferulic acid O-methyltransferase ( COMT6 ), two trans-caffeoyl-CoA 3-O-methyltransferases ( CCoAOMT1 and 2 ), and three different hydroxycinnamoyl-Coenzyme A shikimate/quinate hydroxycinnamoyltransferases ( HCT2 , HCT5 and HCT7 ). The poplar CCoAOMT1 and 2 have been shown to be specifically involved in lignin synthesis, as reduced CCoAOMT activity in poplar led to reduced lignin synthesis [ 56 ]. Lignin is deposited in secondary cell walls to provide rigidity and impermeability to the cells. It is possible that reduced expression of these genes in TA(-)hab cells also reduces the production of lignin. However, HCT2 , 5 and 7 , as well as COMT6 and CAD14 , are barely expressed in lignifying tissues, suggesting that they may be involved in other processes [ 55 , 56 ]. While ectopic lignification was observed in mutants with reduced cellulose synthesis [ 57 ] and in Arabidopsis seedlings treated with TA or IXB [ 10 ], IXBhab cells did not show any ectopic lignification [ 32 ]. Supporting these results, several genes specifically involved in lignin synthesis (BinCode 16, Additional file 5 Table S5) were also downregulated in Arabidopsis IXBhab cells, such as genes encoding a CCoAOMT, a caffeic acid/5-hydroxyferulic acid O -methyltransferase ( AtOMT1 ), a cinnamoyl-CoA reductase ( CCR2 ) and a cinnamyl-alcohol dehydrogenase 4 ( CAD4 ).
Flavonoids function as sunscreens and as defense compounds and have been shown to accumulate in response to various stresses [ 58 , 59 ]. Some genes involved in the synthesis of flavonoids were also downregulated in TA(-)hab cells. These genes were predicted to encode a chalcone synthase ( CHS6 ), which catalyzes the committed step of flavonoid synthesis, a flavonol synthase ( FLS ), which participates in the synthesis of flavonols, and an anthocyanidin reductase ( ANR/BAN1 ), which is involved in the formation of proanthocyanidins [ 55 , 59 ]. However, the specific function of each isoform remains to be shown.
In poplar, several genes of the lignin and flavonoid synthesis pathways were dramatically upregulated during infection by Melampsora medusae leaf rust [ 60 , 61 ]. In contrast, gray poplar roots exposed to hypoxic stress displayed reduced expression of lignin and flavonoid synthesis-related genes [ 62 ]. It was proposed that repression of the phenylpropanoid pathway in these conditions would be a way of inhibiting energy-demanding mechanisms in favor of glycolysis to maintain carbon and energy metabolism in periods of O2 deficiency [ 62 ]. Similarly, downregulation of the lignin and flavonoid synthesis pathways in TA(-)hab cells may help repress high energy-consuming pathways to redirect carbohydrates to other processes that may be required for cell survival in response to reduced cellulose synthesis. However, while the metabolic outcome of repressing these pathways is unknown, we suspect that a significant fraction of the phenylpropanoids produced will not be incorporated in lignin and flavonoids and could either accumulate or be directed to other pathways. Accumulation of phenolics in vacuoles has been frequently reported [ 63 ]. It is possible that the electron-dense material that was observed in vacuoles of TA(-)hab cells (Figure 1 ) was phenylpropanoids that accumulated due to repressed lignin and flavonoid synthesis, but this hypothesis remains to be tested. Whether these changes were related to enhanced resistance to TA is unknown at this time. While some of the genes involved in lignin synthesis were also downregulated in IXBhab cells, we observed very limited changes in the expression of flavonoid synthesis-related genes, suggesting that modulation of this pathway may either be a specific response to TA or related to species differences in response to inhibition of cellulose synthesis.
Expression of cell death-related genes
We have shown previously that TA and IXB activate a program of cell death in Arabidopsis cell suspensions [ 8 ] and in poplar (this work). Since TA(-)hab cells were able to survive in high concentrations of TA, it is possible that genes encoding proteins involved in regulating the onset of cell death were differentially regulated in TA(-)hab cells. We had found in previous work that more than half of the genes that were upregulated in common after a short exposure of Arabidopsis cells to TA or IXB were downregulated in IXBhab cells, suggesting that some stress-related mechanisms were turned down in those cells [ 12 ]. Interestingly, several genes predicted to control the process of cell death were differentially regulated in TA(-)hab cells. For example, a gene predicted to be the ortholog of STP13 , which encodes a hexose transporter whose expression is correlated with PCD [ 64 ], was downregulated in TA(-)hab cells (FC -3.9). Another gene, predicted to encode an ortholog of the Arabidopsis DMR6 , was drastically downregulated in TA(-)hab cells (FC -37.5) and in IXBhab cells (FC -14.6). This gene has been shown to play a role in the onset of PCD during plant-pathogen interactions. Hence, absence of DMR6 in the Arabidopsis mutant dmr6 led to resistance to Hyaloperonospora parasitica that was associated with the absence of PCD and reactive oxygen intermediates, with no induction of the expression of the defense-associated gene PR-1 [ 65 ]. Several other defense-related genes were downregulated in TA(-)hab cells, including numerous disease resistance proteins that may play a role in the regulation of hypersensitive cell death [ 66 ].
Another set of genes predicted to function in protecting against cell death was upregulated in TA(-)hab cells. These include a gene putatively encoding a spermine synthase orthologous to the Arabidopsis ACAULIS5 ( ACL5 ) gene that was upregulated 6.5 times in TA(-)hab cells (7.1 in IXBhab cells). Mutant analysis has shown that ACL5 is involved in xylem specification, and its expression is thought to prevent premature death of the developing vessel element [ 67 ]. This is corroborated by the fact that exogenous application of spermine can prolong xylem element differentiation while stimulating cell expansion and cell wall elaboration. Another was the predicted poplar ortholog of AtBAG6 (upregulated 2.8 times), a member of the BAG family of proteins also believed to be involved in cell survival [ 68 ]. It is possible that differential regulation of genes controlling the PCD that is induced in response to TA could significantly contribute to cell survival in TA(-)hab cells.
Expression of genes involved in cell cycle
Several genes predicted to be involved in the control of cell division and cell cycle (Bincode 31.2 and 31.3) were upregulated in poplar TA(-)hab cells as well as in Arabidopsis IXBhab cells (Additional file 1 Fig. S5 and S6; Additional file 2 Table S2 and Additional file 5 Table S5). These include genes predicted to encode the cyclin-dependent kinase CDKB1;2, which accumulates in a cell cycle-dependent manner to reach a maximum level at the G2/M transition where its activity is required [ 69 ]; the cyclin-dependent kinase regulators CYCB2;4 and CYCB1;4, whose expression also peaks at the G2/M transition and during the M phase; and the cell division cycle-like protein CDC45, which accumulates at the G1/S transition [ 70 ]. Other members were also upregulated in IXBhab cells, e.g. CYCB2;2 , CYCD3;1 , CYCB1;4 and CYCB2;1 . Cellulose synthesis fluctuates during the cell cycle, as it is required for cell elongation, differentiation and cell plate formation. It was shown that cellulose is deposited in cell plates at the late M phase after callose deposition [ 71 ]. Results obtained in the dinoflagellate Crypthecodinium cohnii have suggested that cell cycle progression is coupled with cellulose synthesis at the G1 phase [ 72 ]. Hence, inhibition of cellulose synthesis would halt cell growth by introducing a G1 cell cycle delay that could lead to a cell cycle arrest in late M phase [ 72 ]. Upregulation of cell cycle-related genes in TA(-)hab and IXBhab cells may be a consequence of the reduced cellulose content, which in turn could signal changes in the progression of the cell cycle.
Expression of genes involved in DNA and chromatin modifications
Another important feature of TA(-)hab cells was their capacity to remain resistant to TA over several generations. Therefore, most of the changes in gene expression that were induced during the habituation process and that are important for resistance to TA must be conserved after cell division. Mitotically transmitted changes in gene expression can be caused by direct and irreversible alterations in the original DNA sequence (mutations) or may be mediated by epigenetic processes, such as reversible DNA methylation, histone modifications and chromatin remodeling [ 73 ]. It is known that both mutations and epigenetic modifications are more frequently induced during plant tissue culture than in whole plants [ 74 ]. Work by Pischke et al. (2006) [ 33 ] has shown that hormone habituation of Arabidopsis cells was associated with transcriptional activation of epigenetic-related genes involved in DNA methylation, histone methylation and deacetylation, as well as chromatin remodeling factors. DNA and chromatin modifications occurring during hormone habituation may be critical for the acquisition of cytokinin habituation. In TA(-)hab cells, several poplar genes predicted to encode histones were upregulated, including histone H1 ( HON901 ), H2 ( HTA902/HTA912 ), H3 ( HTR910/HTR914 ) and H4 ( HFO905/HFO907 ), except for the gene predicted to encode the histone variant H1.2, which was downregulated. Histone proteins are important for nucleosome and chromatin formation. In particular, histone variants may be important for specialized functions, as their incorporation at certain regions of the chromosomes may confer specific structural or functional features to chromatin [ 75 ]. Arabidopsis orthologs of several of these genes were also found to be significantly upregulated (> 2.0 FC) in hormonally habituated cells ( H2A , H2B , H3 , H3.2 and H4 ) and in IXBhab cells ( H2A , H2A.Z , H2B , H3 ), but downregulation was also observed for histone H1.
Several genes participating in DNA and chromatin modifications were also found to be upregulated in TA(-)hab cells, including genes predicted to encode: a cytosine-specific DNA methyltransferase ( DMT909 ), which is involved in DNA methylation (FC 2.7); high-mobility-group HMGA (FC 3.2) and HMGB (FC 2.6) proteins, members of a group of chromatin-associated proteins thought to act as architectural factors in nucleoprotein structures and to regulate DNA-dependent processes including transcription [ 76 ]; a chromatin remodeling complex subunit ( CHR942 ) that is a member of the SNF2 domain-containing protein family (FC 2.8), which includes proteins that are proposed to play a role in gene silencing and that would interact with histone variants to alter chromatin structure [ 75 ]; and a trithorax-related protein/SET-domain containing protein ( SDG933 ; FC 2.5) whose predicted Arabidopsis ortholog ATXR5 was shown to encode an H3K27 monomethyltransferase that is required for gene silencing through histone methylation [ 77 ].
Changes in DNA methylation patterns and chromatin modification events have also been correlated with activation of transposons [rev. in [ 78 , 79 ]], as was observed in hormone-habituated cells [ 33 ]. However, we did not detect differential expression of transposon-related sequences in TA(-)hab cells. It may be that differential expression of transposon-related sequences took place in TA(-)hab cells but at a level that was below 2.5 FC. Alternatively, activation of transposons could have occurred at an earlier stage of the habituation process (e.g., in the presence of TA) and been silenced later on due to epigenetic modifications [ 79 ].
However, activation of transposons was clearly induced in IXBhab cells that were still cultured in the presence of IXB (Additional file 5 Table S5). At least 10 transposon-related sequences, including copia-like retrotransposons, gypsy-like retrotransposons and a CACTA-like transposase family, were differentially regulated in IXBhab cells. This was also associated with upregulation of the expression of several genes coding for DNA and chromatin modification enzymes, such as: DDM1 , a member of the broad SWI2/SNF2 protein family promoting chromatin remodeling (FC 3.8); the cytosine methyltransferase MET1 (FC 3.7); histone deacetylases 2A (FC 3.9), 2B (FC 3.3), 2C (FC 2.7) and HDT4 (FC 9.5); the histone-lysine N-methyltransferases SUVH6 (FC 2.6) and SUVR2 (FC 3.7); and a high-mobility-group HMG1/2 family protein (FC 3.8) [ 78 ].
Differential expression of epigenetic-related genes suggests that DNA and chromatin modifications occur during the process of habituation to TA or IXB and are possibly involved in maintaining some of the features of TA(-)hab cells. Moreover, the fact that these changes occurred in IXBhab cells that were still grown in the presence of IXB suggests that DNA and chromatin modifications were initiated when cells were still in the presence of the inhibitor of cellulose synthesis. Whether these changes are related to durable resistance to inhibitors of cellulose synthesis or simply associated with the habituation process is still unknown.
Other genes of interest
Several other processes were affected in TA(-)hab cells, as many differentially expressed genes were found to be involved in processes like hormone metabolism, transport, stress responses, regulation of transcription, protein modifications and signal transduction (Additional file 2 Table S2 and Additional file 5 Table S5). Most genes involved in biotic stress-related responses were downregulated, including genes known to be generally upregulated in response to pathogens, such as pathogenesis-related proteins, several members of disease-resistance protein families and chitinases. However, expression of several small heat shock factors was upregulated in TA(-)hab cells. Moreover, numerous genes encoding members of different transcription factor families, including WRKY, C2H2-type zinc finger protein, MYB and NAC domain-containing proteins, were differentially regulated in TA(-)hab cells, with about half being upregulated and half downregulated. A similar pattern of expression was also found in IXBhab cells, although more genes involved in heat shock responses were downregulated. The fact that many transcription regulator genes are differentially expressed in TA(-)hab cells is not surprising, given the many different processes affected in those cells. These data clearly indicate that the process of habituation is associated with very complex changes in gene expression that certainly alter the general metabolism of the habituated cells.
Effects of TA on hybrid poplar cell suspensions
It was shown previously that TA induced an increase in cell volume in tobacco suspension cultures [ 7 ] and in Arabidopsis cells [ 8 ]. Similarly, some of the hybrid poplar suspension-cultured cells treated with 1.0 μM TA for 24 h were hypertrophied when compared to control cells treated with methanol (Figure 1A-B ). However, the increase in cell volume was less pronounced in poplar cells than in Arabidopsis cells. Similar changes were also observed when adding IXB (5.0 μM) or DCB (5.0 μM) (data not shown). As reported for Arabidopsis cell suspensions [ 8 ], TA induced cell death in poplar suspension cultures; 73% of the cells were dead 48 h after adding 1.0 μM TA (Figure 2 ). Cell death in poplar cells was also associated with nuclear DNA fragmentation, a typical hallmark of programmed cell death (PCD), as detected by the TUNEL assay (Additional file 1 Fig. S1).
TA has been shown to inhibit the incorporation of radioactive glucose in the acid-insoluble fraction of the cell wall, which corresponds to crystalline cellulose [ 9 ]. The effects of TA on the level of crystalline cellulose in poplar cells were analyzed by quantifying glucose in the acid-insoluble fraction of the cell walls. As indicated in Table 1 , cells in contact with TA for 24 h contained 12% less crystalline cellulose than control cells. These results indicated that TA rapidly inhibited the synthesis or incorporation of cellulose in the poplar cell walls, demonstrating that TA can also alter cellulose synthesis in a tree species.
Habituation of poplar cell suspensions to TA is associated with changes in cell wall composition
Plant cell habituation to inhibitors of cellulose synthesis such as DCB and IXB has been reported [ 24 , 26 , 28 , 30 , 32 ]. To habituate hybrid poplar cell suspensions to TA, we initially cultured them with a low level of TA (0.1 μM) that was gradually increased up to 1.3 μM over a period of 12 months. These cells became resistant to lethal TA concentrations. During the process of habituation, changes in cell morphology and growth rate were observed. When compared to non-habituated cells, TA-habituated cells were wider, rounder, twisted and formed aggregates (Figure 1C ). Their growth rate was also greatly reduced. In order to have a volume of cell inoculum similar to that of control cells, subculture of TA-habituated cells had to be performed every other week instead of weekly. TA-habituated cells were then subcultured in the absence of TA for at least 18 months before performing additional characterization. This procedure had been termed "dehabituation" in previous work [ 31 ] but TA-dehabituated cells will be further referred to as "TA(-)hab" cells. As observed in other habituated cells, TA(-)hab cells had a modified cell volume and reduced growth rate but they progressively became more elongated and did not form aggregates (Figure 1D ; Additional file 1 Table S1). Electron microscope analysis also revealed the accumulation of electron-dense material in fragmented vacuoles (Figure 1F-H ) and in some cases close to cell walls (Figure 1L ). Cell walls of TA(-)hab cells appeared as thick as those of control cells but were more opaque (Figure 1J-K ).
Habituation was associated with changes in the cell wall composition. The proportion of the various monosaccharides evaluated in this work, including glucose, in relation to the total sugars (Table 2 ) did not differ significantly between the two cell types (Additional file 1 Fig. S2). However, TA(-)hab cell walls contained about 25% less glucose in the crystalline cellulose fraction (acid-insoluble fraction) than non-habituated cell walls. In addition, the overall level of glucose in the cell wall material was significantly reduced in TA(-)hab cells, while the estimated level of glucose remaining in the acid-soluble fraction was increased. This fraction is mainly composed of xyloglucans, non-crystalline β-1,4-glucans and pectins [ 9 ], thus supporting a general reorganization of the cell wall to compensate for the reduction in cellulose. The level of uronic acids was determined in the CDTA-soluble pectin fraction of dry cell walls. The value increased from 17.1 to 31.4 μg mg-1 cell wall in TA(-)hab cells, representing 1.8 times more CDTA-soluble pectins than in the non-habituated cell walls. Microscopic analysis using ruthenium red for staining of pectic polysaccharides also revealed a more intense staining in the cell walls of TA(-)hab cells compared to a very faint staining in control cells, also suggesting the accumulation of more pectins in the cell walls of TA(-)hab cells (Additional file 1 Fig. S3).
Habituation to inhibitors of cellulose synthesis has frequently been associated with changes in the composition and organization of the cell wall characterized by a decrease in cellulose content and an increase in the pectin network [ 24 , 26 , 28 , 30 , 38 , 39 ]. However, the extent to which the cell wall was modified varied widely between habituated cells depending on the species and inhibitor used. In TA(-)hab cells, the decrease in crystalline cellulose was much less substantial than that reported in bean cells habituated to IXB [ 28 ] or tomato cells habituated to DCB [ 24 ], where reductions of close to 72% and 97%, respectively, were observed. This may be because each inhibitor inhibits cellulose synthesis through a different mode of action. It was also proposed that variations in the initial composition of the cell wall in different species could influence cell wall adaptations during the habituation process [ 25 , 26 ].
TA(-)hab cells are more resistant to TA, DCB and IXB
Resistance to TA was tested in TA(-)hab cells. Even after being subcultured in the absence of TA for more than three years, TA(-)hab cells still tolerated high levels of TA (Figure 2 ). Cell death was below 14% in the presence of 2.0 μM TA for 48 h, compared to 78% for non-habituated cells. In the presence of 20 μM TA, the level of cell death reached 38% for TA(-)hab cells while 87% of non-habituated cells were dead. TUNEL assays performed on TA(-)hab cells treated with TA also indicated that DNA fragmentation was increasing in dying cells, suggesting that PCD was still activated in response to TA (Additional file 1 Fig. S1 G-I). These results suggest that a sub-population of TA(-)hab cells remained susceptible to TA. Because the modified composition of the cell walls of TA(-)hab cells was reminiscent of that of DCB- and IXB-habituated cells, TA(-)hab cells were tested for resistance to these inhibitors. A concentration of 5.0 μM was used for IXB, as poplar cells were more tolerant to this inhibitor than other species, with less than 40% cell death after a 48 h treatment with 5.0 μM IXB, compared to about 45% cell death after a 48 h treatment with 100 nM IXB in Arabidopsis thaliana [ 12 ]. Induction of cell death after treatment with DCB or IXB was always less pronounced in TA(-)hab cells when compared to non-habituated cells in all four assays over a three-year period. As shown in Figure 3 , more than 72% of hybrid poplar cells were killed by DCB after 48 h compared to 37% of TA(-)hab cells. IXB treatment induced 32% cell death in hybrid poplar cells compared to 19% in TA(-)hab cells. Hence, habituation to TA not only provided specific resistance to the TA toxin itself but also enhanced cell survival in response to two other molecules known to inhibit cellulose synthesis.
Therefore, it is unlikely that resistance to TA is simply due to a detoxification mechanism that would transform TA to less toxic metabolites, as was reported for the fungus Aspergillus niger [ 40 ]. Such a specific mechanism could not operate on structurally different molecules such as DCB and IXB. It is also unlikely that enhanced resistance in TA(-)hab cells is due to a modification of the inhibitors' target, since each inhibitor is thought to perturb cellulose synthesis by targeting specific molecules, with IXB possibly targeting CESA subunits 3 and 6 [ 13 , 14 ], and DCB proposed to target either a small protein of 12-18 kD [ 41 ] or the microtubule-associated protein MAP20 [ 42 ]. In any case, habituation to TA most probably activated a mechanism that enhanced resistance to inhibition of cellulose synthesis per se rather than enhancing resistance to the inhibitory molecules themselves.
Since TA-, DCB- and IXB-habituated cells all presented a modified cell wall composition where pectins accumulated to compensate for reduced cellulose levels, it is tempting to speculate that enhanced resistance to inhibition of cellulose synthesis was due to cell wall adaptations that occurred during habituation. As found for TA(-)hab cells, it was reported that DCB-habituated bean cells cultured in the absence of DCB for several months (DCB-dehabituated cells) were still resistant to lethal levels of DCB [ 38 , 43 ]. The fact that dehabituated cells retained a high level of resistance even when cultured in the absence of the inhibitor supports previous reports suggesting that a durable mechanism is activated during the habituation process [ 26 , 30 , 38 ]. However, while DCB-dehabituated cells were still resistant to DCB, the composition of their cell walls was progressively restored close to control levels after more than 6 months of culture in the absence of DCB, although they retained a higher proportion of pectins with a lower degree of methyl-esterification than habituated cells [ 31 , 38 , 44 ]. This contrasts with TA(-)hab cells, which had a reduced cellulose content even after being cultured for more than 18 months in the absence of TA. This suggests that the major changes in cell wall composition, such as reduced cellulose and increased pectins, were not required for resistance to DCB. Garcia-Angulo et al. (2009) [ 43 ] have proposed that the cellulose synthesis machinery in DCB-dehabituated cells would be less effective but more resistant to DCB. Mutations affecting the cellulose biosynthesis machinery could be responsible for the enhanced and durable resistance to DCB in those cells [ 43 ]. It is possible that mutations in components of the cellulose synthesis machinery could lead to defective cellulose synthesis in TA(-)hab cells. However, it is less likely that such mutations would lead to an increased tolerance to different inhibitors of cellulose synthesis.
Further investigations will be required to determine whether reduced cellulose synthesis in TA(-)hab cells is caused by mutations affecting the cellulose synthesis machinery or due to the activation of a mechanism of adaptation to inhibition of cellulose synthesis.
Habituation to TA is associated with important transcriptional changes
To study the genetic mechanisms that may be involved in TA resistance and in maintaining this resistance in TA(-)hab cells, we have performed a global transcriptional analysis in TA(-)hab cells. While transcriptional changes do not directly represent the overall physiological or metabolic state of plant cells, modifications in gene expression provide good indications of how plant cells respond to changing environments and how these responses are sustained at the gene expression level. Microarray analysis was carried out using the Affymetrix GeneChip Poplar Genome Array. Data were normalized and analyzed by Robust Multi-Array Average (RMA) [ 45 ] using the FlexArray software [ 46 ]. Probesets with a more than 2.5-fold change (FC) in expression in TA(-)hab cells when compared to non-habituated cells and a P value ≤ 0.05 following significance analysis of microarrays (SAM) were selected as being up- or downregulated (Additional file 2 Table S2 and Additional file 3 Table S3). Overall, 404 probesets corresponding to 346 predicted genes were upregulated in TA(-)hab cells and 880 probesets associated with 764 predicted genes were downregulated. Validation of microarray results was performed using qPCR for five upregulated and five downregulated genes in TA(-)hab cells. As shown in Figure 4 and Additional file 4 Table S4, qPCR results were strongly correlated with the microarray data. Regression analysis of the log2-transformed FC values yielded the line y = 1.022x - 0.0027 with R2 = 0.9542 ( P < 0.0001), demonstrating the high precision of the GeneChip Poplar Genome Array data.
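The qPCR validation step above reduces to an ordinary least-squares fit of log2-transformed fold changes from the two platforms. A minimal sketch in Python; the fold-change values below are illustrative placeholders, not the actual data from Additional file 4 Table S4:

```python
import numpy as np

# Illustrative log2 fold changes for ten genes (placeholders, not the
# published values): x = microarray estimates, y = qPCR estimates.
x = np.array([2.1, 3.4, -1.8, -4.2, 1.3, -2.7, 4.8, -1.1, 2.9, -3.5])
y = np.array([2.3, 3.1, -1.6, -4.5, 1.2, -2.9, 4.6, -1.3, 3.1, -3.2])

# Least-squares fit y = slope * x + intercept.
slope, intercept = np.polyfit(x, y, 1)

# Coefficient of determination R^2 of the fit.
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.3f} intercept={intercept:.3f} R2={r_squared:.3f}")
```

A slope near 1 and R2 near 1, as reported for the actual data (slope 1.022, R2 = 0.9542), indicate close agreement between the two measurement methods.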
Candidate gene annotations were performed using PLEXdb [ 47 ], the PopArray database [ 48 ] and the NetAffx tool from the Affymetrix website http://www.affymetrix.com as described in Methods. Gene products and functions were mainly predicted based on sequence homology. The names of predicted poplar genes were indicated when available. Otherwise, the putative function of the closest Arabidopsis homologous gene was indicated to facilitate comparison (Additional file 2 Table S2). Because the actual function of most poplar genes remains to be shown, some of the predicted functions may be incorrect, as similar sequences may have different functions in different species. Gene ontology analysis was performed using the AgriGO analysis toolkit and database (Figure 5 ) [ 49 ]. Predicted genes that had no GO annotations (258 downregulated genes, 128 upregulated genes) were classified in the "unknown biological process" category. Among downregulated genes (Figure 5 ), the most frequent annotations were related to metabolic process (24.1%, including 3.3% in secondary metabolic process), cellular process (22.0%), response to stimulus (11.1%, including 6.4% in the stress category), localization and transport (7.3%) and biological regulation (7.2%). These same categories were also highly represented among upregulated genes, with 24.0% of annotations in metabolic process, 26.6% in cellular process (including 5.8% for transcription), 8.1% in response to stimulus (including 5.5% for response to stress) and 6.7% in localization and transport. Moreover, upregulated genes included 5 GO annotations (1.4%) for chromatin assembly or disassembly, in a reference group that contains only 79 genes.
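The category breakdown above amounts to tallying GO annotations per functional class and expressing each count as a fraction of the total. A minimal sketch with hypothetical counts (the labels and numbers are illustrative, not the actual AgriGO output):

```python
from collections import Counter

# Hypothetical GO annotation labels for a set of genes; genes with no
# annotation fall into the "unknown biological process" category.
annotations = (
    ["metabolic process"] * 24
    + ["cellular process"] * 22
    + ["response to stimulus"] * 11
    + ["localization and transport"] * 7
    + ["biological regulation"] * 7
    + ["unknown biological process"] * 29
)

counts = Counter(annotations)
total = sum(counts.values())

# Percentage of annotations falling into each category.
percentages = {cat: 100 * n / total for cat, n in counts.items()}
for cat, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {pct:.1f}%")
```

The same tally applied to the real annotation lists yields the percentages reported in Figure 5.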
Comparison with other habituation experiments
Manfield et al. (2004) characterized global gene expression using the Affymetrix ATH1 GeneChip in Arabidopsis cells that were habituated to IXB [ 32 ]. These cells contained less glucose and more pectins in their cell walls. IXB-habituated (hereafter "IXBhab") cells were still grown in the presence of IXB, in contrast to TA(-)hab cells, which were subcultured in the absence of TA. As mentioned earlier, there is experimental evidence suggesting that the mode of action of TA resembles that of IXB, although each molecule individually activates a few distinctive responses [ 10 , 12 ]. Hence, the identification of conserved patterns of gene expression in both experiments could help identify the mechanisms that are involved in providing resistance to inhibitors of cellulose synthesis. However, it is essential to keep in mind the important differences in species, growth conditions, method of habituation and type of microarray analyses when examining these results. In order to compare gene expression data in IXBhab cells with those of TA(-)hab cells, raw microarray data (CEL file) from IXBhab cells available at GEO (GSE6181) or NASC (NASCARRAYS-27) were analyzed using RMA and SAM with the FlexArray software. Genes that displayed a more than 2-fold change (FC) in expression and a P value ≤ 0.05 following SAM were selected for comparison (Additional file 5 Table S5). With this method, more genes were considered to be significantly up- or downregulated in IXBhab cells than previously reported, but the expression of genes already reported to be upregulated or downregulated followed the same trend [ 32 ]. Gene expression in TA(-)hab cells was first compared with data from IXBhab cells using the closest AGI predicted for each poplar probeset (Additional file 2 Table S2).
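The selection and cross-species matching described above can be sketched as two steps: keep entries passing the FC and P-value cutoffs, then intersect the two gene sets via the closest predicted AGI. The identifiers, fold changes and mapping below are hypothetical placeholders (two FC pairs echo the DMR6 and ACL5 examples discussed elsewhere in the text):

```python
def significant(records, fc_cutoff, p_cutoff=0.05):
    """Keep records with |fold change| >= fc_cutoff and P <= p_cutoff."""
    return {rid: fc for rid, fc, p in records
            if abs(fc) >= fc_cutoff and p <= p_cutoff}

# Hypothetical (identifier, fold_change, p_value) records for each species.
poplar = [("PtpAffx.1", 6.5, 0.01), ("PtpAffx.2", -37.5, 0.001),
          ("PtpAffx.3", 1.8, 0.02), ("PtpAffx.4", -3.9, 0.20)]
arabidopsis = [("AT5G44070", -14.6, 0.01), ("AT1G01060", 7.1, 0.03),
               ("AT2G02990", -1.5, 0.04)]

# Hypothetical probeset -> closest-AGI mapping used for cross-species matching.
probeset_to_agi = {"PtpAffx.1": "AT1G01060", "PtpAffx.2": "AT5G44070",
                   "PtpAffx.3": "AT3G12345", "PtpAffx.4": "AT4G54321"}

# TA(-)hab poplar probesets were filtered at 2.5 FC, IXBhab Arabidopsis at 2 FC.
poplar_sig = significant(poplar, 2.5)
arab_sig = significant(arabidopsis, 2.0)

# Genes differentially expressed in both cell types, keyed by AGI,
# with the (poplar FC, Arabidopsis FC) pair for each.
shared = {probeset_to_agi[p]: (fc, arab_sig[probeset_to_agi[p]])
          for p, fc in poplar_sig.items()
          if probeset_to_agi.get(p) in arab_sig}
print(shared)
```

With these placeholder records, PtpAffx.3 fails the FC cutoff and PtpAffx.4 fails the P-value cutoff, leaving two AGI-matched genes in `shared`.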
However, since matching AGIs are predicted on the basis of sequence homology, it is possible that similar sequences may encode proteins with different functions and, conversely, that divergent sequences encode proteins with similar functions. To overcome some of the difficulties in comparing gene expression between different species, we have chosen to use the MapMan software [ 50 , 51 ] to evaluate globally how different cellular processes and metabolic pathways are affected in TA(-)hab cells when compared to IXBhab cells. We assembled a MapMan mapping file based on expression data from TA(-)hab cells using the poplar Ptrich_AFFY_09 mapping file, updated with information from the most recent annotation. MapMan results for "Metabolism overview" are presented in Figure 6 for TA(-)hab cells and in Additional file 1 Fig. S4 for IXBhab cells. Results for "Regulation overview" and "Cellular response" are presented in Additional file 1 Fig. S5 and S6. Differential gene expression was observed in cell wall synthesis and modification pathways as well as in secondary metabolism, with more genes downregulated in TA(-)hab cells than in IXBhab cells. A notable difference was in the photosynthesis process, where several genes were upregulated in IXBhab cells but little change in gene expression was observed in TA(-)hab cells. We speculate that different growth conditions may explain this difference, as TA(-)hab cells were grown in the dark, and we suspect that IXBhab cells were grown in light, although this has not been stated. To facilitate comparison, we have also used MapMan to generate a list of differentially expressed genes in IXBhab cells classified according to the major BinCode functional categories (Additional file 5 Table S5).
Expression of cell wall-related genes
TA(-)hab cells have a modified cell wall, with less cellulose and more pectins. To help determine how TA(-)hab cells adjusted their cell wall composition, we have looked more closely at the expression of genes involved in cell wall synthesis, modification or degradation corresponding to the BinCode category 10 (Additional file 2 Table S2). Most predicted genes belonging to this category were downregulated. Cellulose is synthesized by large membrane complexes composed of CESAs [ 52 ]. No CESA gene showed a significant change in expression of more than 2.5 FC in TA(-)hab cells. Only one predicted CESA -like gene (predicted ortholog of CSLG3 ) was downregulated. Hence, the reduced cellulose content was not associated with differential expression of cellulose synthase genes, as was reported for IXBhab cells [ 32 ]. However, since there is increasing evidence that CESA complexes are associated with other proteins that aid microfibril formation and that link the complexes to nearby microtubules for guidance along the membrane [ 15 ], it is possible that expression of genes encoding some of these unidentified proteins could be altered in TA(-)hab cells. Other downregulated genes included genes encoding proteins involved in cell wall degradation (glycosyl hydrolases, xyloglucan endotransglucosylases/hydrolases (XTH), polygalacturonases), cell wall modification (polygalacturonases, pectin(acetyl)esterases, XTHs) and cell wall proteins (fasciclin-like arabinogalactan-proteins and extensins). Only a few genes were upregulated, such as genes predicted to encode beta-xylosidases, a beta-mannan endohydrolase, a polygalacturonase, a pectinesterase, two expansins and a lyase.
Expression data in TA(-)hab cells were compared to those of Arabidopsis IXBhab cells [ 32 ] using matching AGIs (Additional file 2 Table S2 and Additional file 5 Table S5). Several genes encoding predicted orthologs had a similar pattern of expression in both cell types, except for two XTHs ( XTH9 and XTR7 ), one pectinacetylesterase and one polygalacturonase inhibiting protein gene ( PGIP1 ) that were upregulated in IXBhab cells. Moreover, a callose synthase gene ( CALS1 ) downregulated in IXBhab cells was upregulated in TA(-)hab cells. However, two other callose synthase genes ( AtGSL09 and AtGSL12 ) were upregulated in IXBhab cells. Several predicted cell wall-related poplar genes differentially expressed in TA(-)hab cells did not have a matching Arabidopsis gene differentially regulated in IXBhab cells. However, these poplar genes had a predicted function similar to that of at least one of the genes differentially expressed in IXBhab cells. For instance, a proline-rich extensin-like gene downregulated in TA(-)hab cells was also downregulated in IXBhab cells. Therefore, TA(-)hab and IXBhab cells exhibited similar changes in the expression of a large overlapping set of genes involved in cell wall modifications, even though TA(-)hab cells were no longer cultured in the presence of TA. Moreover, this analysis shows that despite species differences, it is possible to correlate expression data in TA(-)hab poplar cells with those of IXBhab Arabidopsis cells, at least at the level of cell wall-related genes. It would certainly be of interest to determine whether similar transcriptional changes also occurred in DCB-habituated cells. This could eventually help pinpoint a potential conserved mechanism of adaptation to inhibition of cellulose synthesis. On the other hand, we suspect that most of these changes would be lost during the DCB-dehabituation process, since the cell wall composition was then restored close to initial levels [ 31 , 38 ].
Nonetheless, some modifications were retained in DCB-dehabituated cells, such as a reduced level of arabinogalactan proteins and the accumulation of modified pectins [ 31 , 38 ]. We found that some genes predicted to encode arabinogalactan proteins and pectin-modifying enzymes were downregulated by more than 2.5 FC in TA(-)hab cells, suggesting that fewer arabinogalactan proteins and pectin modifications were present in the TA(-)hab cell walls. The implication of these modifications for the establishment of durable resistance to inhibitors of cellulose synthesis remains to be determined.
Genes involved in the phenylpropanoid pathway
The phenylpropanoid pathway leads to the synthesis of a wide range of natural products in plants, including lignans, lignin, flavonoids and anthocyanins, several of which are induced by stress [ 53 ]. In poplar, genes involved in the synthesis of phenylpropanoids are part of expanded families that contain genes with conserved functions as well as new members whose biochemical function may be distinct [ 54 - 56 ]. Several genes predicted to belong to these large gene families were downregulated in TA(-)hab cells. These include genes predicted to encode one cinnamyl-alcohol dehydrogenase ( CAD14 ), one caffeic acid/5-hydroxyferulic acid O-methyltransferase ( COMT6 ), two trans-caffeoyl-CoA 3-O-methyltransferases ( CCoAOMT1 and 2 ), and three different hydroxycinnamoyl-Coenzyme A shikimate/quinate hydroxycinnamoyltransferases ( HCT2 , HCT5 and HCT7 ). The poplar CCoAOMT1 and 2 have been shown to be specifically involved in lignin synthesis, as reduced CCoAOMT activity in poplar led to reduced lignin synthesis [ 56 ]. Lignin is deposited in secondary cell walls to provide rigidity and impermeability to the cells. It is possible that reduced expression of these genes in TA(-)hab cells also reduces lignin production. However, HCT2 , 5 and 7 , as well as COMT6 and CAD14 , are barely expressed in lignifying tissues, suggesting that they may be involved in other processes [ 55 , 56 ]. While ectopic lignification was observed in mutants with reduced cellulose synthesis [ 57 ] and in Arabidopsis seedlings treated with TA or IXB [ 10 ], IXBhab cells did not show any ectopic lignification [ 32 ]. Supporting these results, several genes specifically involved in lignin synthesis (BinCode 16, Additional file 5 Table S5) were also downregulated in Arabidopsis IXBhab cells, such as genes encoding a CCoAOMT, a caffeic acid/5-hydroxyferulic acid O -methyltransferase ( AtOMT1 ), a cinnamoyl CoA reductase ( CCR2 ) and a cinnamyl-alcohol dehydrogenase 4 ( CAD4 ).
Flavonoids function as sunscreens and as defense compounds and have been shown to accumulate in response to various stresses [ 58 , 59 ]. Some genes involved in the synthesis of flavonoids were also downregulated in TA(-)hab cells. These genes were predicted to encode a chalcone synthase ( CHS6 ), which catalyzes the committed step of flavonoid synthesis, a flavonol synthase ( FLS ), which participates in the synthesis of flavonols, and an anthocyanidin reductase ( ANR/BAN1 ), which is involved in the formation of proanthocyanidins [ 55 , 59 ]. However, the specific function of each isoform remains to be shown.
In poplar, several genes of the lignin and flavonoid synthesis pathways were dramatically upregulated during infection by Melampsora medusae leaf rust [ 60 , 61 ]. In contrast, gray poplar roots exposed to hypoxic stress displayed reduced expression of lignin and flavonoid synthesis-related genes [ 62 ]. It was proposed that repression of the phenylpropanoid pathway under these conditions would be a way of inhibiting energy-demanding mechanisms in favor of glycolysis to maintain carbon and energy metabolism in periods of O 2 deficiency [ 62 ]. Similarly, downregulation of the lignin and flavonoid synthesis pathways in TA(-)hab cells may help repress high energy-consuming pathways to redirect carbohydrates to other processes that may be required for cell survival in response to reduced cellulose synthesis. However, while the metabolic outcome of repressing these pathways is unknown, we suspect that a significant fraction of the phenylpropanoids produced will not be incorporated into lignin and flavonoids and could either accumulate or be directed to other pathways. Accumulation of phenolics in vacuoles has been frequently reported [ 63 ]. It is possible that the electron-dense material observed in vacuoles of TA(-)hab cells (Figure 1 ) consisted of phenylpropanoids that accumulated due to repressed lignin and flavonoid synthesis, but this hypothesis remains to be tested. Whether these changes were related to enhanced resistance to TA is unknown at this time. While some of the genes involved in lignin synthesis were also downregulated in IXBhab cells, we observed very limited changes in the expression of flavonoid synthesis-related genes, suggesting that modulation of this pathway may either be a specific response to TA or related to species differences in response to inhibition of cellulose synthesis.
Expression of cell death-related genes
We have shown previously that TA and IXB activate a program of cell death in Arabidopsis cell suspensions [ 8 ] and in poplar (this work). Since TA(-)hab cells were able to survive at high concentrations of TA, it is possible that genes encoding proteins involved in regulating the onset of cell death were differentially regulated in TA(-)hab cells. We had found in previous work that more than half of the genes that were upregulated in common after a short exposure of Arabidopsis cells to TA or IXB were downregulated in IXBhab cells, suggesting that some stress-related mechanisms were turned down in those cells [ 12 ]. Interestingly, several genes predicted to control the process of cell death were differentially regulated in TA(-)hab cells. For example, a gene predicted to be the ortholog of STP13 , which encodes a hexose transporter whose expression is correlated with PCD [ 64 ], was downregulated in TA(-)hab cells (FC -3.9). Another gene, predicted to encode an ortholog of the Arabidopsis DMR6 , was drastically downregulated in TA(-)hab cells (FC -37.5) and in IXBhab cells (FC -14.6). This gene has been shown to play a role in the onset of PCD during plant-pathogen interactions. Hence, loss of DMR6 in the Arabidopsis mutant dmr6 conferred resistance to Hyaloperonospora parasitica that was associated with the absence of PCD and of reactive oxygen intermediates, without induction of the expression of the defense-associated gene PR-1 [ 65 ]. Several other defense-related genes were downregulated in TA(-)hab cells, including numerous disease resistance proteins that may play a role in the regulation of hypersensitive cell death [ 66 ].
Another set of genes predicted to function in protection against cell death was upregulated in TA(-)hab cells. These include a gene putatively encoding a spermine synthase orthologous to the Arabidopsis ACAULIS5 ( ACL5 ) gene that was upregulated 6.5-fold in TA(-)hab cells (7.1-fold in IXBhab cells). Mutant analysis has shown that ACL5 is involved in xylem specification, and expression of ACL5 is thought to prevent premature death of developing vessel elements [ 67 ]. This is corroborated by the fact that exogenous application of spermine can prolong xylem element differentiation while stimulating cell expansion and cell wall elaboration. Another such gene was the predicted poplar ortholog of AtBAG6 (upregulated 2.8-fold), a member of the BAG family of proteins also believed to be involved in cell survival [ 68 ]. It is possible that differential regulation of genes controlling the PCD that is induced in response to TA could significantly contribute to cell survival in TA(-)hab cells.
Expression of genes involved in cell cycle
Several genes predicted to be involved in the control of cell division and cell cycle (BinCode 31.2 and 31.3) were upregulated in poplar TA(-)hab cells as well as in Arabidopsis IXBhab cells (Additional file 1 Fig. S5 and S6; Additional file 2 Table S2 and Additional file 5 Table S5). These include genes predicted to encode the cyclin-dependent kinase CDKB1;2, which accumulates in a cell cycle-dependent manner to reach a maximum level at the G2/M transition, where its activity is required [ 69 ]; the cyclin-dependent kinase regulators CYCB2;4 and CYCB1;4, whose expression also peaks at the G2/M transition and during the M phase; and the cell division cycle-like protein CDC45, which accumulates at the G1/S transition [ 70 ]. Other members were also upregulated in IXBhab cells, e.g. CYCB2;2 , CYCD3;1 , CYCB1;4 and CYCB2;1 . Cellulose synthesis fluctuates during the cell cycle, as it is required for cell elongation, differentiation and cell plate formation. It was shown that cellulose is deposited in cell plates in the late M phase, after callose deposition [ 71 ]. Results obtained in the dinoflagellate Crypthecodinium cohnii have suggested that cell cycle progression is coupled with cellulose synthesis at the G1 phase [ 72 ]. Hence, inhibition of cellulose synthesis would halt cell growth by introducing a G1 cell cycle delay that could lead to a cell cycle arrest in late M phase [ 72 ]. Upregulation of cell cycle-related genes in TA(-)hab and IXBhab cells may be a consequence of the reduced cellulose content, which in turn could signal changes in the progression of the cell cycle.
Expression of genes involved in DNA and chromatin modifications
Another important feature of TA(-)hab cells was their capacity to remain resistant to TA over several generations. Therefore, most of the changes in gene expression that were induced during the habituation process and that are important for resistance to TA must be conserved after cell division. Mitotically transmitted changes in gene expression can be caused by direct and irreversible alterations in the original DNA sequence (mutations) or may be mediated by epigenetic processes, such as reversible DNA methylation, histone modifications and chromatin remodeling [ 73 ]. It is known that both mutations and epigenetic modifications are more frequently induced during plant tissue culture than in whole plants [ 74 ]. Work by Pischke et al. (2006) [ 33 ] has shown that hormone habituation of Arabidopsis cells was associated with transcriptional activation of epigenetic-related genes involved in DNA methylation, histone methylation and deacetylation, as well as chromatin remodeling factors. DNA and chromatin modifications occurring during hormone habituation may thus be critical for the acquisition of cytokinin habituation. In TA(-)hab cells, several poplar genes predicted to encode histones were upregulated, including histone H1 ( HON901 ), H2 ( HTA902/HTA912 ), H3 ( HTR910/HTR914 ) and H4 ( HFO905/HFO907 ), except for the gene predicted to encode the histone variant H1.2, which was downregulated. Histone proteins are important for nucleosome and chromatin formation. In particular, histone variants may be important for specialized functions, as their incorporation at certain regions of the chromosomes may confer specific structural or functional features to chromatin [ 75 ]. Arabidopsis orthologs of several of these genes were also found to be significantly upregulated (> 2.0 FC) in hormonally habituated cells ( H2A , H2B , H3 , H3.2 and H4 ) and in IXBhab cells ( H2A , H2A.Z , H2B , H3 ), but downregulation was also observed for histone H1.
Several genes participating in DNA and chromatin modifications were also found to be upregulated in TA(-)hab cells, including genes predicted to encode: a cytosine-specific methyltransferase ( DMT909 ), which is involved in DNA methylation (FC 2.7); high-mobility-group HMGA (FC 3.2) and HMGB (FC 2.6) proteins, members of a family of chromatin-associated proteins that act as architectural factors in nucleoprotein structures and regulate DNA-dependent processes including transcription [ 76 ]; a chromatin remodeling complex subunit ( CHR942 ) belonging to the SNF2 domain-containing protein family (FC 2.8), which includes proteins proposed to play a role in gene silencing and to interact with histone variants to alter chromatin structure [ 75 ]; and a trithorax-related protein/SET-domain containing protein ( SDG933 ; FC 2.5), whose predicted Arabidopsis ortholog TXR5 was shown to encode an H3K27 monomethyltransferase required for gene silencing through histone methylation [ 77 ].
Changes in DNA methylation patterns and chromatin modification events have also been correlated with activation of transposons [rev. in [ 78 , 79 ]], as it was observed in hormone habituated cells [ 33 ]. However, we did not detect differential expression of transposon-related sequences in TA(-)hab cells. It may be that differential expression of transposon-related sequences took place in TA(-)hab cells but at a level that was below 2.5 FC. Alternatively, activation of transposons could have occurred at an earlier stage of the habituation process (e.g., in the presence of TA) to be silenced later on due to epigenetic modifications [ 79 ].
However, activation of transposons was clearly induced in IXBhab cells that were still cultured in the presence of IXB (Additional file 5 Table S5). At least 10 transposon-related sequences, including copia-like retrotransposons, gypsy-like retrotransposons and a CACTA-like transposase family, were differentially regulated in IXBhab cells. This was also associated with upregulation of the expression of several genes coding for DNA and chromatin modification enzymes, such as: DDM1 , a member of the broad SWI2/SNF2 protein family promoting chromatin remodeling (FC 3.8); the cytosine methyltransferase MET1 (FC 3.7); histone deacetylases 2A (FC 3.9), 2B (FC 3.3), 2C (FC 2.7) and HDT4 (FC 9.5); the histone-lysine N-methyltransferases SUVH6 (FC 2.6) and SUVR2 (FC 3.7); and a high-mobility-group HMG1/2 family protein (FC 3.8) [ 78 ].
Differential expression of epigenetic-related genes suggests that DNA and chromatin modifications occur during the process of habituation to TA or IXB and are possibly involved in maintaining some of the features of TA(-)hab cells. Moreover, the fact that these changes occurred in IXBhab cells that were still grown in the presence of IXB suggests that DNA and chromatin modifications were initiated while cells were still in the presence of the inhibitor of cellulose synthesis. Whether these changes are related to durable resistance to inhibitors of cellulose synthesis or simply associated with the habituation process is still unknown.
Other genes of interest
Several other processes were affected in TA(-)hab cells, as many differentially expressed genes were found to be involved in processes such as hormone metabolism, transport, stress responses, regulation of transcription, protein modifications and signal transduction (Additional file 2 Table S2 and Additional file 5 Table S5). Most genes involved in biotic stress-related responses were downregulated, including genes known to be generally upregulated in response to pathogens, such as pathogenesis-related proteins, several members of disease-resistance protein families and chitinases. However, expression of several small heat shock factors was upregulated in TA(-)hab cells. Moreover, numerous genes encoding members of different transcription factor families, including WRKY, C2H2-type zinc finger, MYB and NAC domain-containing proteins, were differentially regulated in TA(-)hab cells, with about half upregulated and half downregulated. A similar pattern of expression was also found in IXBhab cells, although more genes involved in heat shock responses were downregulated. The fact that many transcription regulator genes are differentially expressed in TA(-)hab cells is not surprising, since so many different processes were affected in those cells. These data clearly indicate that the process of habituation is associated with very complex changes in gene expression that certainly alter the general metabolism of the habituated cells.
Analysis of expression data in poplar TA(-)hab cells demonstrated that durable resistance to inhibitors of cellulose synthesis was linked with a complex reprogramming of gene expression that was associated with expression of epigenetic-related genes. How these changes correlated with resistance to inhibitors of cellulose synthesis remains to be determined. Reprogramming of gene expression could occur in response to inhibition of cellulose synthesis during the habituation process or may be associated with or due to the effect of mutations that enhanced resistance to inhibition of cellulose synthesis. It is also possible that DNA and chromatin modifications were involved in establishing and/or maintaining the resistance to TA. In that case, these changes could theoretically be reverted, resulting in the restoration of cell sensitivity to the inhibitor of cellulose synthesis. However, while the composition of the cell walls of DCB-habituated bean cells cultured in the absence of DCB for several months was almost fully restored to control levels, resistance to high concentrations of DCB was still maintained in DCB-dehabituated cells, suggesting that resistance to DCB cannot be reversed in these cells [ 43 ]. Consequently, it is possible that stable and irreversible changes in DNA sequences (mutations) were required for resistance to DCB. Further characterization of the process of habituation to TA will be necessary to determine what changes are essential for the maintenance of resistance to TA. These results will not only be useful to understand how plant cells respond to the toxin, but may provide key information on a wide range of processes, including cellulose synthesis, cell wall organization, intracellular communication between the cell wall and the nucleus, and the activation of epigenetic-related changes in response to inhibition of cellulose synthesis. | Background
Thaxtomin A (TA), a phytotoxin produced by the phytopathogen Streptomyces scabies , is essential for the development of potato common scab disease. TA inhibits cellulose synthesis but its actual mode of action is unknown. Addition of TA to hybrid poplar ( Populus trichocarpa x Populus deltoides ) cell suspensions can activate a cellular program leading to cell death. In contrast, it is possible to habituate hybrid poplar cell cultures to grow in the presence of TA levels that would normally induce cell death. The purpose of this study is to characterize TA-habituated cells and the mechanisms that may be involved in enhancing resistance to TA.
Results
Habituation to TA was performed by adding increasing levels of TA to cell cultures at the time of subculture over a period of 12 months. TA-habituated cells were then cultured in the absence of TA for more than three years. These cells displayed reduced size and growth compared to control cells and had fragmented vacuoles filled with electron-dense material. Habituation to TA was associated with changes in cell wall composition, with a reduction in cellulose and an increase in pectin levels. Remarkably, a high level of resistance to TA was maintained in TA-habituated cells even after they had been cultured in the absence of TA. Moreover, these cells exhibited enhanced resistance to two other inhibitors of cellulose biosynthesis, dichlobenil and isoxaben. Analysis of gene expression in TA-habituated cells using an Affymetrix GeneChip Poplar Genome Array revealed that durable resistance to TA is associated with a major and complex reprogramming of gene expression implicating processes such as cell wall synthesis and modification, lignin and flavonoid synthesis, as well as DNA and chromatin modifications.
Conclusions
We have shown that habituation to TA induced durable resistance to the bacterial toxin in poplar cells. TA-habituation also enhanced resistance to two other structurally different inhibitors of cellulose synthesis that were found to target different proteins. Enhanced resistance was associated with major changes in the expression of numerous genes, including some genes that are involved in DNA and chromatin modifications, suggesting that epigenetic changes might be involved in this process. | Authors' contributions
VB carried out the habituation experiment, resistance assays and sugar quantification and helped to draft the manuscript; MGM carried out the pectin quantification, resistance assays and transcriptional analysis and helped to draft the manuscript; ID participated in the conception of the study and carried out the first habituation experiments and resistance assays; SL carried out the qPCR validation of microarray results; GG carried out the microscopy analyses; OD participated in the first habituation experiments and resistance assays; CB participated in the design and coordination of the study; NB conceived the study, participated in its design, carried out some of the transcriptional analysis and annotation, and drafted the manuscript. All authors read and approved the final manuscript.
Supplementary Material | Acknowledgements
We would like to thank M. Trottier and E. Capek for their help with sugar quantifications, L.-P. Hamel for help with poplar microarray analyses and Genome Québec, Innovation Center (McGill University, Montréal, Canada) for microarray data collection. This work was supported by grants from FQRNT, NSERC and Centre SEVE. | CC BY | no | 2022-01-12 15:21:46 | BMC Plant Biol. 2010 Dec 10; 10:272 | oa_package/88/9f/PMC3016406.tar.gz |
PMC3016407 | 21162722 | Background
Abscisic acid (ABA) is a phytohormone that controls seed germination, seedling growth and seed development [ 1 ]. In particular, ABA plays an essential role in the protective responses of plants to adverse environmental conditions, such as drought, high salinity and extreme temperatures [ 2 ].
At the molecular level, ABA-dependent processes entail changes in gene expression patterns. Numerous genes are either up- or down-regulated by ABA in seedlings [ 3 , 4 ]. The ABA regulation of these genes is generally at the transcriptional level, and a number of cis -regulatory elements responsible for the regulation by ABA have been determined [ 5 ]. One of these cis -elements contains the ACGTGGC core sequence. The element, which is similar to the G-box (CACGTG) present in many light-regulated promoters [ 6 ], is ubiquitous among ABA-regulated gene promoters and is generally known as the ABA response element (ABRE). Although necessary, a single copy of the G-box type ABRE is not sufficient to mediate ABA regulation, and multiple copies of ABRE or combinations of ABRE with another cis -element are required for full ABA induction of genes. For instance, an element known as CE3 (coupling element 3, A CGCGTG TCCTC) is required for the ABA induction of the barley HVA1 and OsEm genes [ 7 ]. Thus, CE3 and ABRE constitute an ABA response complex. Another coupling element, CE1 (TG CCACC GG), is necessary for the ABA regulation of the HVA22 gene [ 8 ]. In the RD29A gene, DRE (dehydration-responsive element, T ACCGAC AT) functions as a coupling element to ABRE [ 9 ].
A subfamily of bZIP proteins has been identified that mediate the ABA response via the G-box type ABRE in Arabidopsis [ 10 , 11 ]. Referred to as ABFs or AREBs, these proteins not only bind the ABRE but also mediate stress-responsive ABA regulation in Arabidopsis seedlings [ 12 ]. On the other hand, ABI5, which belongs to the same subfamily of bZIP proteins as ABFs/AREBs [ 13 , 14 ], mediates ABA response in the embryo. ABFs/AREBs were isolated based on their binding to ABRE. Subsequent study showed that they also bind the coupling element CE3 [ 10 ], which is functionally equivalent to ABRE [ 15 ].
The transcription factors that bind the CE1 element have not been reported yet. Among the known transcription factors involved in ABA response, ABI4 has been shown to bind the CE1 element [ 16 ]. However, the preferred binding site of ABI4 is CACCG, which differs from the CE1 element consensus CCACC. Thus, it has been suggested that AP2 domain proteins other than ABI4 would interact with CE1 [ 17 ].
To isolate CE1 element binding factors, we conducted yeast one-hybrid screens. From the screen of 3.6 million yeast transformants, we isolated 78 positive clones. Analysis of the clones revealed that a group of AP2/ERF domain proteins bind the CE1 element in yeast. Most of the CE1 binding factors (CEBFs) belong to the B-3 or the A-6 subfamily of AP2/ERF domain proteins [ 18 , 19 ]. We also found that overexpression of some of the CEBFs alters ABA and/or sugar responses in Arabidopsis. | Methods
One-hybrid screen
One-hybrid screen was conducted as described before [ 10 ]. To prepare reporter gene constructs, a trimer of the oligonucleotides, 5'-CAT TGCCACCGG CCC-3', and its complementary oligonucleotides were annealed and cloned into the Zero Blunt TOPO (Invitrogen) vector. The insert was then excised out by Spe I-Eco RV or Kpn I-Xho I digestion. The fragments were then cloned into pSK1, which was prepared by Bam HI digestion followed by Klenow treatment and Spe I digestion, and Kpn I-Xho I digested pYC7-I, respectively. The reporter constructs were sequentially introduced into YPH500 to prepare reporter yeast harboring HIS3/lacZ double reporters. Yeast transformation was carried out as described before [ 10 ], using the cDNA library DNA representing mRNA isolated from Arabidopsis seedlings treated with ABA and salt.
Approximately 3.6 million yeast transformants were screened, and 78 positive clones were isolated. The positive clones were grouped according to the restriction patterns after Eco RI and/or Hae III digestion of the insert DNA, which was prepared by PCR. Plasmid DNA was rescued from the representative clones of each group and other non-grouped clones and sequenced. Fifty-two positive clones were analyzed and sequenced. For the confirmation test shown in Figure 1A , plasmid DNA from the positive clones was re-introduced into the yeast reporter strain and activation of the lac Z reporter was examined by a filter lift assay.
Transcriptional assay
Transcriptional activity was determined employing a yeast system, as described previously [ 20 ]. The coding regions of CEBFs were prepared by PCR and individually cloned into the Sma I-Not I sites of the pPC62/LexA bait vector containing the LexA DNA-binding domain (DB). Primer sequences are available upon request. The bait constructs were subsequently introduced into L40 (MATα, his3Δ200, trp1-901, leu2-3112, ade2, LYS2::[LexAop(x4)-HIS3], URA3::LexAop(x8)-LacZ, GAL4) (Invitrogen, USA), which carries a lac Z reporter gene with LexA binding sites in its promoter. β-galactosidase activity was measured by liquid assay using ONPG (O-nitrophenyl- β -D-galactopyranoside) as a substrate and expressed in Miller units.
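For reference, β-galactosidase activity in Miller units is conventionally computed from the ONPG liquid assay as follows; the exact volume correction used in this assay was not stated, and the OD550 term of the original Miller formula, which corrects for cell debris, is often omitted in yeast assays:

$$\text{Miller units} = 1000 \times \frac{OD_{420} - 1.75 \times OD_{550}}{t \times V \times OD_{600}}$$

where $t$ is the reaction time in minutes and $V$ is the volume of culture used in the assay in mL.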
RNA isolation and expression analysis
RNA was isolated employing the RNeasy plant mini kit (Qiagen, USA). Northern blot analysis was carried out, as described previously [ 20 ]. For RT-PCR analysis, RNA was treated with DNase I to remove possible contaminating DNA before cDNA synthesis, and the first strand cDNA was synthesized using Superscript III (Invitrogen) according to the manufacturer's instructions. cDNA amplification was carried out within a linear range using gene-specific primers. For quantitative RT-PCR, the cDNA was diluted 10-fold, and PCR was performed with SsoFast EvaGreen supermix in a Bio-Rad CFX96 Real-Time PCR Systems (Bio-Rad) according to the supplier's instructions. Quantitation was carried out using the CFX96 Real-Time PCR Systems software, employing actin-1 as a reference gene. Primer sequences are available upon request.
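Relative quantification against a reference gene such as actin-1, as performed by real-time PCR analysis software, commonly follows the comparative Ct ($2^{-\Delta\Delta C_t}$) method; it is shown here for reference, with the caveat that it assumes amplification efficiencies close to 100% for both target and reference genes:

$$\Delta\Delta C_t = \left(C_{t,\text{target}} - C_{t,\text{actin-1}}\right)_{\text{sample}} - \left(C_{t,\text{target}} - C_{t,\text{actin-1}}\right)_{\text{calibrator}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_t}$$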
Determination of promoter activity and subcellular localization
To prepare promoter-GUS constructs, approximately 2.5 kb of 5' flanking sequences of AtERF13, RAP2.4 and RAP2.4L were prepared and cloned into pBI101.2 [ 22 ]. For AtERF13, the promoter fragment was amplified from genomic DNA using the primer set 5'-AAG CTT GGT ACT AGT ACT GCT AGG TTT CTC-3' and 5'-AAT GGA TTC TTG AAT GCT TCT GAA-3'. The resulting fragment was digested with Hind III and then ligated with pBI101.2, which was predigested with Hind III and Sma I. For RAP2.4 and RAP2.4L, the primer sets, 5'-acg cgtc gac CAT CCC TGT ACC ACT CAC TAT CTT ATT C -3' and 5'-GAA TCC GAA AAA ATT GAA CCT GAG AC-3', and 5'-acg cgt cga cTA ACA CAC AAA ATG TAC CGA AAG AAG-3' and 5'-CTG TGT AGA TTT CTG AGA GGA GGG A-3', were employed to amplify the promoter fragments. The PCR products were then digested with Sal I and ligated with pBI101.2 cut with Sal I-Sma I. Transformation of Arabidopsis plants (ecotype Landsberg erecta , L er ) was according to Bechtold and Pelletier [ 40 ]. Histochemical GUS staining was conducted as described before [ 41 ], using T2 or T3 generation plants.
To investigate the subcellular localization, the coding regions of AtERF13, RAP2.4 and RAP2.4L were fused with the EYFP coding region of p35S-FAST/EYFP in frame. The coding region of AtERF13 was prepared by PCR using the primers 5'-aag ccc ggg ATG AGC TCA TCT GAT TCC GTT AAT-3' and 5'- aag ccc ggg TAT CCG ATT ATC AGA ATA AGA ACA TT-3', and the amplified fragment was digested with Xma I prior to ligation with Xma I-cut p35S-FAST/EYFP. The coding region of RAP2.4 was amplified using the primer set 5'-aag gag ctc ATG GCA GCT GCT ATG AAT TTG TAC-3' and 5'- aag ccc ggg AGC TAG AAT CGA ATC CCA ATC GAT-3', whereas the coding region of RAP2.4L was amplified using the primers 5'- aag gag ctc ATG ACA ACT TCT ATG GAT TTT TAC AG-3' and 5'-aag ccc ggg ATT TAC AAG ACT CGA ACA CTG AAG-3'. The amplified fragments were treated with Sac I and Sma I and subsequently ligated with p35S-FAST/EYFP digested with the same enzymes.
Agrobacterium infiltration was according to Witt et al. and Voinnet et al. [ 42 , 43 ]. Tobacco ( Nicotiana benthamiana ) leaves were co-infiltrated with the Agrobacterium strains (C58C1) containing the above constructs and p19, respectively. The images of the tobacco epidermal cells were taken with the Olympus BX51 microscope with a YFP filter 40 hr after infiltration.
Generation of transgenic plants and phenotype analysis
To prepare OX vector constructs, the coding regions of AtERF13, RAP2.4 and RAP2.4L were amplified from a cDNA library and cloned into pBI121. The RAP2.4 coding region was amplified using the primers 5'- TAG GAT CCA TGG CAG CTG CTA TGA ATT TGT ACA CTT G-3' and 5'- TTG CCC CTA AGC TAG AAT CGA ATC CCA ATC-3'. The RAP2.4L coding region was amplified using the primers 5'- CCG GAT CCA TGA CAA CTT CTA TGG ATT TTT ACA GT-3' and 5'- CAA CAT CTA ATT TAC AAG ACT CGA ACA CT-3'. The amplified fragments were digested with Bam HI and cloned into pBI121, which was prepared by the removal of the GUS coding region after Bam HI-Eco ICRI digestion. The coding region of AtERF13 was prepared by PCR using the primer set 5'- CGT CTA GAA TGA GCT CAT CTG ATT CCG TTA ATA ACG G-3' and 5'- AAC TAA TTA TAT CCG ATT ATC AGA ATA AG-3'. The fragment was treated with Xba I and ligated with GUS-less pBI121, which was prepared by removal of the GUS coding region after Xba I-Eco ICRI digestion.
For the AtERF13 RNAi construct, the primers 5'-GGG GCG CGC CGC ATT TGA TTG GTT CTT GTA AGT ATG AG-3' and 5'- CGT AAA TTT ATA CTA TGG AAC CGA ATT TAG AAG-3' were used to amplify the 387 bp sense orientation fragment. The fragments were cloned into pFGC5941 after Asc I-Swa I digestion. Primers 5'- GGT CTA GAG CAT TTG ATT GGT TCT TGT AAG TAT GAG-3' and 5'- CGG GAT CCT ACT ATG GAA CCG AAT TTA GAA G-3' were employed to amplify the antisense fragment, which was cloned into pFGC5941 containing the sense fragment after Bam HI-Xba I digestion. The intactness of the cloned sequences of all of the constructs used in this study was confirmed by DNA sequencing.
Arabidopsis transformation was carried out as described above. More than ten transgenic lines were recovered for each CEBF, and T3 or T4 generation homozygous lines were employed for phenotype analysis, which was carried out as described before [ 41 ].
Seeds of knockout lines, SALK_093377 and SALK_091654 for RAP2.4 and RAP2.4L, respectively, were obtained from the Arabidopsis stock center. In the case of SALK_093377, homozygous knockout sublines were recovered from plants whose progeny segregated with a 3:1 ratio of kanamycin-resistant to kanamycin-susceptible seeds. In the case of SALK_091654, the plants were susceptible to kanamycin, and homozygous knockout sublines were recovered after genomic PCR of individual plants. Insertion of the T-DNA into the annotated position was confirmed by genomic PCR and sequencing of the amplified fragments.

Results
Isolation of CE1-binding proteins
To isolate genes encoding the proteins that bind the CE1 element, we conducted yeast one-hybrid screens [ 10 ]. A trimer of the CE1 element was cloned in front of the minimal promoters of the lac Z and the HIS3 reporter genes, respectively. The reporter constructs were then introduced into a yeast strain to create reporter yeast, which was subsequently transformed with cDNA library DNA. The library was prepared from mRNA isolated from ABA- and salt-treated Arabidopsis seedlings. The resulting transformants were screened for reporter activities. From the screen of 3.6 million yeast transformants, we obtained 78 positive clones and analyzed more than 50 clones.
Grouping of the positive clones based on their insert restriction patterns and subsequent DNA sequencing revealed that they encode a group of AP2/ERF superfamily transcription factors (Table 1 ). Twelve isolates encoded AtERF15 (At2g31230), ten isolates encoded ERF1 (At3g23240) and nine isolates encoded RAP2.4 (At1g78080). Other multiple or single isolates encoded AtERF1 (At4g17500), AtERF5 (At5g47230), AtERF13 (At2g44840) and seven other AP2/ERF family proteins. Among the 13 AP2/ERF proteins isolated, nine belong to the B-3 subfamily, three belong to the A-6 subfamily and one belongs to the B-2 subfamily. Thus, a group of AP2/ERF proteins, especially those belonging to the subgroup B-3, was isolated as CE1-binding factors in our one-hybrid screen. We designated the proteins CEBFs (CE1 binding factors).
DNA-binding and transcriptional activities of CEBFs
Binding of a number of CEBFs, which were isolated as multiple isolates (Table 1 ), to the CE1 element was confirmed in yeast. Plasmid DNA was isolated from the positive clones, and their binding to CE1 was determined by investigating their ability to activate the CE1-containing lac Z reporter gene. Figure 1A shows the results obtained with six different positive clones: AtERF15, AtERF5, AtERF1, AtERF13, RAP2.4 and RAP2.12. The four AtERFs, which belong to the B-3 subfamily, could activate the reporter gene containing the CE1 element but not the reporter gene lacking the CE1 element. The CE1-dependent reporter activation was observed with medium containing galactose but not with the medium containing glucose. Thus, reporter activation was also dependent on the presence of galactose, which is an inducer of the GAL1 promoter that drives the expression of the cDNA clones. Similarly, RAP2.12 and RAP2.4, which belong to the B-2 and the A-6 subfamily, respectively, could also activate the reporter gene, and the activation was CE1- and galactose-dependent.
CEBFs are putative transcription factors; accordingly, we wanted to determine if they possess transcriptional activity. To accomplish this, the transcriptional activity of CEBFs was examined employing a yeast assay system. The coding regions of seven CEBFs were individually cloned in frame with the Lex A DB in the vector pPC62LexA [ 20 ]. The hybrid constructs were then introduced into the yeast strain L40, which carries a lac Z reporter gene with an upstream Lex A operator in its promoter. Figure 1B shows that AtERF13 has the highest transcriptional activity among the seven CEBFs. RAP2.12 also possesses high transcriptional activity, while RAP2.4, RAP2.4L (At1g22190), AtERF5 and AtERF15 displayed relatively lower transcriptional activity. AtERF1 was found to have very low transcriptional activity.
Expression patterns of CEBFs
The expression patterns of nine CEBFs in seedlings were examined by coupled reverse transcription and polymerase chain reaction (RT-PCR). Because the tissue-specific expression patterns of many AP2/ERF domain proteins have been reported [ 21 ], we focused on the ABA and stress induction patterns of CEBFs. Figure 2A shows that the expression of AtERF1, AtERF2, AtERF13 and AtERF15 was induced by high salt. In the case of AtERF13, its expression was also induced by high osmolarity (i.e., mannitol). The expression of other CEBFs was largely constitutive or their induction levels were very low.
For AtERF13, RAP2.4 and At1g22190, which was designated RAP2.4L (RAP2.4-like) because of its high similarity to RAP2.4, we examined their tissue-specific expression patterns in detail by investigating their promoter activity. Transgenic plants harboring the promoter-GUS reporter constructs were prepared, and histochemical GUS staining was carried out to determine their temporal and spatial expression patterns.
With AtERF13, GUS activity was observed only in the shoot meristematic region and the emerging young leaves in seedlings (Figure 2B ). Thus, AtERF13 expression in seedlings was specific to the shoot meristem region. During the reproductive stage, GUS activity was observed in the carpels. On the other hand, GUS activity was observed in most of the tissues with the RAP2.4L promoter (Figure 2C ). GUS activity was not observed in the immature embryo, but it was detected in the mature embryo and most of the seedling tissues. The GUS activity was strong in most of the tissues, although relatively weaker activity was observed in young leaves and the lateral root tips including the meristem and the elongation zone. Strong GUS activity was also observed in reproductive organs such as sepals, filaments, style and abscission zone. The GUS staining pattern of the transgenic plants harboring the RAP2.4 promoter construct was similar to that of the plants harboring the RAP2.4L promoter construct (Figure 2D ). In general, stronger GUS activity was observed with the RAP2.4 promoter, and, unlike the RAP2.4L promoter, its activity was detected in the emerging young leaves.
To obtain further clues to the function of AtERF13, RAP2.4L and RAP2.4, we determined their subcellular localization. The coding regions of the CEBFs were individually fused to EYFP under the control of the 35 S promoter, and the localization of the fusion proteins was examined after Agroinfiltration of tobacco leaves. Figure 2E shows that YFP signal is detected in the nucleus with the AtERF13 construct. Similarly, the YFP signal was also observed in the nucleus with RAP2.4L and RAP2.4. Thus, our results indicate that AtERF13, RAP2.4L and RAP2.4 are localized in the nucleus.
In vivo functions of CEBFs
Our transcriptional assay (Figure 1B ) showed that AtERF13 has the highest transcriptional activity among CEBFs, and its expression was highly inducible by high salt (Figure 2A ). Hence, we chose AtERF13 for functional analysis. To determine the in vivo function of AtERF13, we generated its overexpression (OX) lines. The coding region of AtERF13 was fused to the CaMV 35 S promoter employing the pBI121 vector [ 22 ], and after transformation of Arabidopsis, T3 or T4 generation transgenic plants were recovered and used for phenotype analysis.
AtERF13 OX lines exhibited minor growth retardation (Figure 3A ), and mature plants were slightly smaller than the wild type plants (not shown). However, other than minor dwarfism, the overall growth pattern was normal. Because the CE1 element is an ABA response element, we determined the ABA-associated phenotypes to address whether AtERF13 overexpression affected ABA response. Germination of the transgenic plants was slightly slower (~2 hr) in ABA-free medium (not shown), but the ABA sensitivity of transgenic seed germination was similar to that of the wild type plants (not shown).
Unlike the seed germination, postgermination growth of the AtERF13 OX lines exhibited altered ABA response. Figure 3B and Figure 3C show that shoot development of the transgenic plants was inhibited severely at low concentrations of ABA. For instance, cotyledons of less than 50% of the transgenic plants turned green at 0.5 μM ABA, and true leaf development was not observed with any of the transgenic plants. By contrast, shoot development of wild type seedlings was not affected significantly by the same concentration of ABA. Similarly, root growth of the AtERF13 OX lines was also severely inhibited at 0.5 μM ABA, whereas root growth of the wild type plants was affected much less (Figure 3D ). Thus, postgermination growth of the AtERF13 OX lines was hypersensitive to ABA.
We next examined the glucose sensitivity of the AtERF13 OX lines. Glucose inhibits shoot development, i.e., cotyledon greening and true leaf development, and the inhibition process is mediated by ABA [ 23 ]. Figure 3E and Figure 3F show that glucose-dependent arrest of shoot development was much more severe in the AtERF13 OX lines. At 3% glucose, cotyledon greening of the wild type plants was not affected noticeably. By contrast, cotyledon greening efficiency of the transgenic plants was reduced to approximately 50%. At 4% glucose, shoot development was observed with approximately 50% of the wild type plants, whereas less than 10% of the OX lines developed green cotyledons. Thus, our results indicated that AtERF13 OX lines are hypersensitive to glucose. We did not observe changes in mannitol (Figure 3E ) or salt (Additional file 1 ) response in parallel experiments, suggesting that the effect is glucose-specific.
We conducted similar experiments to investigate the in vivo function of RAP2.4L, which belongs to the A-6 subfamily and whose function has not been reported yet. RAP2.4L OX lines were constructed, and their phenotypes were scored to address its involvement in ABA response. The transgenic plants displayed minor growth retardation (Figure 4A ), but no distinct changes in ABA sensitivity were observed. On the other hand, the RAP2.4L OX lines displayed altered response to glucose. Figure 4B and Figure 4C show that shoot development of the RAP2.4L OX lines was more severely inhibited by 3% and 4% glucose than the wild type plants. As mentioned above, RAP2.4 is highly homologous to RAP2.4L. Therefore, we prepared RAP2.4 OX lines and analyzed their phenotypes as well (see Discussion). We did not observe changes in ABA sensitivity; however, similar to RAP2.4L OX lines, the RAP2.4 OX lines were hypersensitive to glucose (Figure 4B and Figure 4D ). We also examined the salt tolerance of RAP2.4L and RAP2.4 OX lines. The results showed that postgermination growth, i.e., cotyledon greening and true leaf development of both transgenic lines was more severely inhibited at 125 and 150 mM NaCl than wild type plants. The salt sensitivity of RAP2.4 OX lines was more pronounced than that of RAP2.4L. We did not observe changes in mannitol sensitivity in either the RAP2.4 or the RAP2.4L OX lines (Additional file 2 ).
To further confirm their involvement in ABA response, we analyzed knockout lines of RAP2.4L and RAP2.4 and RNAi lines of AtERF13. We did not observe distinct phenotypes with the transgenic plants, presumably because of the functional redundancy among CEBFs.
To investigate the target genes of AtERF13, we determined the changes in the expression levels of a number of ABA-responsive genes by Real-Time RT-PCR. Among the genes we investigated, the expression level of COR15A increased significantly in the AtERF13 OX lines (Figure 5 ). Slight increases in ADH1 expression were also observed. By contrast, RAB18 expression decreased or increased slightly in the transgenic lines. Similar analysis showed that COR15A and ADH1 expression levels were enhanced in the RAP2.4L and the RAP2.4 OX lines. Increase in the RAB18 expression level was also observed in the RAP2.4 OX line (#3). The three genes whose expression levels were altered in the transgenic lines have the G-box type ABREs in their promoter regions and are inducible by both ABA and various abiotic stresses. Additionally, COR15A and RAB18 genes have a sequence element (i.e., CCGAC) that can function as another coupling element, DRE, although the CE1 core sequence, CCACC, was not found.

Discussion
We isolated genes encoding CE1 element binding factors (CEBFs) employing a yeast one-hybrid system. CEBFs belong to the AP2/ERF superfamily of transcription factors [ 18 , 19 ]. The AP2/ERF proteins are classified into three families: AP2, ERF and RAV. Whereas AP2 and RAV family members possess an additional AP2 or B3 DNA-binding domain, ERF family members possess a single AP2/ERF domain. The ERF family is further divided into two subgroups, the DREB/CBF subfamily (group A) and the ERF subfamily (group B) [ 19 ]. Among the 52 positive clones we analyzed, 39 encoded B group proteins (i.e., B-3 subfamily members), whereas 13 encoded A group proteins (i.e., A-6 subfamily members) (Table 1 ).
The in vitro binding sites of many AP2/ERF superfamily proteins have been studied in detail. The DRE core sequence, i.e., the binding site for DREB1A and DREB2A, which are representative members of the DREB/CBF subfamily, is A/GCCGAC [ 19 ]. The GCC box core sequence, which is the consensus binding site for ERF family members, is AGCCGCC [ 24 ]. Thus, the two sequences share the CCGNC consensus sequence, the central G being essential for high affinity binding. On the other hand, the core sequence of the CE1 element is CCACC, which differs from the DRE and the GCC box core sequences. The results of our one-hybrid screen indicate that a subset of AP2/ERF family members (i.e., at least ten B-3/B-2 subgroup members and three A-6 subgroup proteins) bind the CE1 element in yeast.
Several of the CEBFs have been reported as GCC box binding proteins. For example, the preferred in vitro binding site of AtERF1, AtERF2 and AtERF5 is the wild type GCC box, AGCCGCC [ 25 ]. Mutations of the Gs at the second and fifth positions reduced their binding activity to less than 10% of that obtained with the wild type sequence. Similarly, the mutation of the second G of the core sequence greatly reduced the in vitro binding of RAP2.4 [ 26 ]. However, in our one-hybrid screen, AtERF1, AtERF5 and RAP2.4 were isolated as multiple isolates (i.e., 4, 5 and 9 isolates, respectively). The result suggests that these proteins can interact with the non-GCC box sequence, CCACC, under physiological conditions (i.e. in yeast).
AP2/ERF proteins are involved in various cellular processes, including biotic and abiotic stress responses [ 18 , 19 ]. Many DREB/CBF family proteins (e.g., DREB1A, DREB1B, DREB1C, DREB2A, RAP2.1 and RAP2.4) are involved in ABA-independent abiotic stress responses [ 19 , 26 , 27 ], whereas ERF family members (e.g., ERF1, ORA59, AtERF2, AtERF4, AtERF14, and RAP2.3) are generally involved in ethylene and pathogen defense responses [ 18 , 28 - 34 ]. In particular, several of the AP2/ERF proteins are involved in ABA response. ABI4, which belongs to the DREB/CBF subfamily, is a positive regulator of ABA and sugar responses [ 35 ]. DREB2C and maize DBF1 are also positive regulators of ABA response [ 36 , 37 ]. On the other hand, AtERF7 [ 38 ], ABR1 [ 39 ] and AtERF4 [ 34 ] are ERF subfamily proteins that are negative regulators of ABA response.
To determine the in vivo functions of CEBFs in ABA response, we generated their OX lines and acquired knockout lines for phenotype analysis when available. As mentioned above, several CEBFs (i.e., ERF1, AtERF2 and ORA59) are known to regulate defense responses. However, their involvement in ABA response and the functions of other CEBFs have not been reported yet. Here, we present our results obtained with CEBFs, AtERF13 and RAP2.4L. AtERF13 was found to possess very high transcriptional activity in yeast (Figure 1B ) and to be localized in the nucleus. Its expression was limited to the shoot meristem region and young emerging leaves (Figure 2B ), implying that it may play a role in shoot growth or development. Consistent with this notion, AtERF13 OX lines exhibited minor dwarfism (Figure 3A ). The growth retardation observed in the OX lines may reflect the normal inhibitory role of AtERF13 or be the result of its ectopic overexpression. However, we think that AtERF13 probably plays a role in growth regulation. Because we could not obtain its knockout lines, we prepared and analyzed its RNAi lines. Our results showed that the RNAi lines grew faster than wild type plants (Additional file 3 ), suggesting that AtERF13 may inhibit seedling growth.
Overexpression of AtERF13 conferred ABA hypersensitivity during postgermination growth. As shown in Figure 3 , both shoot and root growth were severely inhibited by a low concentration of ABA, which had little effect on wild type seedling growth. Additionally, the AtERF13 OX lines were hypersensitive to glucose, whose effect is mediated by ABA. We did not carry out extensive expression analysis of ABA-responsive genes in AtERF13 OX lines. However, our limited target gene analysis showed that expression of several ABA-responsive genes was affected by AtERF13 (Figure 5 ). Thus, our results strongly suggest that AtERF13 may be involved in ABA response. As mentioned in the Results, we did not observe distinct phenotypes with AtERF13 RNAi lines except faster seedling growth, presumably because of the functional redundancy among CEBFs.
In the case of RAP2.4L, we did not observe changes in ABA sensitivity in its OX lines, although we observed up-regulation of several ABA-responsive genes (Figure 5 ). However, the transgenic lines were glucose-hypersensitive, suggesting that it may be involved in sugar response (Figure 4B ). We also analyzed its knockout lines, but did not observe distinct phenotypes (not shown). RAP2.4 is the closest homologue of RAP2.4L; therefore, we also analyzed its OX and knockout phenotypes. We did not observe alterations in ABA response in either the OX or the knockout lines of RAP2.4 (not shown). The results are consistent with those observed by Lin et al. [ 26 ], who reported that RAP2.4 is involved in light, ethylene and ABA-independent drought tolerance but not in ABA response. However, similar to RAP2.4L OX lines, RAP2.4 OX lines were glucose-sensitive and both RAP2.4 and RAP2.4L OX lines were salt-sensitive (Figure 4E-4G ). Additionally, single or double knockout lines of RAP2.4 and RAP2.4L grew faster than wild type plants (Additional file 3 ), suggesting their role in seedling growth control.
It is not known whether other CEBFs are involved in ABA response. Another important question to be addressed is the mechanism of their function, if they are involved in ABA response. CE1 constitutes an ABA response complex with the G-box type ABRE and functions in combination with ABRE. Therefore, CEBFs are expected to interact with the transcription factors ABFs/AREBs, which mediate ABA response in seedlings via the G-box type ABRE. In the case of DREB2C, which binds another coupling element DRE, its physical interaction with ABFs/AREBs has been demonstrated [ 37 ]. It would be worthwhile to determine whether CEBFs can physically interact with ABFs/AREBs. As described before, several CEBFs mediate plant defense response. Thus, our results raise an interesting possibility that CE1 may be a converging point of ABA and defense responses.

Conclusions
We conducted a one-hybrid screen to isolate proteins that interact with the coupling element CE1 and isolated a group of AP2/ERF superfamily proteins designated as CEBFs. To determine the function of CEBFs, we examined their expression patterns and prepared OX lines for phenotype analysis. Our results showed that the AtERF13 OX lines are ABA- and glucose-hypersensitive. The OX lines of two other CEBFs, RAP2.4 and RAP2.4L, were glucose-hypersensitive. Thus, overexpression of the three CEBFs resulted in alterations in ABA and/or sugar response. In addition, several ABA-regulated genes were up-regulated in the transgenic lines. Taken together, our data strongly suggest that the three CEBFs evaluated in this study are involved in ABA or stress response. The functions of other CEBFs remain to be determined.

Background
Abscisic acid (ABA) is a plant hormone that controls seed germination, protective responses to various abiotic stresses and seed maturation. The ABA-dependent processes entail changes in gene expression. Numerous genes are regulated by ABA, and promoter analyses of the genes revealed that cis -elements sharing the ACGTGGC consensus sequence are ubiquitous among ABA-regulated gene promoters. The importance of the core sequence, which is generally known as ABA response element (ABRE), has been demonstrated by various experiments, and its cognate transcription factors known as ABFs/AREBs have been identified. Although necessary, ABRE alone is not sufficient, and another cis -element known as "coupling element (CE)" is required for full range ABA-regulation of gene expression. Several CEs are known. However, despite their importance, the cognate transcription factors mediating ABA response via CEs have not been reported to date. Here, we report the isolation of transcription factors that bind one of the coupling elements, CE1.
Results
To isolate CE1 binding proteins, we carried out yeast one-hybrid screens. Reporter genes containing a trimer of the CE1 element were prepared and introduced into a yeast strain. The yeast was transformed with library DNA that represents RNA isolated from ABA-treated Arabidopsis seedlings. From the screen of 3.6 million yeast transformants, we isolated 78 positive clones. Analysis of the clones revealed that a group of AP2/ERF domain proteins binds the CE1 element. We investigated their expression patterns and analyzed their overexpression lines to investigate the in vivo functions of the CE element binding factors (CEBFs). Here, we show that one of the CEBFs, AtERF13, confers ABA hypersensitivity in Arabidopsis, whereas two other CEBFs enhance sugar sensitivity.
Conclusions
Our results indicate that a group of AP2/ERF superfamily proteins interacts with CE1. Several CEBFs are known to mediate defense or abiotic stress response, but the physiological functions of other CEBFs remain to be determined. Our in vivo functional analysis of several CEBFs suggests that they are likely to be involved in ABA and/or sugar response. Together with previous results reported by others, our current data raise an interesting possibility that the coupling element CE1 may function not only as an ABRE but also as an element mediating biotic and abiotic stress responses.

Authors' contributions
SL conducted the expression analysis and analyzed the OX and KO lines. JHP conducted yeast one-hybrid screens. MHL and JY prepared OX lines and analyzed their phenotypes. SYK designed experiments and wrote the paper. All authors read and approved the final manuscript.
Supplementary Material

Acknowledgements
This work was supported in part by the Korea Research Foundation grant funded by the Korean government (MOEHRD) (KRF-313-2007-2-C00700) and the Mid-career Researcher Program through an NRF grant funded by the MEST (No. 2008-0059137). The authors are grateful to the Kumho Life Science Laboratory of Chonnam National University for providing equipment and plant growth facilities.

License: CC BY. Citation: BMC Plant Biol. 2010 Dec 16; 10:277.
PMC3016408 (PMID: 21246049)

Introduction
Human short-term memory is severely limited. While the existence of such limits is undisputed, there is ample debate concerning their nature. Miller [1] described the ability to increase storage capacity by grouping items, or “chunking”. He argued that the span of attention could comprehend somewhere around seven information items. Chunk structure is recursive, as chunks may contain other chunks as items: paragraphs built out of phrases built out of words built out of letters built out of strokes. This mechanism is used to explain the cognitive capacity to store a seemingly endless flux of incoming, pre-registered, information, while remaining unable to absorb and process new (non-registered) information in highly parallel fashion.
Miller's ‘magic number seven’ has been the subject of much debate over the decades. Some cognitive scientists have modeled such limits by simply using (computer-science) “pointers”, or “slots” (e.g., [2] , [3] —see [4] , [5] for debate). However, such approaches do not seem plausible given the massively parallel nature of the brain, and we believe memory limits are an emergent property of the neural architecture of the human brain. As Hofstadter put it a quarter of a century ago [6] : the “problem with this [slot] approach is that it takes something that clearly is a very complex consequence of underlying mechanisms and simply plugs it in a complex structure, bypassing the question of what those underlying mechanisms might be.”(p. 642)
Our objective in this paper is to study these memory limits as emergent effects of underlying mechanisms. We postulate two mechanisms previously discussed in the literature. The first is a mathematical model of human memory brought forth by Kanerva [7] , called Sparse Distributed Memory (SDM). We also presuppose, following [8] , an underlying mechanism of chunking through averaging. It is not within the scope of this study to argue for the validity of SDM as a cognitive model; for incursions on this broader topic, we refer readers to [9] – [11] , which discuss the plausibility of this Vector Symbolic Architecture family of models (in which SDM is contained).
This work, while similar in its mathematical foundations, is different from previous capacity analyses: in [7] , the memory capacity analysis of SDM relates to its long-term memory mechanisms, while we study its short-term memory limits. Our work also differs from that of Plate, in that, regardless of the number of items presented, the memory will only store (and subsequently retrieve) a psychologically plausible number of items. The difference becomes salient in Plate's own description [12] : “As more items and bindings are stored in a single HRR the noise on extracted items increases. If too many associations are stored, the quality will be so low that the extracted items will be easily confused with similar items or, in extreme cases, completely unrecognizable”(p. 139). Plate focused on long-term memory; we focus on Miller's STM limits.
A number of theoretical observations are drawn from our computations: i) a range of plausible numbers for the dimensions of the memory, ii) a minimization of a current controversy between different ‘magic number’ estimates, and iii) potential empirical tests of the chunking through averaging assumption. We should start with a brief description of our postulates: i) the SDM, and ii) chunking through averaging.
Sparse Distributed Memory
The Sparse Distributed Memory (SDM), developed in [7] , defines a memory model in which data is stored in distributed fashion in a vast, sparsely populated, binary address space. In this model, (a number of) neurons act as address decoders . Consider the space {0, 1}^n: SDM's address space is defined allowing 2^n possible locations, where n defines both the word length and the number of dimensions of the space: the memory holds binary vectors of length n. In SDM, the data is the same as the medium in which it is stored (i.e., the stored items are n-bit vectors in n-dimensional binary addresses).
SDM uses Hamming distance as a metric between any two n-bit vectors (hereafter memory items, items, elements, or bitstrings—according to context). Neurons, or hard locations (see below), in Kanerva's model, hold random bitstrings with equal probability of 0's and 1's—Kanerva [13] , [14] has been exploring a variation of this model with a very large number of dimensions (around 10000). (With the purpose of encoding concepts at many levels, the Binary Spatter Code—or BSC—, shares numerous properties with SDM.) By using the Hamming distance as a metric, one can readily see that the distance between any two points in the space follows the binomial distribution, approximated by a normal curve with mean at n/2 and standard deviation √n / 2. Given the Hamming distance, and large n, most of the space lies close to the mean. A low Hamming distance between any two items means that these memory items are associated. A distance that is close to the mean means that the memory items are orthogonal to each other. This reflects two facts about the organization of human memory: i) orthogonality of random concepts , and ii) close paths between random concepts .
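These distance statistics are easy to check numerically. The sketch below (the sample size is an illustrative choice of ours; n = 1000 matches the text's running example) estimates the mean and standard deviation of Hamming distances between random bitstrings:

```python
import random

random.seed(0)
n = 1000  # dimensions / word length, as in the text's running example

def hamming(a, b):
    # Hamming distance: number of positions at which two bit lists differ
    return sum(x != y for x, y in zip(a, b))

def random_bitstring(n):
    return [random.getrandbits(1) for _ in range(n)]

# Sample pairwise distances between independent random bitstrings
samples = [hamming(random_bitstring(n), random_bitstring(n)) for _ in range(200)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

# Binomial(n, 1/2): mean n/2 = 500, standard deviation sqrt(n)/2 ~ 15.8
print(mean, std)
```

With 200 samples the empirical mean lands very near n/2 = 500 and the spread near √n / 2 ≈ 15.8, confirming that almost all of the space sits within a few dozen bits of the mean distance.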
Orthogonality of random concepts : the vast majority of concepts is orthogonal to all others. Consider a non-scientific survey during a cognitive science seminar, where students asked to mention ideas unrelated to the course brought up terms like birthdays , boots , dinosaurs , fever , executive order , x-rays , and so on. Not only are the items unrelated to cognitive science, the topic of the seminar, but they are also unrelated to each other.
Close paths between concepts : The organization of concepts seems to present a ‘small world’ topology–for an empirical approach on words, for instance, see [15] . For any two memory items, one can readily find a stream of thought relating two such items (“Darwin gave dinosaurs the boot ”; “she ran a fever on her birthday ”; “isn't it time for the Supreme Court to x-ray that executive order ?” ...and so forth). Robert French presents an intriguing example in which one suddenly creates a representation linking the otherwise unrelated concepts of “coffee cups” and “old elephants” [16] . In sparse distributed memory, any two bitstrings with Hamming distance around n/4 would be extremely close, given the aforementioned distribution. And n/4 is the expected distance of an average point between two random bitstrings.
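This n/4 figure can be illustrated directly. A minimal sketch, under our (illustrative) assumption that the "average" of two bitstrings keeps the bits on which they agree and breaks each disagreement at random:

```python
import random

random.seed(1)
n = 1000

a = [random.getrandbits(1) for _ in range(n)]
b = [random.getrandbits(1) for _ in range(n)]

# Midpoint ("average") of two bitstrings: keep agreed bits,
# break each disagreement at random (an assumption for illustration).
mid = [x if x == y else random.getrandbits(1) for x, y in zip(a, b)]

d_mid_a = sum(p != q for p, q in zip(mid, a))  # distance midpoint -> a
d_a_b = sum(p != q for p, q in zip(a, b))      # distance a -> b
print(d_mid_a, d_a_b)
```

Two random bitstrings sit roughly n/2 = 500 bits apart, while their midpoint sits roughly n/4 = 250 bits from each of them, many standard deviations below the mean distance, hence "extremely close" in this space.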
Of course, for large n (such as n = 1000), it is impossible to store all (or even most) of the 2^n points of the space—by comparison, the entire universe is estimated to hold only a finite, and far smaller, storage capacity in bits [17] . It is here that Kanerva's insights concerning sparseness and distributed storage and retrieval come into play: 2^20—or a number around one million—physical memory locations, called hard locations, could enable the representation of a large number of different bitstrings. Items of a large space with, say, 2^1000 possible locations would be stored in a mere 2^20 hard locations—the memory is indeed sparse.
In this model, every single item is stored in several hard locations, and can, likewise, be retrieved in distributed fashion. Storage occurs by distributing the item over every hard location within a certain threshold ‘radius’ given by the Hamming distance between the item's address and the associated hard locations. Different threshold values for different numbers of dimensions are used (in his examples, Kanerva used 100, 1000 and 10000 dimensions). For n = 1000, the distance from a random point of the space to its nearest (out of the one million) hard locations will be approximately 424 bits [7] (p.56). In this scenario, a threshold radius of 451 bits will define an access sphere containing around 1000 hard locations. In other words, from any point of the space, approximately 1000 hard locations lie within a 451-bit distance. All of these accessible hard locations will be used in storing and retrieving items from memory. We therefore say that a hard location h belongs to the access sphere of a point x iff d(x, h) ≤ r, where r defines an access radius around x (451 bits if n = 1000) and d is the Hamming distance.
A brief example of a storage and retrieval procedure in SDM is in order: to store an item w at a given (virtual) location x (in sparse memory) one must activate every hard location within the access sphere of x (see below) and store the datum in each one. Hard locations carry adders, one for each dimension. To store a bitstring w at a hard location h, one must iterate through the adders of h: if the i-th bit of w is 1, increment the i-th adder of h; if it is 0, decrement it. Repeating this for all hard locations in x's access sphere will distribute the information in w throughout these hard locations.
Retrieval of data in SDM is also massively collective and distributed: to peek at the contents of a hard location, one computes its related bit vector from its adders, assigning the i-th bit as a 1 or 0 if the i-th adder is positive or negative, respectively (a coin is flipped if it is 0). Notice, however, that this information in itself is meaningless and may not correspond to any one specific datum previously registered. To read from a location x in the address space, one must activate the hard locations in the access sphere of x and gather each related bit vector. The stored datum will be the majority-rule decision of all activated hard locations' related bit vectors. If, for the i-th bit, the majority of all bit vectors is 1, the final read datum's i-th bit is set to 1, otherwise to 0. Thus, “SDM is distributed in that many hard locations participate in storing and retrieving each datum, and one hard location can be involved in the storage and retrieval of many data” [18] (p. 342).
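The store-and-read cycle above can be sketched in a few lines. This is a minimal illustration, not the paper's configuration: the dimensionality (n = 256), number of hard locations (200), and access radius (120 bits) are invented small-scale parameters chosen so the example runs quickly.

```python
# Minimal SDM sketch (illustrative parameters, not the paper's n = 1000 /
# one-million-hard-location setup): adder-based storage, then a
# majority-rule read over the access sphere.
import random

rng = random.Random(7)
n = 256        # dimensions (kept small for illustration)
H = 200        # number of hard locations
RADIUS = 120   # access radius in bits, tuned for this small n

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

hard_addresses = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(H)]
adders = [[0] * n for _ in range(H)]

def access_sphere(x):
    """Indices of hard locations within RADIUS bits of address x."""
    return [j for j, h in enumerate(hard_addresses) if hamming(x, h) <= RADIUS]

def store(x, w):
    """Distribute datum w over every hard location in x's access sphere:
    increment adder i where bit i of w is 1, decrement where it is 0."""
    for j in access_sphere(x):
        for i, bit in enumerate(w):
            adders[j][i] += 1 if bit else -1

def related_bitvector(j):
    """A hard location's bit vector: sign of each adder (coin flip on 0)."""
    return [1 if a > 0 else 0 if a < 0 else rng.randrange(2)
            for a in adders[j]]

def read(x):
    """Majority-rule decision over the bit vectors of activated locations."""
    vecs = [related_bitvector(j) for j in access_sphere(x)]
    out = []
    for i in range(n):
        ones = sum(v[i] for v in vecs)
        out.append(1 if 2 * ones > len(vecs)
                   else 0 if 2 * ones < len(vecs)
                   else rng.randrange(2))
    return tuple(out)

datum = tuple(rng.randrange(2) for _ in range(n))
store(datum, datum)          # autoassociative store: address = content
print(read(datum) == datum)  # exact-address read recovers the stored datum

noisy = list(datum)
for i in rng.sample(range(n), 15):  # corrupt the cue by 15 bits
    noisy[i] ^= 1
# distance of the cue, then of the read result, to the datum; the read
# typically lands much closer than the cue did
print(hamming(tuple(noisy), datum), hamming(read(tuple(noisy)), datum))
```

With a single stored item, every activated location votes exactly that item's bits, so the exact-address read is lossless; the noisy-cue read illustrates the distributed error correction discussed in the text.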
All hard locations within an access radius collectively point to an address. Note also that this process is iterative: the address obtained may not have information stored on it, but it provides a new access sphere from which to (possibly) converge to the desired original address. One particularly impressive characteristic of the model is its ability to simulate the “tip-of-the-tongue” phenomenon, in which one is certain about some features of the desired memory item, yet has difficulty in retrieving it (sometimes being unable to do so). If the requested address is far enough from the original item (209 bits if n = 1000), iterations of the process will not decrease the distance—and time to convergence goes to infinity.
The model is robust against errors for at least two reasons: i) the contribution of any one hard location, in isolation, is negligible, and ii) the system can readily deal with incomplete information and still converge to a previously registered memory item. The model's sparse nature dictates that any point of the space may be used as a storage address, whether or not it corresponds to a hard location; its distributed nature, with about one million hard locations, “virtualizes” the large address space. Kanerva [7] also discusses the biological plausibility of the model, as the linear threshold function given by the access radius can be readily computed by neurons, and he suggests the interpretation of some particular types of neurons as address decoders. Given these preliminaries concerning the Sparse Distributed Memory, we now proceed to our second premise: chunking through averaging .
Chunking through averaging
To chunk items, the majority rule is applied to each bit: given k bitstrings to be chunked, for each of the n bits, if the majority is 1, the resulting bitstring's chunk bit is set to 1; otherwise it is 0. In case of perfect ties (no majority), a coin is flipped.
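A sketch of this per-bit majority rule (with coin-flipped ties):

```python
# Per-bit majority-rule chunking: 1 wins a column on a strict majority,
# 0 on a strict minority, and a perfect tie is decided by a coin flip.
import random

def chunk(bitstrings, rng=random):
    k = len(bitstrings)
    out = []
    for column in zip(*bitstrings):       # one vote per bitstring, per bit
        ones = sum(column)
        if 2 * ones > k:
            out.append(1)
        elif 2 * ones < k:
            out.append(0)
        else:
            out.append(rng.randrange(2))  # perfect tie: flip a coin
    return tuple(out)

print(chunk([(1, 1, 0, 0), (1, 0, 0, 1), (1, 1, 1, 0)]))  # -> (1, 1, 0, 0)
```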
We have chosen the term ‘chunking’ to describe an averaging operation, and ‘chunk’ to describe the resulting bitstring, because, through this operation, the original components generate a new one to be written to memory. The reader should note that, in SDM's family of high-dimensional vector models, called Vector Symbolic Architectures (VSA), the operation that generates composite structures is commonly known as superposition [10] – [12] .
Obviously, given a relatively small k, this new chunked bitstring may be closer, in terms of Hamming distance, to the original elements than the mean distance between random elements (500 bits if n = 1000). The chunk may then be stored in the memory, and it may be used in future chunking operations, thus allowing for recursive behavior. With these preliminaries, we turn to numerical results in the analysis section.

Results and Discussion
The chunking through averaging postulate
Consider the assumption of chunking through averaging. We propose that it is plausible and worthy of further investigation, for three reasons.
First, it minimizes the current controversy between Miller's estimations and Cowan's. The disparity between Miller's and Cowan's observed limits may be smaller than Cowan argues. Our “chunking-through-averaging” premise may provide a simpler, and perhaps unifying, position in this debate. If chunking 4 items has the same probability as 5 items, and chunking 6 items is equivalent to chunking 7 items, one may find that the ‘magic number’ constitutes one cumulative probability degree (say, 4-or-5 items) plus or minus one (6-or-7 items).
A mainstream interpretation of the above phenomenon may be that, as with any model, SDM is a simplification: an idealized approximation of a presumed reality. Thus, one may see it as insufficiently complete to accurately replicate the details of true biological function due to, among other phenomena, inherent noise and spiking neural activity. In this case, one would interpret it as a weakness, or an inaccuracy inherent to the model. An alternative view, however improbable, may be that the model is accurate in this particular aspect, in which case the assumption minimizes the current controversy between Miller's estimations and Cowan's.
The success factors computed above show that for either 4 or 5 items we have p = .3125, while for 6 or 7 items we have p = .34375. If we assume an intermediary value of p = .328125—which is reasonable, due to noise or lack of synchronicity in neural processing—the controversy vanishes. We chose to base our experiments on this mean value (p = .328125), and the results herein may be adapted to other estimates as additional experiments settle the debate.
Moreover, a chunk tends to be closer to the chunked items than these items are between themselves. For example, with n = 212 and k = 5, the Hamming distance between a chunk ā and one of its items a_i is drawn from a distribution with mean np and standard deviation √(np(1−p)); here, from the point of view of the chunked item a_i, the closest 1% of the space lies at 53 bits, while 99% of the space lies at 84 bits. Contrast this with the distances between any two random, orthogonal, items, which are drawn from a distribution with mean n/2 = 106 and standard deviation √n/2 ≈ 7.3: from the point of view of a random item, the closest 1% of the space lies at 89 bits, while 99% of the space lies at 122. This disparity reflects the principles of orthogonality between random concepts and of close paths between concepts (or small worlds [15] ): the distance between 2 items from any 5 is large, but the distance to the average of the set is small. Of course, as k grows, the distance to ā also grows (since the success factor p approaches 1/2), and items become irretrievable. One thing is clear: with 5 chunked items, the chance of retrieving a false positive is minute.
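This contrast can be simulated directly. The sketch below uses n = 212 and k = 5 as in the text; the printed figures are empirical averages from the simulation, not the paper's exact numbers.

```python
# Sketch of the contrast above, for n = 212 and k = 5: the distance from
# a chunk to its own items is far smaller than the distance between
# unrelated random items. Figures are empirical averages.
import random
import statistics

rng = random.Random(1)
n, k = 212, 5

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def chunk(items):
    # k is odd here, so the per-bit majority vote never ties
    return tuple(1 if 2 * sum(col) > len(items) else 0 for col in zip(*items))

to_chunk, between_random = [], []
for _ in range(400):
    items = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(k)]
    c = chunk(items)
    to_chunk.extend(hamming(c, it) for it in items)
    between_random.append(hamming(items[0], items[1]))

print(statistics.mean(to_chunk))        # around 212 * 0.3125 = 66 bits
print(statistics.mean(between_random))  # around 212 / 2 = 106 bits
```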
Finally, the assumption of chunking through averaging is empirically testable. Psychological experiments concerning the difference in ability to retain items could test this postulate. The assumption predicts that (4, 5) items, or more generally (2k, 2k+1) items for integer k, will be registered with equal probability. It also predicts how the probability of retaining items should drop as the number of presented items grows. This is counterintuitive and can be measured experimentally. Note, however, two qualifications: first, as chunks are hierarchically organized, these effects may be hard to perceive in experimental settings. One would have to devise an experimental setting with assurances that only chunks from the same level are retrievable–neither combinations of such chunks, nor combinations of their constituent parts. The final qualification is that, as k grows, the aforementioned probability difference tends to zero. Because of the conjunction of these qualifications, this effect would be hard to perceive in normal human behavior.
Concluding remarks
Numerous cognitive scientists model the limits of human short-term memory through explicit “pointers” or “slots”. In this paper we have considered the consequences of a short-term memory limit given the mechanisms of i) Kanerva's Sparse Distributed Memory, and ii) chunking through averaging. Given an appropriate choice for the number of dimensions of the binary space, we are able to model chunks that limit active memory's storage capacity, while allowing the theoretically endless recursive association of pre-registered memory items at different levels of abstraction (i.e., chunks may be chunked with other chunks or items, indiscriminately [1] , [21] ). This has been pointed out in [22] ; here, however, we use the short-term memory limitations as a bounding factor to compute plausible ranges for n.
Some observations are noteworthy. First, our work provides plausible bounds on the number of dimensions n of a SDM—we make no claims concerning Kanerva's recent work (e.g., [14] ). Given our postulates, it seems that 100 dimensions is too low a number, and 1000 dimensions too high. In our computations, assuming the intermediary success factor, variance of the number of items retained (as a function of the number of presented items, given at least one retrievable item) was minimized at 212 dimensions. This value was chosen as our optimal point of focus for it provided stable, psychologically plausible behavior for a wide range of set sizes. We have concentrated on the SDM and chunking through averaging postulates, yet future research could also look at alternative neural models; for it is certain that the brain does not use explicit slots or pointers when items are chunked. One can reasonably argue: what good can come from replacing one magic number with another? There are two potential benefits: first, by fixing the parameter n, we can restrict the design space of SDM simulations and ensure that a psychologically plausible number of items is chunked. Another advantage is theoretical: the number 212 suggests that we should look for neurons that seem to have, or respond by majority to, such a number of active inputs in their linear threshold function.
Of course, a single 212-bit vector in SDM does not encode meaningful content at all. The existence of a bitstring can only be meaningful in relation to other bitstrings close to it. Consider, for instance, an A4 sheet of paper, of size 210mm×297mm (8.3in×11.7in). A 1200×1200 dots-per-inch printer holds fewer than 2^28 potential dots in an entire sheet. While the space of possible black-and-white printed A4 sheets is a very large set of possible pages, the vast majority of them, rather like the library of Babel, are composed of utter gibberish. Any single dot needs only 28 bits to be described, and because the dots usually cluster into strokes, chunks can be formed. Moreover, because strokes cluster to form fonts, which cluster to form words, which cluster to form phrases and paragraphs, combinations of large sets of 212-dimensional bitstrings can encode the meaningful content of pages and books—provided those items have been previously chunked in the reader's mind. Without chunks there can be no meaning; this paragraph, translated to Yanomami (assuming that's possible), would become unreadable to its intended audience and to its authors.
Sparse Distributed Memory holds a number of biologically and psychologically plausible characteristics. It is associative, allowing for accurate retrieval given vague or incomplete information (which is relevant given the potential for asynchronous behavior [23] ); it is readily computable by neurons; it seems suitable for storage and retrieval of low-level sensorimotor information [24] ; it is a plausible model of the space of human concepts; and it exhibits a phenomenon strikingly similar to the tip-of-the-tongue situation. With the results presented herein, sparse distributed memory also reflects the natural limits of human short-term memory.
Conceived and designed the experiments: AL DMC CNA. Performed the experiments: AL DMC CNA. Analyzed the data: AL DMC. Wrote the paper: AL DMC CNA.
Human memory is limited in the number of items held in one's mind—a limit known as “Miller's magic number”. We study the emergence of such limits as a result of the statistics of large bitvectors used to represent items in memory, given two postulates: i) the Sparse Distributed Memory; and ii) chunking through averaging. Potential implications for theoretical neuroscience are discussed.

Analysis
Computing the Hamming distance from a chunk to its items
Let A = {a_1, …, a_k} be the set of k bitstrings to be chunked into a new bitstring, ā. The first task is to find out how the Hamming distance is distributed between this averaged bitstring and the set of bitstrings being chunked. This is, as discussed, accomplished through majority rule at each bit position. Imagine that, for each separate dimension, a supreme court will cast a decision with each judge choosing yes (1) or no (0). If there is an even number of judges, a fair coin will be flipped in the case of a tie. Given that k votes are cast, how many of these votes will fall on the minority side? (Each minority-side vote adds to the Hamming distance between an item a_i and the average ā.)
Note that the minimum possible number of minority votes is one, and that it may occur with either 3 votes cast or two votes and a coin flip. If there are two minority votes, they may stem from either 5 votes or 4 votes and a coin flip, and so forth. We thus have that, for v votes, the maximum minority number is given by m = ⌊v/2⌋ (and the ambiguities between an odd number of votes versus an even number of votes plus a coin flip are resolved by considering 2m + 1 total votes). This leads to 2m + 1 independent Bernoulli trials with success factor 1/2, and the constraint that the minority view differs from the majority bit vote. Let X be a random variable with the number of minority votes. Obviously, in this case, X ≤ m; hence we have, for k items, the following cumulative distribution function of minority votes [19]: P(X ≤ x) = Σ_{j=0}^{x} 2·C(2m+1, j)/2^(2m+1).
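The minority-vote distribution can be checked by brute-force enumeration. The closed form used below, P(X = j) = 2·C(2m+1, j)/2^(2m+1) for j ≤ m, is our reading of this construction, not code from the paper:

```python
# Brute-force check of the minority-vote distribution: for 2m+1 fair
# votes, the minority count X has P(X = j) = 2*C(2m+1, j) / 2^(2m+1)
# for j <= m (our reconstruction of the construction in the text).
from itertools import product
from math import comb

m = 3                 # 2m + 1 = 7 votes, as when chunking 7 items
v = 2 * m + 1
counts = {}
for votes in product((0, 1), repeat=v):
    x = min(sum(votes), v - sum(votes))   # votes on the minority side
    counts[x] = counts.get(x, 0) + 1

for j in range(m + 1):
    print(j, counts[j], 2 * comb(v, j))   # enumeration matches closed form

mean_minority = sum(x * c for x, c in counts.items()) / 2 ** v
print(mean_minority / v)  # -> 0.34375, the per-bit success factor for 7 items
```

The expected minority fraction per vote reproduces the .34375 success factor quoted later for 6 or 7 items.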
While we can now, given v votes, compute the distribution of minority votes, the objective is not to understand the behavior of these minority bits in isolation , i.e., per dimension of the chunking process. We want to compute the number of dimensions n needed to (in a psychologically and neurologically plausible way) store and retrieve around ψ items—Miller's number of retrievable elements—through an averaging operation. Hence we need to compute the following:
Given a number of dimensions n and a set A of k items, the probability density function of the Hamming distance from ā to the chunked elements a_i,
A threshold t: a distance, in bits, such that, if an element a_i's Hamming distance to ā is farther than that point, then a_i cannot be retrieved,
As k grows, how many elements remain retrievable?
Given bitstrings with dimension n, suppose k elements have been chunked, generating a new bitstring ā. Let d_i be the Hamming distance from the chunk ā to a_i, the i-th element of A. What is the distance from ā to elements of A? Here we are led to n Bernoulli trials with success factor p. Since n is large, the distribution of d_i can be approximated by a Normal distribution, and we may use μ = np and σ = √(np(1−p)). To model human short-term memory's limitations, we want to compute a cutoff threshold t which will guarantee retrieval of around ψ items averaged in ā and “forget” an item a_i if d_i > t—where ψ is Miller's limiting number. Hence, to guarantee retrieval of around 95% (2σ) of items, we have t = np + 2√(np(1−p)), where p is the success factor corresponding to ψ. Note that Cowan [20] has argued for a “magic number” estimate of around 4 items—and the exact cognitive limit is still a matter of debate. The success factor for 4 (or 5) elements is p = .3125; and for 6 (or 7) elements it is p = .34375. By fixing the success factor at plausible values of ψ (at {4,5}, or at an intermediary value between {4,5} and {6,7}, or at {6,7}), different threshold values are obtained for varying n, as shown in Table 1 . In the remainder of this study, we use the intermediary success factor (p = .328125) for our computations; again without loss of generality between different estimates of ψ.
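The success factors and the threshold rule as we read it (mean plus two standard deviations) can be computed directly. The function below is a reconstruction, not the paper's code; even k is resolved by treating the coin flip as one extra vote:

```python
# Reconstruction (an assumption, not verbatim from the paper) of the
# success factors and the threshold t = n*p + 2*sqrt(n*p*(1-p)).
from math import comb, sqrt

def success_factor(k):
    """Probability that a given bit of an item lands on the minority side
    of the chunk vote over k items (even k treated as k+1 votes)."""
    v = k + 1 if k % 2 == 0 else k        # effective odd vote count 2m+1
    m = (v - 1) // 2
    # a vote is in the minority iff at least m+1 of the other 2m oppose it
    return sum(comb(2 * m, j) for j in range(m + 1, 2 * m + 1)) / 2 ** (2 * m)

for k in (4, 5, 6, 7):
    print(k, success_factor(k))   # -> 0.3125 for {4, 5}, 0.34375 for {6, 7}

p = (success_factor(5) + success_factor(7)) / 2   # intermediary 0.328125
n = 212
t = n * p + 2 * sqrt(n * p * (1 - p))
print(round(t, 1))  # threshold in bits for n = 212
```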
We thus have a number of plausible thresholds and dimensions. We can now proceed to compute the plausibility range: despite the implicit suggestion in Table 1 that any number of dimensions might be plausible, how does the behavior of these combinations vary as a function of the number of presented elements, k?
Varying the number of presented items
Consider the case of information overload, when one is presented with a large set of items. Suppose one were faced with dozens, or hundreds, of distinct items. It is not psychologically plausible that a large number of elements should be retrievable. For an item to be impossible to retrieve, the distance between the averaged item ā and a_i must be higher than the threshold t corresponding to the chosen n. When we have an increasingly large set of presented items, there will be information loss in the chunking mechanism, but it should still be possible to retrieve some elements within plausible psychological bounds.
Figure 1(a) shows the behavior of three representative sizes of n: 100, 212 and 1000 dimensions. (100 and 1000 were chosen because these are described in Kanerva's original examples of SDM.) The size n = 212 has shown to be the most plausible number of dimensions, preserving a psychologically plausible number of items after presentations of different set sizes. It is clear that n = 100 quickly diverges, retaining a high number of items in a chunk (as the number of presented items grows). Conversely, if n = 1000, the number of preserved memory items rapidly drops to zero, and the postulated mechanisms are unable to retrieve any items at all—a psychologically implausible development. Figure 1(b) zooms in to illustrate behavior over a narrower range of n-values and a wider range of presented items. Varying the number of presented items and computing the number of preserved items (for a number of representative dimensions) yields informative results. Based on our premises, experiments show that to appropriately reflect the storage capacity limits exhibited by humans, certain ranges of n must be discarded. With too small a number of dimensions, the model will retrieve too many items in a chunk. With too large a number of dimensions, the model will retrieve at most one or two—perhaps no items at all. This is because of the higher number of standard deviations involved in the dimension sizes: for n = 100, the whole space spans 20 standard deviations, and the threshold t is less than 2 standard deviations below the mean—which explains why an ever growing number of items is “retrieved” (i.e., high probability of false positives). For n = 1000, the space spans over 63 standard deviations, and the threshold, t ≈ 358, is around 8.99 standard deviations below the mean. There is such a minute part of the space below t that item retrieval is virtually impossible.
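A small simulation in the spirit of this experiment can be sketched as follows. It uses our reconstructed threshold t(n) = np + 2√(np(1−p)) at the intermediary success factor p = .328125; the retained counts are empirical averages, not the paper's figures.

```python
# Overload sketch: chunk k presented items and count how many remain
# within the fixed retrieval threshold t(n). Uses our reconstructed
# threshold at the intermediary success factor p = 0.328125.
import random
from math import sqrt

rng = random.Random(3)
P_STAR = 0.328125                      # intermediary success factor

def threshold(n):
    return n * P_STAR + 2 * sqrt(n * P_STAR * (1 - P_STAR))

def chunk(items):
    k = len(items)
    return tuple(1 if 2 * sum(col) > k else
                 0 if 2 * sum(col) < k else rng.randrange(2)
                 for col in zip(*items))

def retained(n, k, trials=40):
    """Average number of the k presented items lying within t(n) of the chunk."""
    t, total = threshold(n), 0
    for _ in range(trials):
        items = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(k)]
        c = chunk(items)
        total += sum(sum(a != b for a, b in zip(c, it)) <= t for it in items)
    return total / trials

for n in (100, 212, 1000):
    # n = 100 keeps retaining more as k grows; n = 1000 drops toward zero;
    # n = 212 stays within a psychologically plausible band.
    print(n, [round(retained(n, k), 1) for k in (5, 25, 75)])
```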
With an intermediary success factor between p = .3125 and p = .34375, established by the cognitive limits 4 and 7, we have computed the number of dimensions of a SDM as lying in the vicinity of 212 dimensions. Variance is minimized when n = 212—and retrieval results hold psychologically plausible ranges even when hundreds of items are presented (i.e., the SDM would be able to retrieve from a chunk no more than nine items and at least one or two, regardless of how many items are presented simultaneously). Finally, given that this work rests upon the chunking through averaging postulate, in the next section we will argue that the postulated mechanism is not only plausible, but also empirically testable.

The authors would like to thank Eric Nichols for numerous valuable comments.

PLoS One. 2011 Jan 5; 6(1):e15592 (CC BY)
PMC3016409 | 21143908

Background
Genetic studies are challenged with identifying and characterizing the underlying genetic etiology of common, complex human diseases. Recently, genome-wide association studies (GWAS) have contributed an abundance of well-replicated findings that have identified regions of the genome likely to harbor disease genes (see [1]). The current limitation is the ability to move from these initial association signals to identification of the underlying critical variants. Analytical approaches that consider haplotypes will be useful to guide the mapping of underlying variants, in particular rare variants [2,3]. Furthermore, multi-center collaborative efforts and use of resources enriched for genetic disease will be helpful in the effort to identify underlying variants. Potentially powerful family-based resources already exist for many diseases, such as those previously ascertained for linkage studies. The ability to utilize these family-based resources for haplotype association studies and to combine family-based and singleton resources for joint analyses would be extremely valuable. Such analyses, however, present complex statistical challenges, such as haplotype inference and accounting for phase uncertainty in family data and the identification of appropriate statistics. Here we present a Monte-Carlo method, hapMC, designed to perform valid haplotype association analyses in mixed resources.
A significant issue for the use of haplotypes in association analyses is the estimation of phase conditional on the observed genotype data. In population-based data of independent individuals, haplotype frequencies can be estimated using Bayesian methods [4] or expectation-maximization (EM) approaches [5,6]. Such methods are well established and are scalable to thousands of markers with thousands of subjects. For family-based data, extensive work has been done to properly estimate haplotypes (see [7,8]), but there are still considerable limitations with regard to missing data and large pedigrees. Linkage analysis software, such as Genehunter [9], Merlin [10] and SIMWALK2 [11], can phase SNP data in pedigrees; however, these programs either require that markers are in linkage equilibrium or cluster the markers for analysis, conditions not suited to the situation we are interested in here. Thomas (2007) [12] developed a Markov chain Monte Carlo (MCMC) linkage method that considers markers in linkage disequilibrium (LD) in general pedigrees. However, the method remains impractical for large pedigrees due to mixing problems and high computational burden. Other efforts have focused on phasing tightly-linked markers in small nuclear families and moderate-sized general pedigrees [13-17]. By focusing on markers within minimum- or zero-recombinant regions these methods reduce the complexity of the haplotype reconstruction problem. Additional reductions in the haplotype configuration space are made by minimizing haplotype ambiguities and missing data with rules based on Mendelian inheritance [14,15,18,19] and genotype elimination [20,21]. Concentration on regions with minimal recombination is reasonable to address specific regions, such as candidate genes or follow-up regions identified from GWAS. However, the attention to only small to moderate pedigrees remains restrictive.
Thus far, no method has integrated haplotyping strategies for SNPs in LD for larger pedigree structures with missing data.
Given any chosen method for estimating haplotypes, the uncertainty from this estimation must be accounted for at the analysis stage. For independent individuals in a classical case-control design, a likelihood approach is the usual solution, which allows consideration of all possible haplotype pairs for each individual, each weighted by the appropriate probability. For small families and transmission-disequilibrium statistics this also has been dealt with in a variety of valid ways [ 22 - 26 ]. Three published approaches have attempted to extend haplotype association analyses to large pedigrees and allow for combination of pedigrees and singleton data. The first approach uses a weighting scheme to account for correlation between related cases. It is limited in its requirement for independent controls when using pedigree cases and only conducts a global haplotype likelihood ratio test [ 17 ]. The second approach is an extension of a full likelihood approach for combining nuclear family and singleton data [ 27 ]. The extension to general pedigrees is by splitting these into nuclear family components and treating these components as-if independent, which can lead to invalid tests. The third is a Monte Carlo (MC) approach, proposed for single marker analyses, but with only limited options for haplotype analyses [ 28 ]. In this MC method, to perform haplotype analyses, population haplotype frequencies and phase-known observed data must be provided by the user. Haplotype inference programs can be used to provide population haplotype frequencies, because, even with related individuals, point estimates of the haplotype frequencies are unbiased for zero-recombinant regions [ 29 ]. Phase-known observed data, however, is not a realistic condition, and treating estimated haplotypes as-if phase-known is not valid [ 30 - 32 ].
Certainly, MC methods can be valid for association testing provided the MC procedure is performed appropriately [ 33 , 34 ]; however, a more sophisticated approach is required than for single-marker analyses.
Beyond haplotype inference and uncertainty, to perform association in pedigrees attention must be paid to the controls utilized in the family data. In particular, the parents of affected offspring are intuitively not ideal for explicit use as controls because they must share exactly one allele with the affected individual. Previously, "pseudocontrols" have been suggested for family data where parents are available [ 22 , 23 , 35 - 37 ]. Pseudocontrols for an affected offspring can be generated from the parental alleles or haplotypes not transmitted to the affected offspring. Methods have been developed to generate up to three pseudocontrols per case to perform a matched case/control analysis and provide statistics robust to population stratification [ 22 , 23 ]. The use of pseudocontrols may offer more power for classical association tests in family-based resources. Data for pseudocontrols can be used in the usual association statistics, thus also providing an easy way to combine association evidence across family and case-control data - an important consideration for joint analyses in mixed resource structures.
Here we introduce an MC approach for haplotype association analyses, hapMC, which allows for valid analyses in large pedigrees and resources of mixed structure. Our method incorporates a general EM phasing method that estimates phase considering pedigree structure for a set of tightly linked markers in a non-recombinant region. Our phasing algorithm builds upon previous methods by providing a pedigree-splitting preprocessing step, a set of simplified rules optimized for SNP markers [ 14 , 15 , 18 , 19 ], and an integrated genotype elimination procedure in haplotype configuration construction. Valid haplotype association testing is achieved using an appropriate MC procedure, and includes full-length and sub-haplotype analyses, allowing for imputation of missing data based on the complete marker set. This new approach also allows for the use of either explicit or pseudocontrols in family data. HapMC is implemented in a Java software package, which is incorporated as a module in the freely available Genie software suite ( http://www-genepi.med.utah.edu/Genie/hapMCDetail.html ). | Methods
Haplotype phasing
We have implemented a general haplotype phasing algorithm designed to estimate population haplotype frequencies as well as determine maximum likelihood estimate (MLE) haplotype pairs for a set of tightly linked markers in general pedigrees. It can also be used for singleton data (independent cases and/or controls). The method involves three parts: (1) data preprocessing; (2) identification of all possible haplotype configurations in pedigrees; and (3) an EM algorithm across the haplotype configuration state space to estimate haplotype frequencies and MLE haplotype pairs.
Part 1: data preprocessing
Although the algorithm is general to pedigree structure, the missing data inherent in large pedigrees may make it intractable to consider all haplotype configurations for the total structure. To address this we have developed a preprocessing algorithm to determine the sub-structures within large pedigrees to retain for phasing. Our algorithm selects these substructures to maintain tractability. The algorithm works by determining all the nuclear families within the full pedigree. For each nuclear family, if both parents and at least one offspring have sufficient genotype data (a user-defined parameter) then the nuclear family unit is maintained for the phasing process. After iterating through all nuclear families, those selected for retention are connected back together if overlapping individuals exist. Individuals that are part of non-retained nuclear families are considered as independent individuals, or alternatively are removed from the analysis if they do not have sufficient genotype data. This process is designed to remove pedigree structure that will lead to a prohibitively large haplotype configuration space for the EM algorithm, while also maintaining as much pedigree structure as possible. Only the substructures identified are integrated into the estimation of the MLE haplotypes and haplotype frequencies. However, the full pedigree structure is always maintained for the statistical analysis in the MC procedure, such that the correction for all known relationships is maintained in the analyses.
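The logic of this splitting step can be illustrated with a minimal Python sketch (hapMC itself is implemented in Java; all names here, including the `min_fraction` sufficiency threshold, are hypothetical). Nuclear families in which both parents and at least one offspring meet the genotyping threshold are retained, retained families sharing individuals are reconnected, and remaining sufficiently genotyped individuals are treated as independents:

```python
from collections import defaultdict

def split_pedigree(nuclear_families, genotyped_fraction, min_fraction=0.5):
    """Select nuclear-family substructures to retain for phasing.

    nuclear_families: list of (father, mother, [offspring]) tuples.
    genotyped_fraction: dict mapping individual id -> fraction of loci typed.
    min_fraction: user-defined sufficiency threshold (assumed name).
    """
    def sufficient(iid):
        return genotyped_fraction.get(iid, 0.0) >= min_fraction

    # Keep families where both parents and >=1 offspring are well genotyped.
    retained = [fam for fam in nuclear_families
                if sufficient(fam[0]) and sufficient(fam[1])
                and any(sufficient(o) for o in fam[2])]

    # Reconnect retained families sharing individuals (union-find).
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        if parent[x] != x:
            parent[x] = find(parent[x])
        return parent[x]
    def union(a, b):
        parent[find(a)] = find(b)

    for f, m, kids in retained:
        for member in [m, *kids]:
            union(f, member)

    components = defaultdict(list)
    for fam in retained:
        components[find(fam[0])].append(fam)

    # Members of non-retained families become independents (or are dropped
    # entirely if insufficiently genotyped).
    retained_ids = {i for f, m, kids in retained for i in (f, m, *kids)}
    all_ids = {i for f, m, kids in nuclear_families for i in (f, m, *kids)}
    independents = [i for i in all_ids - retained_ids if sufficient(i)]
    return list(components.values()), independents
```

In this sketch, two retained nuclear families linked by a shared individual end up in one phasing substructure, mirroring the reconnection step described above.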
Also involved in the data preprocessing is an iterative process whereby a series of rules are used to reduce phase ambiguities and missing data across all markers within each pedigree. We assume a zero-recombinant autosomal region and mutation- and error-free SNP data. All parent-offspring trios that were maintained for phasing are considered with these rules, which are repeated iteratively until no more updates can occur. The four steps involved in this part of the preprocessing are detailed below. In step 1, genotypes are loaded into variables efficient for updating. Steps 2 and 3 are designed to use known homozygous genotypes to resolve both parent and child unphased and missing locus positions. Step 4 considers the parent-offspring trios to further reduce phase ambiguities based on basic rules of inheritance and transmission. Step 1 is performed once per individual, steps 2 and 3 are performed once per parent-offspring trio, and step 4 is repeated until no further updates can be completed.
Step 1 - Load haplotype variables
This step reads each individual's genotype data into six variables that are used to fully define and store the genotype data. Each of these variables is an n -length string of '0' and '1' values, or bits, with the i th position in the string storing information for the i th SNP locus. The value '1' indicates that a condition is satisfied, '0' that it is not. The first two variables indicate the 'heterozygous' and 'homozygous' status of each locus. The third is the 'unphased' variable that indicates whether the data at locus i remains unphased (1) or has been phased (0). These three variables apply to a haplotype pair and thus there is only one of each of these defined per individual. The remaining three variables are haplotype specific and hence two of each are defined per individual, one for each haplotype. The fourth 'set' variable indicates whether the allele at locus i has been assigned. The fifth 'missing' variable indicates whether the allele at locus i is missing. The sixth 'value' variable indicates whether the allele at locus i is the minor allele.
Storing genotype data in this way allows for comparisons between individuals and updates to be performed quickly using bitwise operations that can consider the full set of loci simultaneously, rather than iterating through each of the n loci separately, and is therefore computationally efficient. Table 4 shows an example of how genotype data are loaded into these variables.
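The idea behind Table 4 can be sketched in Python (an illustration only; hapMC is implemented in Java and the exact encoding may differ). Each variable is an integer used as a bit string, with bit i holding locus i:

```python
def load_haplotype_variables(genotypes):
    """Encode one individual's genotypes into the six bit-string variables
    of Step 1 (a sketch; alleles are 0 = major, 1 = minor, None = missing).

    genotypes: list of (a1, a2) allele pairs, one per locus.
    Bit i of each returned integer corresponds to locus i.
    """
    het = hom = unphased = 0
    set1 = set2 = miss1 = miss2 = val1 = val2 = 0
    for i, (a1, a2) in enumerate(genotypes):
        bit = 1 << i
        if a1 is None or a2 is None:
            miss1 |= bit; miss2 |= bit      # allele unknown on both haplotypes
        elif a1 == a2:
            hom |= bit                      # homozygous: phase trivially known
            set1 |= bit; set2 |= bit
            if a1 == 1:
                val1 |= bit; val2 |= bit    # minor allele on both haplotypes
        else:
            het |= bit
            unphased |= bit                 # heterozygous: phase still ambiguous
    return {"het": het, "hom": hom, "unphased": unphased,
            "set": (set1, set2), "missing": (miss1, miss2),
            "value": (val1, val2)}
```

Because the variables are plain integers, the later update steps can operate on all loci at once with single bitwise operations, which is the efficiency argument made above.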
The six variables hold all the pertinent pieces of information for phasing. Clearly, if the variables 'missing', 'unphased', and 'set' are 000..0, 000..0 and 111..1, respectively, then the haplotype has been completely specified at all loci and is unambiguously determined. In step 1 we initialize the variables simply based on the individual's own data, and then in steps 2-4 we use any parent-offspring relationships to remove as much ambiguity as possible. That is, the variables are updated towards the fully unambiguous state. Of course, many positions will remain ambiguous after this process. These become the positions that are iterated through to identify all possible haplotype configurations in Part 2, which subsequently defines the state space for the EM algorithm in Part 3. To help the updating process we additionally define inheritance and transmission single-bit variables. These variables indicate which haplotypes are shared by the parent-offspring pair in the current variable states. This provides the basis for determining which haplotypes to transfer information between during updates. For each parent-offspring pair, one transmission and two inheritance (one for each offspring haplotype) variables are defined. The transmission variable indicates which parental haplotype is shared with the offspring (value = 0 if haplotype 1 is transmitted and shared, 1 if haplotype 2 is transmitted). An inheritance variable indicates whether an offspring haplotype is shared with the father or the mother (value = 0 if haplotype is inherited from and shared with father, 1 if inherited from mother). Along with the six genotype variables, the transmission and inheritance variables are updated and reassessed as the haplotype states change in steps 2-4.
Step 2 - Parent-to-offspring homozygous updates
With this rule, parental homozygous loci are used to resolve phase ambiguities in the offspring's haplotype. If an offspring has not yet been updated from a prior parent-offspring update (either as a parent or an offspring), then either of the offspring's haplotypes can be chosen to be updated. If the offspring was previously updated in the parent-to-offspring pair involving the other parent, then the offspring's inheritance variables will be assigned. If the offspring was previously updated as a parent, its inheritance variable will not be assigned but the haplotype configurations will be uniquely defined. In this instance, the inheritance rule is applied (see Table 5 ) to determine which haplotype to update. If the inheritance rule is inconclusive no update is made.
The update involves establishing if any loci in the chosen offspring haplotype are missing or unphased where the parent is homozygous. If so, these positions are updated in the offspring haplotype variables (set, unphased, missing, and value variables) using logical bitwise operations. Figure 1 illustrates the logical bitwise operations that take place in a parent-to-offspring homozygous update. Once the update is made to the variables for the chosen haplotype, the inheritance variables for both offspring haplotypes (for the specific parent-offspring pair) are assigned.
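Using the Step 1 bit variables, this update reduces to a handful of bitwise operations. The Python sketch below (illustrative only; the actual operations in Figure 1 may differ in detail, and 'set'/'missing'/'value' are stored here as two-element lists, one per haplotype) resolves missing or unphased offspring loci wherever the parent is homozygous, and at heterozygous loci also phases the complementary allele onto the other offspring haplotype:

```python
def parent_to_offspring_update(parent, child, hap):
    """Sketch of the Step 2 update for the chosen offspring haplotype."""
    # Loci where the parent is homozygous and the chosen offspring
    # haplotype is still missing or the locus is unphased.
    target = parent["hom"] & (child["missing"][hap] | child["unphased"])
    if not target:
        return False                     # nothing to update
    pval = parent["value"][0]            # parent homozygous: haplotypes agree
    child["set"][hap] |= target          # these alleles are now assigned
    child["missing"][hap] &= ~target     # no longer missing
    child["value"][hap] = (child["value"][hap] & ~target) | (pval & target)
    # A heterozygous offspring locus is now phased: the other haplotype
    # must carry the complementary allele.
    het_done = target & child["het"]
    other = 1 - hap
    child["set"][other] |= het_done
    child["value"][other] = (child["value"][other] & ~het_done) | (~pval & het_done)
    child["unphased"] &= ~het_done
    return True
```

For example, a parent homozygous for the minor allele at a locus where the offspring is heterozygous determines both offspring alleles in one operation, while an offspring locus missing on both haplotypes is resolved only on the transmitted haplotype.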
Step 3 - Offspring-to-parent homozygous updates
With this rule, the offspring homozygous variable is used to resolve ambiguities in the parent's haplotypes. Similarly to step 2, for each parent-offspring pair, a parental haplotype is selected for update. This is determined by checking if the parent has been updated from a prior offspring-to-parent update (either as a parent or offspring). If the parent has not been updated yet, either haplotype is chosen to be updated. If the parent has been considered as an offspring in an offspring-to-parent pair, then the transmission will not be assigned but the haplotype configurations will be uniquely defined. In this instance, the transmission rule is applied (see Table 5 ) to determine which haplotype to update. If the transmission rule is inconclusive no update is made.
The update involves establishing if any loci in the chosen parent haplotype are missing or unphased where the offspring is homozygous. If so, these positions are updated in the parent haplotype variables (set, unphased, missing, value variables) using logical operations. Once the update is made to the parent haplotype, the transmission variables for this parent to the specific offspring are assigned.
Step 4 - Reduce haplotype phase ambiguities
To further reduce haplotype phase ambiguities, we have implemented and built upon the concepts outlined in rules 3-13 in Zhang et al. [ 19 ]. These rules consider parent-offspring trios and work by iteratively updating the inheritance and transmission states between the offspring and each parent, which allows ambiguity reduction between offspring and parent haplotypes. The procedure starts by first attempting to resolve the unknown transmissions from both parents to an offspring, and then resolving unknown offspring inheritance states. In our implementation, we have reordered rules 3 and 4, and 5 and 6 from Zhang et al. [ 19 ] so that known transmissions from both parents can be used to help determine inheritance. That is, if one parent's transmitted haplotype is known and one of the offspring haplotypes is known to not be equal to it, then transmission can be established and the offspring inheritance variables can be assigned. The remaining rules (7-13) are implemented as previously described [ 19 ]. Rule 7 is applied when the shared haplotype between a parent and offspring is known and either copy of this haplotype can be used to update any ambiguities in the other. Rules 8 and 9 are applied when either the inheritance or the transmission for a parent and offspring is known, but not both. When neither inheritance nor transmission is known, then rules 10 and 11 are applied. Lastly, rule 12 is used to reset homozygous positions that were altered from the previous rules, and rule 13 sets the phase of one heterozygous locus if all other set alleles are homozygous. As for steps 2 and 3, all rules are implemented as logical operations.
Part 2: Identification of all possible haplotype configurations
After Part 1 is complete, the variables may still contain unknown or unphased positions. Expansion to all possible values for these positions will generate all possible haplotype pairs for an individual. As the possible haplotypes are enumerated for each offspring in a nuclear family, the nuclear family configurations are established. Rather than creating a separate step for the haplotype elimination process [ 20 , 21 ], we have integrated this directly into our procedure for creating the configurations. This is done by iterating through each offspring in the nuclear family. For the first offspring, all the haplotype possibilities for this offspring that are compatible with the parent haplotypes are used to create a nuclear family haplotype configuration. For the remaining offspring, we iterate through the haplotype possibilities and add compatible configurations or discard incompatible configurations. We start from the offspring with the minimum number of haplotype possibilities, which limits the number of possible configurations created and stored. After creating all nuclear family configurations, the full haplotype configurations are assembled for the pedigree substructures chosen in our preprocessing step (that is, structures containing multiple connected nuclear families). This step works by matching together all the nuclear family configurations through the linking individuals.
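The construction-with-elimination idea can be sketched as follows (a hypothetical Python illustration; haplotypes are shown as strings, and the per-individual candidate pairs are assumed to come from the expansion described above):

```python
from itertools import product

def family_configurations(father_pairs, mother_pairs, offspring_pairs):
    """Enumerate Mendelian-consistent haplotype configurations for a
    nuclear family, discarding incompatible configurations as they are
    built rather than in a separate elimination pass.

    *_pairs: candidate (hap1, hap2) tuples per individual;
    offspring_pairs is a list of such lists, one per child.
    """
    def compatible(child, f, m):
        c1, c2 = child
        return (c1 in f and c2 in m) or (c2 in f and c1 in m)

    # Start from the offspring with the fewest possibilities to keep
    # the number of stored partial configurations small.
    order = sorted(range(len(offspring_pairs)),
                   key=lambda i: len(offspring_pairs[i]))
    configs = []
    for f, m in product(father_pairs, mother_pairs):
        partial = [[]]
        for i in order:
            partial = [cfg + [c] for cfg in partial
                       for c in offspring_pairs[i] if compatible(c, f, m)]
            if not partial:
                break                      # this parental pair is eliminated
        for cfg in partial:
            kids = [None] * len(offspring_pairs)
            for slot, i in enumerate(order):
                kids[i] = cfg[slot]
            configs.append((f, m, tuple(kids)))
    return configs
```

Incompatible offspring possibilities are never added to a configuration, so genotype elimination falls out of the construction itself, as described in the text.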
Part 3: EM algorithm for haplotype frequency and MLE haplotype estimation
An EM algorithm is used to maximize the likelihood of the haplotype frequencies given the observed genotype data and pedigree structure under the assumption of Hardy-Weinberg equilibrium (HWE). Consider a pedigree with m members with marker phenotypes y and population haplotype frequencies H. Among the m members there are f founder individuals and d descendants, m = f + d . Each individual has a set of haplotype pairs consistent with their marker phenotype data, y i , which resolve to multiple pedigree haplotype configurations, c , as determined in Part 2. Each possible configuration contains a set of haplotype pairs (h 1 , h 2 , ... , h m ) across each individual in the pedigree. The haplotype pairs consist of a maternal and a paternal haplotype, h i = (h mi , h pi ). The likelihood for each pedigree, following the Elston-Stewart algorithm [ 44 ], is:

P(y | H) = Σ_c [ Π_{i = 1..f} P(h_i | H) × Π_{j = 1..d} P(h_j | h_m(j), h_p(j)) ],

where the sum runs over all haplotype configurations c consistent with y, and h_m(j) and h_p(j) denote the haplotypes of the mother and father of descendant j.
For the founders, the probability of the haplotype pair, P ( h f |H ), is calculated according to HWE as the product of the corresponding haplotype frequencies if the haplotypes are equal or double the product if they are unequal. For descendants, the gametic transmission probabilities, P ( h d | h mi , h pi ), are calculated based on Mendel's laws. The overall likelihood is the product across all pedigree likelihoods. The EM is an iterative process that alternates between an expectation or E-step and a maximization or M-step. The E-step estimates the probability of the haplotype configurations given the current haplotype frequency estimates. Based on the haplotype configuration estimates, the expected count for each haplotype is derived. This is done by counting the occurrences of a haplotype in a configuration and weighting this count by the probability of the haplotype configuration in the family (or individual) in which it occurs. The M-step updates the haplotype frequency estimates based on the expected haplotype counts. The iterations continue until the difference in the estimates between iterations is less than a user-defined value.
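For intuition, the E- and M-steps can be written out for the simpler case of independent individuals (a Python sketch under assumed input structure; the pedigree version additionally weights each configuration by the Mendelian transmission probabilities above):

```python
def em_haplotype_frequencies(genotype_pair_lists, n_iter=100, tol=1e-8):
    """Minimal EM for haplotype frequencies under HWE, for unrelated
    individuals (a simplified sketch of Part 3).

    genotype_pair_lists: per individual, the list of haplotype pairs
    (h1, h2) consistent with that individual's genotypes.
    """
    # Initialize with uniform frequencies over all observed haplotypes.
    haps = sorted({h for pairs in genotype_pair_lists for p in pairs for h in p})
    freq = {h: 1.0 / len(haps) for h in haps}
    for _ in range(n_iter):
        counts = {h: 0.0 for h in haps}
        for pairs in genotype_pair_lists:
            # E-step: probability of each compatible pair under HWE
            # (product of frequencies, doubled for unequal haplotypes).
            w = [freq[a] * freq[b] * (1 if a == b else 2) for a, b in pairs]
            total = sum(w)
            for (a, b), wi in zip(pairs, w):
                counts[a] += wi / total
                counts[b] += wi / total
        # M-step: update frequencies from expected haplotype counts.
        n_chrom = 2.0 * len(genotype_pair_lists)
        new = {h: c / n_chrom for h, c in counts.items()}
        converged = max(abs(new[h] - freq[h]) for h in haps) < tol
        freq = new
        if converged:
            break
    return freq
```

In a small two-locus example with one ambiguous double heterozygote, the unambiguous individuals pull the frequency mass toward the AB/ab resolution, exactly the behavior the E-step weighting is meant to produce.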
To further reduce the complexity and state space for possible haplotype configurations, we have also implemented a partition-ligation strategy in conjunction with the EM algorithm (PL-EM) [ 45 ]. The PL-EM technique works by splitting the complete marker set into smaller overlapping marker sets with a user-defined number of markers. For these smaller partition lengths, the haplotype configurations are assembled and the haplotype frequencies are estimated using the EM algorithm. When two adjacent units have been completed, they are ligated and the procedure is re-applied. The size of the haplotype configuration state space is reduced by removing haplotypes with frequencies below a set threshold within each partition. The reduction in haplotypes in each partition limits the subsequent set of possible haplotype configurations in the ligation step.
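The ligation step admits a very small sketch (hypothetical Python; haplotypes as allele strings, with the frequency-based pruning assumed to have already happened within each partition): candidates from two adjacent partitions are joined only where their alleles agree on the shared overlap loci.

```python
def ligate(left_haps, right_haps, overlap=1):
    """Join surviving candidate haplotypes from two adjacent partitions
    whose alleles agree on the 'overlap' shared loci."""
    return [l + r[overlap:] for l in left_haps for r in right_haps
            if l[-overlap:] == r[:overlap]]
```

Re-running the EM on the ligated candidates then re-estimates frequencies over the longer segment, and pruning within each partition is what keeps the joined candidate set, and hence the configuration space, tractable.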
Association Testing
HapMC has been developed to allow the user to test specific hypotheses of individual SNPs, sub-haplotypes (any subset of the full SNP set) and full-length haplotypes. The hapMC module is integrated into the Genie software package [ 28 ], allowing the use of all the test statistics provided by Genie for dichotomous and quantitative outcomes. For dichotomous outcomes, these are the classical association test statistics for risk and non-independence (odds ratio, chi-squared, and chi-squared trend), which can be tested based on haploid or diploid data. Haploid models are allele-based (or haplotype-based) tests where the unit of interest is the chromosome. Diploid models are genotype (or paired-haplotype) tests where the unit of interest is the individual. Also, the TDT, sibling-TDT and combined-TDT transmission-disequilibrium test statistics are available. Here, we have added the option to generate pseudocontrols for genotyped cases where both parents of the case are genotyped [ 35 - 37 ]. For haplotype tests, the MLE haplotype pair can be estimated by ignoring all familial relationships or using the phasing algorithm described above. The haplotype pair for a pseudocontrol is composed of the two parental haplotypes not transmitted to the genotyped case. Haplotypes for pseudocontrols are then used in the standard way with the aforementioned dichotomous case-control statistics. For quantitative outcomes, the quantitative TDT, analysis of variance, and differences in means test are available. If multiple populations are present, or a difference of effect size is suspected across the multiple data sets considered, options are available to estimate haplotypes and perform gene-drops separately for each user-identified population to avoid admixture problems. These meta statistics are available for chi-square association statistics and odds ratios, and the MC procedure is used to assess significance [ 40 ].
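As a concrete illustration of the pseudocontrol construction used here (a Python sketch; it assumes phase has been resolved and transmissions are unambiguous), the pseudocontrol pairs the two parental haplotypes not transmitted to the case:

```python
def pseudocontrol_haplotypes(case_pair, father_pair, mother_pair):
    """Return the haplotype pair for the pseudocontrol: the two parental
    haplotypes NOT transmitted to the genotyped case (sketch)."""
    transmitted = list(case_pair)
    untransmitted = []
    for h1, h2 in (father_pair, mother_pair):
        if h1 in transmitted:
            transmitted.remove(h1)     # account for this transmitted copy
            untransmitted.append(h2)   # the other copy was untransmitted
        else:
            untransmitted.append(h1)
    return tuple(untransmitted)
```

In the up-to-three pseudocontrol scheme cited above, the remaining pseudocontrols combine a transmitted haplotype from one parent with an untransmitted haplotype from the other; the fully untransmitted pair shown here is the one used with the case-control statistics in hapMC.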
Monte Carlo procedure
For each individual with sufficient genotype data (user-defined percent threshold), full-length MLE haplotype pairs are estimated as described above. Individuals with less than the specified percent threshold of genotype data are coded as completely missing. Association statistics of interest are calculated on single-SNP, sub-haplotype or full-length data drawn from these full-length MLE haplotypes. These statistics are called the observed statistics. For individuals included in the analyses but for whom missing data positions exist, these positions are imputed when the MLE haplotype pairs are estimated, and the imputed data for these positions are therefore used in full-haplotype, sub-haplotype and single-SNP analyses.
The MC procedure generates a null distribution for each statistic to empirically determine significance. The MC procedure begins by creating "null multi-locus genotypic configurations" where the genetic data are simulated consistent with Mendelian inheritance but independent of the disease status. This is performed as follows. Haplotype-pairs are assigned to founders and independent individuals based on the estimated full length haplotype frequencies from the haplotype phasing step in observed data. Full length haplotypes are assigned to pedigree descendants using gene-dropping techniques based on Mendelian inheritance [ 46 ]. Hence, these null haplotype configurations are based on the same LD structures as the observed data. However, this creates full phase-known data. To properly match the observed data situation, the missing data structure of the observed data is imposed on each simulation and the remaining genetic data is considered as phase-unknown, thus creating a null multi-locus genotype configuration. Based on this null genotype configuration, MLE haplotype pairs are estimated. The phenotype data are the same as for the observed data. The association statistics of interest are then calculated on these null data, yielding null statistics. These null statistics form the null distribution from which the significance of the observed statistic is assessed.
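The null-configuration generation can be sketched as follows (a Python illustration with hypothetical names; in hapMC this step is followed by imposing the observed missing-data pattern and re-estimating MLE haplotypes on the masked, phase-unknown data):

```python
import random

def gene_drop(pedigree, hap_freqs, rng=random):
    """Simulate one null haplotype configuration by gene-dropping:
    founders draw haplotype pairs from the estimated population
    frequencies under HWE; descendants inherit one haplotype from
    each parent under Mendelian segregation.

    pedigree: dict id -> (father_id, mother_id); (None, None) for founders.
    hap_freqs: dict haplotype -> estimated population frequency.
    """
    haps, probs = zip(*hap_freqs.items())
    assigned = {}

    def drop(iid):
        if iid in assigned:
            return assigned[iid]
        fa, mo = pedigree[iid]
        if fa is None:                    # founder: draw pair under HWE
            pair = tuple(rng.choices(haps, weights=probs, k=2))
        else:                             # descendant: Mendelian inheritance
            pair = (rng.choice(drop(fa)), rng.choice(drop(mo)))
        assigned[iid] = pair
        return pair

    for iid in pedigree:
        drop(iid)
    return assigned
```

Because founders are drawn from the frequencies estimated on the observed data, each simulated configuration preserves the observed LD structure while being independent of phenotype, which is what makes the resulting distribution a valid null.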
It should be noted that, although the Genie framework allows for the specification of non-zero recombination fractions (θ) between markers, the method we use for estimating haplotype frequencies and MLEs assumes no recombination between markers. Hence, the method is only relevant for limited genomic regions, such as small follow-up regions for GWAS (<1 Mb), candidate regions or sliding windows.
Generation of simulated data for validation testing
We simulated data sets to assess the improvements of the phasing algorithm and to illustrate the validity and potential power of haplotype association testing using hapMC, particularly for analyzing mixed structured resources. To simulate SNP data under realistic conditions, HapMap CEPH Utah data was used for allele frequencies and LD structure. We chose an 18 kb region on chromosome 2 (30,976,558-30,994,737 bp) that contained 15 tightly linked (θ = 0) SNP markers with low pairwise r 2 values (similar to that expected from a regional tagging-SNP approach).
Our family-based data sets were: (1) TRIOS: 500 case-parent trios including a total of 500 cases and 1,000 parental controls; (2) ASPs: 250 affected sib-pairs with parents including a total of 500 cases and 500 parental controls; (3) LP: large pedigrees (see Figure 2 ). The large, extended five-generation pedigrees were simulated to be high-risk (at least 14 cases were required). This structure was selected to mimic the data that would be available for large "linkage-like" pedigrees. All individuals in the top two generations were considered missing. All affected individuals (cases) were considered sampled. Regarding unaffected relatives (family controls), two LP data sets, LP1 and LP2, were generated to match the total numbers of cases and controls in the TRIO and ASP data sets. Hence, for LP1 sufficient LPs were generated to result in ~500 cases and ~1,000 controls and for LP2 the totals were ~500 cases and ~500 controls. In both LP1 and LP2, 80% of the family controls were close relatives of an affected individual (50% parents, 30% siblings) and the remaining 20% were beyond first-degree relatives. We also simulated a fourth data set: (4) an independent case-control (CC) data set comprising 500 cases and 500 controls.
Founder individuals were assigned haplotypes based on the genetic characteristics (allele frequencies and LD structure) of the selected chromosome 2 region. Next, a haplotype was selected to be the "risk haplotype" (we considered haplotype frequencies ranging from 0.04 to 0.17), which we assumed to have a haplotype r 2 of 0.8 with the underlying disease SNP (dSNP) allele. Conditional on each haplotype in the founders and an r 2 = 0.8, a dSNP allele was assigned to each founder haplotype. Descendants were then assigned haplotypes (including dSNP genotype) based on Mendelian inheritance rules using gene-dropping techniques. The dSNP genotypes were then removed. We assigned phenotypes under the null hypothesis of no association and under various alternate genetic models (see below). For the null scenario, affection status was randomly assigned to individuals independent of genetic data and based on a 5% sporadic rate for case-control and nuclear family data [ 47 ]. For null large pedigree simulations, phenotype clustering was simulated based on an alternate model, but genotypes were assigned independent of this phenotype. All alternative genetic models considered included a 5% sporadic rate and multiplicative genotypic relative risks at the dSNP ranging from 1.2 to 2.0. Simulations were repeated until a sufficient number of families of the types required were generated to form the data set. Each data set was replicated 500 times for the investigations of power and validity.
Phasing comparison
To explore the properties of the new phasing algorithm, we considered four data sets all simulated under the null: 500 independent cases and 500 independent controls (CC); 500 TRIOs (500 cases and 1000 parental controls); 250 ASPs (500 cases and 500 parental controls); and the LP1 data set with ~500 cases and ~1000 controls. We considered marker sets comprised of 5, 10 and 15 SNP loci, and for those individuals with genotype data, we considered missing SNP rates of 0%, 5%, 10%, and 15%. For each of the four data sets, we examined the time to phase the observed data and the accuracy of the MLE haplotypes compared to the known true haplotypes using our new pedigree-informed phasing algorithm and a population-based EM phasing method that ignores relationships (GCHap) [ 48 ]. For independent individuals and nuclear family structures we also performed HAPLORE [ 19 ] for comparison. The same parameters for partition length (5 loci), overlap between partitions (1 locus), haplotype frequency cutoff (1 × 10 -6 ) and haplotype buffer size (25 haplotypes beyond the cutoff) were used in our algorithm and HAPLORE. This process was repeated for five replicates to gain increased accuracy.
It is important to note that in our MC procedure we perform MLE haplotype estimation for both the real data and for each set of null data. Hence, it is imperative that the phasing step is efficient to gain reasonable run times.
Power and validity
For each simulated data set (CC, TRIO, ASP, LP) we investigated the validity of haplotype analyses using hapMC based on pedigree-naïve and pedigree-informed (new algorithm) MLEs for three statistics: the Cochran-Armitage test for trend using explicit controls, the Cochran-Armitage test for trend using pseudocontrols, and the TDT statistic [ 49 , 50 ], where applicable. Power was also assessed for the new pedigree-informed algorithm for a variety of genetic models. In addition, we illustrate the power and validity of haplotype association analysis for mixed resources of different structures consisting of each of the family data sets combined with the independent cases and controls: TRIOCC, ASPCC, LP1CC, and LP2CC.
Power and validity were estimated using 1,000 replicates. For each replicate, the specific "risk haplotype" was tested and assessed for significance using 1,000 null configurations in the MC procedure. | Results
Phasing comparison
The haplotype phasing accuracy and timing results using our pedigree-informed algorithm, the pedigree-informed algorithm HAPLORE [ 19 ] and GCHap (pedigree-naïve) are shown in Table 1 . Phasing accuracy was determined by the percentage of correct MLE haplotypes across all individuals. As expected, for the independent case-control data, all three algorithms produced reasonably similar accuracy results. Both pedigree-informed algorithms were marginally better (4-6% improvement) than GCHap for longer haplotypes (10 and 15 loci) due to their partition-ligation procedures; however, these marginal increases in accuracy come at the expense of increased computing time. As expected for an algorithm that is pedigree-naïve, the accuracy of GCHap remained similar across all data sets, independent of the changing pedigree structures.
For the data sets that included pedigree structure (TRIO, ASP, LP), the pedigree-informed algorithms achieved significantly greater accuracy than GCHap for all loci lengths and missing rates. The accuracy of both pedigree-informed algorithms continued to be similar in all situations where both algorithms completed the phasing, with our new algorithm consistently, if only marginally, the better of the two. Our new algorithm was also able to phase all data sets and scenarios generated. However, for certain scenarios with 15 loci (TRIO and ASP data sets) HAPLORE was unable to completely phase the data due to a configuration error, which was most likely due to the inappropriate removal of a critical haplotype from a partition. HAPLORE also could not be run for the LP1 data set because it was unable to process these large pedigree data sets in a tractable amount of time. For longer haplotypes and high missing rates the improvements made by the pedigree-informed algorithms were substantial (e.g. ASP, 15 loci, 5% missing; GCHap 31% accuracy, hapMC 91% accuracy). The increased accuracy of pedigree-informed algorithms with family-based data is perhaps expected given the nature of the two approaches. Yet, the large differences in accuracies between the two types of algorithms highlight the importance of accounting for the family structure information, particularly for analyses of larger numbers of loci and higher rates of missing data.
Phasing time for all algorithms increased with the number of loci considered and with the missing rate, as expected. For GCHap, the phasing time increased with the number of subjects, but this increase was independent of pedigree structure. The phasing times for the CC and ASP data sets (both containing 1,000 genotyped subjects) were similar, and phasing times for the TRIO and LP1 data sets (both containing 1,500 genotyped subjects) were also similar. For the pedigree-informed algorithms, both the number of subjects and the pedigree structure influenced the phasing time. For the CC and ASP data sets, the number of subjects is the same (1,000); however, the pedigree structure in the 250 ASPs significantly reduced the haplotype configuration space, hence phasing time is significantly reduced in the ASP data set. For example, for 10 loci, 10% missing, CC time is 15.54 and 42.73 seconds whereas ASP time is 3.30 and 3.82 seconds for our algorithm and HAPLORE, respectively. The TRIO and LP1 data sets both contained 1,500 individuals; however, the relationship between structure and phasing time is less straightforward for this comparison. The LP1 data set has more overall structure among a larger number of genotyped individuals (smaller haplotype configuration space), but the pattern of the structure is more complex. Conversely, the TRIO data set has less structure between total subjects (less reduction in state space), but a uniform structure across smaller units. For 0% and 5% missing data, the computing time for the TRIO and LP1 data sets was relatively similar. However, for 10% and 15% missing data (which increases the state space), the larger amount of structural information in the LP1 data set appeared to shorten the phasing time compared to the TRIO data.
Comparisons of run times between HAPLORE and our new algorithm for ASP and TRIO data sets show that HAPLORE was faster for 5 loci, but our new algorithm was faster for 10 and 15 loci, and substantially faster for many situations with 15 loci. Both pedigree-informed algorithms scaled poorly compared to GCHap when considering data sets with no or low pedigree structure (CC and TRIO), especially with larger numbers of markers and missing data. HAPLORE was markedly faster than our algorithm with 15 loci and high missing rates for CC. For example, the new algorithm was one and two orders of magnitude slower than HAPLORE and GCHap, respectively, for 15 loci and 15% missing genotype rate.
Power and validity
Power and validity results for analyses using the simulated family and independent case-control data sets, as well as the mixed designs, are shown in Tables 2 and 3 . All analyses were haplotype-specific tests for the known risk haplotype, performed at an α-level of 0.05. Table 2 shows the power and type I error rates for each data set, including results from explicit controls (EC) and pseudocontrols (PC) in a standard Cochran-Armitage test for trend. In addition, the TDT was performed for the TRIO and ASP data sets. Table 3 shows the type I error rates and power for mixed resources comprising mixtures of two data sets.
Based on 1,000 replicates, all type I error rates were found to be not significantly different from 0.05 (95% confidence interval [0.036, 0.064]), indicating the validity of all tests within the MC framework for hapMC using either the pedigree-naïve or the pedigree-informed MLEs. In particular, these results demonstrate the versatility and potential of hapMC to perform valid analyses on mixed-structure study designs.
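The quoted interval is the usual normal-approximation bound on the rejection rate observed over 1,000 replicates of a test with true size 0.05. A quick check (an illustrative sketch, not part of hapMC):

```python
import math

def rejection_rate_ci(alpha: float, n_reps: int, z: float = 1.96):
    """Normal-approximation 95% CI for the observed rejection rate of a
    valid test: alpha +/- z * sqrt(alpha * (1 - alpha) / n_reps)."""
    half_width = z * math.sqrt(alpha * (1.0 - alpha) / n_reps)
    return (alpha - half_width, alpha + half_width)

lo, hi = rejection_rate_ci(0.05, 1000)
print(round(lo, 3), round(hi, 3))  # 0.036 0.064
```

Observed type I error rates falling inside this band are consistent with a correctly sized test.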
For the TRIO and ASP data sets, we performed trend tests using EC and PC designs and also a TDT analysis. In general, the ASP data set exhibited more power than the TRIO data set even though it had a smaller overall sample size (1,000 versus 1,500), presumably due to the enrichment of disease alleles in the ASP set. The exception was at the low genotypic relative risk of 1.2, where the increased sample size of the TRIO design appears to have outweighed the minor genetic enrichment of the ASPs at this small risk size. Within both data sets, power was observed to be quite similar across the three analysis approaches for all alternative models. However, both PC and TDT statistics showed consistently higher power than the EC, although these gains were extremely marginal (≤3.5% increase in power). Formal testing of the differences between the PC and EC approaches using a Wilcoxon signed-rank (WSR) test provided evidence for significant differences (TRIO: PC v EC p WSR = 0.001; ASP: PC v EC p WSR = 0.0012), indicating consistent marginal power gains when using PC compared to EC in these designs. The power from the PC and TDT statistics differed by no more than 1% in the TRIO data set and by less than 2.2% in the ASP data set, and the two were not statistically different (TRIO: TDT v PC p WSR = 0.392, ASP: TDT v PC p WSR = 0.168).
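The paired power comparisons above use the Wilcoxon signed-rank test. As a sketch (not the authors' code, and the paired power values below are hypothetical), the W+ statistic and its large-sample normal-approximation p-value can be computed with the standard library; for the small numbers of models actually compared, an exact implementation such as `scipy.stats.wilcoxon` would normally be preferred:

```python
import math

def wilcoxon_signed_rank(x, y):
    """W+ statistic and two-sided normal-approximation p-value for paired
    samples; zero differences are dropped, tied |d| get average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# Hypothetical paired power values for PC vs. EC across five models.
pc = [0.52, 0.61, 0.70, 0.78, 0.85]
ec = [0.50, 0.58, 0.66, 0.73, 0.79]
print(wilcoxon_signed_rank(pc, ec))  # (15.0, ~0.043)
```

Because the test operates on signs and ranks of the paired differences, it detects the kind of small-but-consistent power gains reported here even when each individual difference is marginal.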
For the LP data sets we also compared the EC and PC approaches. In contrast to the TRIO and ASP data sets, the PC approach in large pedigrees involves a mixture of pseudo and explicit controls according to the pedigree structure. If both parents of an affected case are genotyped and unaffected, then a pseudocontrol is generated from their data and used in place of their explicit data; otherwise, controls are considered explicitly. In the LP1 data set, power differences between the EC and PC approaches were marginal across all models and neither approach was consistently better than the other. The largest difference between the two was 3.2% for the disease model with a risk haplotype frequency of 0.10 and a GRR of 1.35. Formal testing indicated that there was no evidence that one approach was consistently superior to the other (p WSR = 0.28). However, in the LP2 data set, the EC approach most often gave marginally more power (in all but one model), with an average increase of 1.35%. This consistent marginal increase was significant (p WSR = 0.006).
Examining power across all resource designs shows that the LP1 data set, matched in sample size to the TRIO data set, was consistently more powerful than the TRIO data set for all risk haplotype frequencies and GRRs (p WSR = 0.009). The maximum difference in power between the two data sets, 26.2%, is seen with the 0.04 risk haplotype frequency and 1.5 GRR. Furthermore, even though the other three designs (CC, ASP and LP2) had smaller sample sizes, these designs also out-performed the TRIO design for the majority (at least 75%) of models. The LP2 data set, matched in sample size to the CC and ASP data sets, performed comparably to the CC and ASP data sets (both p WSR > 0.3). An increase in power of LP1 compared to LP2 was evident (p WSR = 0.001), consistent with the increased sample size of LP1, which contains twice as many controls.
For the mixed nuclear family and case-control designs that include TRIOs and ASPs (TRIOCC and ASPCC), the superiority of the PC approach reflected the observations for the single data set results (TRIOCC: PC v EC p WSR = 0.008, ASPCC: PC v EC p WSR = 0.0075). In the mixed large pedigree and case-control data sets, the PC approach outperformed the EC approach in one set (LP1CC: PC v EC p = 0.036), but not in the other (LP2CC: EC v PC p = 0.61). Given the superiority of PC to EC in all but LP2CC and the marginal nature of the individual differences, only the PC results are detailed in Table 3 .
As expected, power was always increased in the joint, two-data-set mixed resources compared to either single data set. Furthermore, as previously shown by others [ 38 ], the power of the joint analysis in the mixed resource was always superior to the power of analyzing both resources separately (power = 1 - (1 - p1)×(1 - p2), where p1 is the power for the first data set and p2 is the power for the second).

Discussion
Here we have described a MC, MLE-based haplotype association method and software (hapMC) designed to analyze a set of tightly-linked SNPs in general pedigree and/or independent case-control based studies. HapMC allows the haplotype MLE to be estimated by either a pedigree-naïve or a pedigree-informed algorithm. A novel aspect of our method is the implementation of a pedigree-informed general phasing algorithm that appropriately incorporates both related and unrelated individuals into the haplotype phasing. A variety of pedigree-informed phasing algorithms currently exist (see [ 7 , 8 ]), but none have established practical measures for dealing with large amounts of missing data in extended pedigrees and directly integrated these for haplotype association testing. Our algorithm includes a preprocessing step to optimally split large pedigrees into substructures, which enables it to consider the important pedigree structure that surrounds dense genotype data while maintaining tractability. While this step may appear trivial, it is necessary for analyzing large pedigrees with missing data that current phasing programs cannot handle. Our new approach includes both the incorporation of pedigree structure and a partition-ligation step in the haplotype estimation procedure.
We found that the accuracy of haplotypes estimated from our pedigree-informed algorithm was always equal or superior to that estimated without these algorithm improvements. Even in the situations where there was little or no pedigree structure (CC and TRIO data sets), the new algorithm performs substantially better due to the partition-ligation alone. However, a notable issue from our investigations is that while the new pedigree-informed algorithm always results in greater accuracy than a pedigree-naïve approach, the phasing times are orders of magnitude longer for data sets with little or no pedigree structure, large number of loci (nloci = 15) and high missing rates (10-15%). Hence, for incorporation in a Monte Carlo analysis approach where the procedure must be repeated thousands of times, the new phasing algorithm is impractical for unrelated individuals with high missing rates. We therefore recommend that for a resource of unrelated individuals (CC data sets) the standard full likelihood approach (as implemented, for example, in [ 27 ]) is the best alternative. However, for data sets that include family structure we find that substantial haplotype accuracy is lost by ignoring pedigree structure, and the use of an algorithm that considers that structure (such as hapMC) is a more prudent choice. In some situations, this may require a more stringent quality control protocol with higher minimum genotyping thresholds to retain practical application of the more sophisticated algorithm.
Comparisons between our new phasing algorithm and a previously proposed pedigree-based algorithm, HAPLORE [ 19 ], show equivalent or slightly better MLE haplotype accuracy for the new algorithm for all situations and data sets considered. In terms of phasing times, our algorithm ran substantially faster in most of the situations considered, particularly when using family-based data with higher numbers of markers and missing rates. Furthermore, HAPLORE could not be used in nuclear families with larger numbers of loci and high missing rates, or in the large pedigrees.
As an empirical approach, the space and time requirements for hapMC can be considered limitations. The time required to phase haplotypes and calculate the observed association statistics must be scalable to make generation of the necessary MC simulations practical. As has been mentioned, the haplotype phasing aspect of hapMC can be computationally intense for large data sets and high missing rates. For example, to analyze ten markers in one mixed large pedigree and case-control resource with ~1,500 genotyped individuals, the hapMC algorithm required ~21 minutes using a 2.40 GHz processor and 2 GB of memory. These requirements increase with increased missing data and markers and decreased pedigree structure. In our current application, the total number of loci hapMC can practically handle is approximately 20, although the precise limitation depends upon the data set characteristics (including missing data rate, number of individuals and families, and types of families). However, because the method is designed for tSNPs across a non-recombinant follow-up GWAS region or candidate gene, marker sets of fewer than 20 markers are not unreasonably small. The marker limitation is also consistent with other programs with similar approaches.
It is known that the use of MLE haplotypes in association analyses (that is, ignoring phase uncertainty) can lead to invalid association tests and may result in biased estimates of effect size and other parameters [ 30 - 32 ]. We emphasize here that all tests in hapMC are under the null hypothesis of no association of any haplotype, and that the key to the MC procedure producing valid association statistics from the MLE haplotypes is to generate properly matched null data from which to build the null distribution [ 34 ]. Our method uses a MC procedure that matches the entire phasing process and the use of MLEs in the observed data and in all null data sets used for the null distribution. HapMC therefore produces accurate significance levels for both tests of independence and effect size, as we have shown. However, the point estimates for effect size statistics (that is, odds ratios) estimated from our method may be upwardly biased. Such biased effect sizes are possible when using pedigree data that have been ascertained for disease and analyzing related controls explicitly. While the bias may be removed by using a matched case/pseudocontrol analysis within families [ 39 ], the point estimates (such as odds ratios) should be interpreted with caution, both due to the use of MLEs and the pedigree-based data. It is worth noting that in joint analyses of multiple resources, if the disease MAF and/or disease effect size is anticipated to differ across the component studies, a formal meta-analysis procedure should be followed. HapMC has been incorporated into the Genie framework, and hence formal meta procedures can be implemented in the approach [ 40 ].
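hapMC's statistics and null-data generation are more involved than can be shown here (each null replicate is re-phased with the identical MLE procedure), but the final step is the standard Monte Carlo empirical p-value. A minimal generic sketch, with the +1 correction that counts the observed data set as one more null draw and so keeps the test valid:

```python
def monte_carlo_p(observed, null_stats):
    """Empirical p-value with the +1 correction:
    p = (1 + #{null >= observed}) / (1 + N)."""
    exceed = sum(1 for s in null_stats if s >= observed)
    return (1 + exceed) / (1 + len(null_stats))

# With 999 null replicates, the smallest attainable p-value is 1/1000.
print(monte_carlo_p(3.1, [0.0] * 999))   # 0.001
print(monte_carlo_p(-1.0, [0.0] * 999))  # 1.0
```

The crucial point made in the text is that `null_stats` must be produced by exactly the same phasing-plus-MLE pipeline as `observed`; otherwise the comparison is not properly matched.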
Not addressed here is that, while haplotype association testing is considered a reasonable approach to explore, it is often burdened with the task of determining which haplotypes or sub-haplotypes should be tested. It may be of interest to note that the MC, MLE haplotype association approach outlined in this paper has also been incorporated into a peripheral Genie software package called hapConstructor [ 41 ] ( http://bioinformatics.med.utah.edu/Genie/hapConstructor.html ). HapConstructor is a data-mining tool aimed at identifying the most significant haplotypes in a data set. However, due to the computing time constraints of a data-mining approach, hapConstructor is currently limited to using the pedigree-naïve EM algorithm for haplotype estimation. Mixed resource structures and formal meta analyses are supported within hapConstructor and, as we have shown here, even though it may not be ideal, our MC approach with the pedigree-naïve MLEs remains valid.
We have illustrated hapMC using multiple single data sets of varying design, as well as several joint resources based on a combination of one traditional case-control data set and one family-based data set. However, the MC approach extends more generally to multiple constituent groups where each can be from any study design. Furthermore, the family structures it can analyze are not limited in size or structure. This feature was demonstrated here by the LP data sets that were five-generation pedigrees with substantial missing data. We re-emphasize that large pedigrees with missing data may necessitate pedigree splitting at the phasing step, but that the full structures are maintained when generating the null configurations to fully account for the familial relatedness in the association analyses. To our knowledge, hapMC is the only method and software currently available that can provide valid haplotype analyses in resources of mixed study designs that include general pedigrees. As previously shown by others [ 38 ], the importance of joint analyses is the increased power such analyses offer over simply combining the statistical evidence from two separate analyses.
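The "combining the statistical evidence from two separate analyses" benchmark referred to above is simply the probability that at least one of two independent tests rejects, power = 1 - (1 - p1)(1 - p2). A one-line reference implementation (the example powers are illustrative, not values from our tables):

```python
def separate_analyses_power(p1: float, p2: float) -> float:
    """Chance that at least one of two independently analyzed resources
    rejects: 1 - (1 - p1) * (1 - p2)."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# e.g. 60% and 70% power separately give an 88% "either rejects" rate;
# the joint analysis of the pooled resource exceeded this benchmark.
print(round(separate_analyses_power(0.60, 0.70), 2))  # 0.88
```

A joint analysis beating this benchmark is what makes pooling resources worthwhile rather than merely running the two studies side by side.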
Beyond demonstrating the validity of the method, our power results provide some insight into the relative strengths of different study designs and statistical approaches. In our single data set analyses, for TRIO and ASP data sets we found that both the trend test with a PC approach and the TDT were superior to the EC approach. There was a lack of significant difference in power between the PC approach and the TDT analysis. From this we would conclude that the TDT statistic remains the preferred statistic for analyzing nuclear family designs due to its additional robustness to population stratification. In the large pedigree data sets, the relative superiority of the two approaches (PC and EC) was not clear. In the single data sets (LP1 and LP2), the EC and PC approaches were similar in LP1 and the EC approach appeared superior in LP2. In the joint data sets (LP1CC and LP2CC), the PC approach was superior in LP1CC, but no significant difference was found in LP2CC. The lack of impact of the PC approach in LP2 may be due to the reduced control size in that data set (1,000 vs 1,500 total); however, our observations highlight the difficulty in defining optimal approaches for general pedigrees, where the specific structure may influence the relative powers of different approaches. To investigate this, we repeated our LP analyses, but with oversampling of parents of affected cases and less sampling of other individuals as controls, thus increasing the number of occasions in which the pseudocontrol could be used in the analysis. We found that the PC approach improved in power (data not shown). In summary, the PC approach was found to have significant superiority over the EC approach in the TRIO, ASP, TRIOCC, ASPCC, and LP1CC data sets. Only one design indicated superiority of the EC approach (LP2), and the remainder indicated no significant difference (LP1, LP2CC).
Our results therefore suggest that the PC approach is likely to be the better approach for mixed nuclear family and case-control designs.
We also explored a stricter definition to select familial controls. We found that if close relatives are simply not considered in the analyses (restrict controls to only those further than first degree) that the power was adversely affected by the reduced control sample size (data not shown). This indicates that the close relatives contribute positively to the power of the analysis.
Comparing different single study designs, the TRIO design consistently performed worse than all other designs. Of the remaining designs, it was interesting to note that, for matched sample sizes, the large high-risk pedigree design (LP2) was comparable in power to the ASP and CC designs. Large pedigrees arguably contain the most redundancy in the familial cases (and controls), but are also enriched for disease alleles (the ascertainment criterion of 14 cases per large pedigree represents a 2.0-fold increase over the sporadic rate). Familial controls have previously been shown to increase power [ 42 , 43 ]. The comparability in power between LP2 and CC suggests that the positive effect of the disease allele enrichment in LP2 may have balanced the decrease in effective sample size due to the redundancy in information from related subjects. We also found that LP2 and ASP were not significantly different in power, and both designs are enriched for disease. However, on average, large pedigree controls are less related to the cases than controls are in ASPs; hence, the effective increase in the control population in LP2 may balance the reduction in the effective sample size of the cases. Of course, it must be noted that our results may be specific to our simulated data sets, and, for other large pedigree structures, these findings may not hold. Nonetheless, our results indicate substantial potential for large pedigree resources and for using pedigree-based controls in haplotype association analyses.

Conclusions
In conclusion, we have developed a method and software to perform valid haplotype analyses in resources of mixed pedigree structure. To our knowledge, this is the only method currently available that can perform such analyses. Similar to the findings of others [ 38 ], our results illustrate the power advantage of joint analyses and, furthermore, suggest that family-based resources can play a valuable role in haplotype association studies.

Background
Genomewide association studies have identified a great many genomic regions that are likely to harbor disease genes. Thorough interrogation of these specific regions is the logical next step, including regional haplotype studies to identify risk haplotypes upon which the underlying critical variants lie. Pedigrees ascertained for disease can be powerful for genetic analysis because their cases are enriched for genetic disease. Here we present a Monte Carlo based method to perform haplotype association analysis. Our method, hapMC, allows for the analysis of full-length and sub-haplotypes, including imputation of missing data, in resources of nuclear families, general pedigrees, case-control data or mixtures thereof. Both traditional association statistics and transmission/disequilibrium statistics can be performed. The method includes a phasing algorithm that can be used in large pedigrees and optional use of pseudocontrols.
Results
Our new phasing algorithm substantially outperformed the standard expectation-maximization algorithm that is ignorant of pedigree structure, and hence is preferable for resources that include pedigree structure. Through simulation we show that our Monte Carlo procedure maintains the correct type I error rates for all resource types. Power comparisons suggest that transmission-disequilibrium statistics are superior for performing association in resources of only nuclear families. For mixed structure resources, however, the newly implemented pseudocontrol approach appears to be the best choice. Results also indicated the value of large high-risk pedigrees for association analysis, which, in the simulations considered, were comparable in power to case-control resources of the same sample size.
Conclusions
We propose hapMC as a valuable new tool to perform haplotype association analyses, particularly for resources of mixed structure. The availability of meta-association and haplotype-mining modules in our suite of Monte Carlo haplotype procedures adds further value to the approach.

Abbreviations
GWAS: genomewide-association studies; EM: Expectation-Maximization; MCMC: Markov chain Monte Carlo; LD: linkage disequilibrium; SNP: single nucleotide polymorphism; MC: Monte Carlo; TDT: transmission disequilibrium test; MLE: maximum likelihood estimate; HWE: Hardy Weinberg equilibrium; PL-EM: partition ligation expectation-maximization; CEPH: Centre d'Etude du Polymorphisme Humain; dSNP: disease single nucleotide polymorphism; ASP: affected sib-pair; LP: large pedigree; CC: case-control; EC: explicit control; PC: pseudo control
Authors' contributions
RA designed and programmed the software, drafted the manuscript and performed the statistical analyses. JW contributed to the programming of hapMC and participated in the design of the software. AT contributed to the methodological development. NJC conceived of the methodology and helped to draft the manuscript. All authors read and approved the final manuscript.

Acknowledgements
Ryan Abo is an NLM fellow, supported by NLM grant T15 LM0724. This research was also supported by NIH grants CA 098364 and CA134674 (to NJC), GM070710 and GM81417 (to AT), a Susan G. Komen Foundation grant (to NJC) and an Avon Foundation grant 02-2009-080 (to NJC).

BMC Bioinformatics. 2010 Dec 9; 11:592 (CC BY)
PMC3016410 (PMID 21144021)

Background
It is clear that sporadic colorectal cancer - as well as other cancers - is largely the product of acquired somatic mutations [ 1 ]. Though many of these mutations are functionally relevant to the tumor ("driver" genes), the most well-studied cancer driver gene remains Apc (adenomatous polyposis coli), thought to be the first hit in the majority of nonhereditary colon cancers [ 2 ]. While Apc is commonly known as an antagonist of β-catenin and WNT signaling, a growing body of evidence points to the importance of Apc in a variety of other cellular contexts - from microtubule polymerization [ 3 ] to cell migration [ 4 ]. Apc also plays important roles in chromosome segregation and stability, localizing to spindles, kinetochores, and centrosomes in mitosis [ 5 , 6 ]. The myriad aspects of Apc signaling may not be relevant in all cellular contexts, however, as signaling depends upon the background gene expression program and, in cancer biology, is often the result of multiple mutations. In fact, mouse models mutated at two driver genes simultaneously have shown a synergistic (i.e. non-additive) increase in tumor burden, such as in Pten-Apc [ 7 ], Kras-Tgfb [ 8 ], and Apc-Trp53 [ 9 ] double mutants. Such genetic synergy suggests that the pathways emanating from the two genes intersect downstream, supporting the idea that only a subset of all possible pathways are involved in a tumor harboring a mutation in Apc . We hypothesize that these mutations have distinct synergistic effects on the cancer phenotype, such that the activities of these networks are strongly associated with the measured downstream changes in the proteome of the intestine. We argue that these measured molecular changes can be leveraged to elucidate which pathways are most relevant to the disease model at hand.
In order to prioritize the various pathways associated with a cancer driver gene, we have developed a computational framework to first predict the set of pathways functionally related to Apc signaling in mouse models (Figure 1 ). Our algorithm mines chains of proteins (simple paths) from a protein-protein interaction (PPI) network; these paths are then filtered by tissue-specific mRNA coexpression and Gene Ontology (GO) [ 10 ] annotation rule mining [ 11 ]. To identify biologically relevant paths, we constrain our search space to pathways connected to previously identified cancer driver genes (CAN-genes) [ 12 ], as many of these pairings are expected to be simultaneously mutated. The set of paths linking Apc to each CAN-gene comprises a subnetwork, which we refer to as a petal in the Apc blossom. As each petal is based on in silico predictions, we then use publicly available functional genomic and proteomic data from the intestine of the Apc 1638N +/- mouse to assess the biological relevance of each petal in this mouse model. As proteins themselves are the mediators of cellular functions, we mapped proteome-level measurements identified through 2D differential in-gel electrophoresis (2D-DIGE) to each petal, using mRNA-level coexpression to quantify the strength of the relationship. We chose 2D-DIGE - a widely used 2D gel electrophoresis-based method - to illustrate our approach. However, our methods can utilize a variety of proteomics data (e.g. label-free LC/MS (liquid chromatography/mass spectrometry), protein antibody chips, etc.). Though transcriptional activity (i.e. mRNA level) does not strictly correlate with translational activity (i.e. protein level) [ 13 , 14 ], coexpression information can still be helpful in uncovering regulatory hot spots in protein networks [ 15 ].
Testing each petal against such functional data correlates gene and protein expression readouts with specific driver gene relationships, thereby allowing the experimenter to identify the petal most likely to be operative in this particular mouse model.

Methods
The Blossom Algorithm
The Apc blossom is built using the Blossom algorithm, based on the PathFinder architecture [ 11 ]. A recent study compared various frameworks developed for detecting signaling networks [ 19 ]; the PathFinder architecture had the best recall rate of the available methods, while all methods described had similar precision rates.
In the Blossom algorithm, networks (e.g. pathways) connecting proteins of interest are built by integrating and mining multiple datasets. First, the network of publicly available interactions [ 20 , 21 ] (over 80K interactions) is filtered to remove less reliable interactions, i.e. likely false positives, and then new interactions are added to enrich the network and account for missing interactions, i.e. false negatives. To remove false positives, a logistic regression model is used that incorporates ( i ) the number of times a PPI is observed, ( ii ) coexpression measurements for the corresponding genes, ( iii ) the protein's small-world clustering coefficient, and ( iv ) the subcellular localization data of the interacting partners [ 22 ].
Coexpression values (Pearson's correlation coefficient) are calculated from mRNA expression profiles of the laser-capture microdissected epithelium from the Apc Min /+ mouse (series GSE422 [ 23 ]), providing coregulatory information specific to our tissue and organism of interest. The logistic regression model that predicts the validity of interactions is trained on positive (1000 PPIs from the MIPS database [ 24 ]) and negative training data sets (1000 randomly selected PPIs not in MIPS, assuming that most interactions are unreliable or irrelevant [ 11 , 25 ]). These trials are repeated 100 times to set an optimized cutoff for the probability of a true interaction, and a network of reliable interactions is formed (~30K PPIs).
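The four features feed a standard logistic regression that scores each candidate interaction. The weights below are purely hypothetical (the real model is fit to the MIPS-derived positive and random negative sets described above); the sketch only illustrates how a per-interaction reliability probability is scored and thresholded:

```python
import math

# Hypothetical weights for the four features described in the text; the
# actual model is trained on 1000 MIPS positives vs. 1000 random negatives.
WEIGHTS = {"n_observed": 0.8, "coexpression": 1.5,
           "clustering_coef": 1.0, "colocalized": 1.2}
BIAS = -2.0

def interaction_probability(features):
    """Logistic score: P(true interaction) = 1 / (1 + exp(-(b + w.x)))."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

ppi = {"n_observed": 2, "coexpression": 0.7,
       "clustering_coef": 0.3, "colocalized": 1}
print(round(interaction_probability(ppi), 3))  # 0.896
```

Interactions scoring above the optimized cutoff are retained in the ~30K-PPI reliable network; the rest are treated as likely false positives.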
Finally, false negative interactions are inferred using sequence homology relationships, as it has been shown that similar sequences share similar interaction partners in the same organism [ 26 - 29 ]. An interaction edge is inferred between two proteins if no record of interaction exists and there is at least one interaction between the protein families of these two proteins (since sequences sharing similar domains share similar interaction partners [ 30 , 31 ]) (Pfam release 23.0 used [ 32 ]).
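The family-based inference rule can be sketched directly: record which unordered family pairs have at least one observed interaction, then propose an edge for any unconnected protein pair whose families match such a pair. The proteins and Pfam labels below are made up for illustration:

```python
def infer_missing_edges(known_edges, protein_families):
    """Infer an edge between two proteins when no interaction is recorded
    but at least one interaction exists between their (Pfam) families."""
    family_pairs = set()
    for a, b in known_edges:
        for fa in protein_families.get(a, ()):
            for fb in protein_families.get(b, ()):
                family_pairs.add(frozenset((fa, fb)))
    known = {frozenset(e) for e in known_edges}
    proteins = sorted(protein_families)
    inferred = set()
    for i, p in enumerate(proteins):
        for q in proteins[i + 1:]:
            if frozenset((p, q)) in known:
                continue  # an interaction is already recorded
            if any(frozenset((fa, fb)) in family_pairs
                   for fa in protein_families[p]
                   for fb in protein_families[q]):
                inferred.add((p, q))
    return inferred

edges = {("A", "B")}
families = {"A": {"PF1"}, "B": {"PF2"}, "C": {"PF1"}, "D": {"PF2"}}
print(sorted(infer_missing_edges(edges, families)))
# [('A', 'D'), ('B', 'C'), ('C', 'D')]
```

Here the single observed A-B interaction links families PF1 and PF2, so every unconnected PF1-PF2 protein pair gains an inferred edge.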
These steps resulted in a filtered network with predicted edges, within which we searched for pathways linking Apc and CAN-genes. GO biological process annotations [ 10 ] are used to generate functional association rules from known pathways [ 24 , 33 - 35 ] as outlined in [ 11 ]. Association rules are tuples representing a noteworthy relationship, in this case a functional relationship, between two interacting proteins. For each protein, leaf terms on the GO term graph are used. In addition, the average absolute coexpression is calculated for each path, and paths are then filtered according to a set threshold ( γ = 0.6). These rules and parameters are used to evaluate candidate paths for possible occurrences of these rules. The p -value, p φ , for a path, φ , is calculated under the null hypothesis that every simple path connecting two proteins has some number of association rules associated with its interactions, but that the average number of rules is uniform across paths. Significant paths, i.e. those with p φ < p threshold , are merged into a subnetwork, representing a petal in the blossom. An empty set is returned when no significant path exists.
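The coexpression part of the filter reduces to one number per path: the mean absolute Pearson correlation over consecutive protein pairs, compared against γ = 0.6 (the r(φ) condition of Algorithm 1). A small sketch with made-up correlation values (the gene names are illustrative only):

```python
def mean_abs_coexpression(path, coexpr):
    """Average absolute coexpression over consecutive protein pairs on a
    simple path; `coexpr` maps unordered gene pairs to Pearson's r."""
    pairs = list(zip(path, path[1:]))
    return sum(abs(coexpr[frozenset(p)]) for p in pairs) / len(pairs)

# Illustrative (made-up) correlations for a three-protein path.
coexpr = {frozenset(("Apc", "Ctnnb1")): 0.72,
          frozenset(("Ctnnb1", "Tcf7l2")): -0.65}
r = mean_abs_coexpression(("Apc", "Ctnnb1", "Tcf7l2"), coexpr)
print(round(r, 3))  # 0.685
```

Taking the absolute value means strongly anti-correlated neighbors count as evidence of coregulation just as positively correlated ones do.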
Formally, let G ( V , E ) denote the PPI network gathered from publicly available interactions. Also, let G' and G'' be networks built on the same set of nodes, V , using the procedures described above, where the false positive interactions, F , are removed, giving E' = ( E - F ) and G' ( V , E' ), and a set of additional interactions, H , is imputed (based on sequence information), giving E'' = E' ∪ H and G'' ( V , E'' ).
The objective of the proposed Blossom framework is to find a petal for a given protein c a ∈ V (in our case, Apc ) and each protein c i in the candidate set of proteins C ⊂ V (CAN-genes). To reduce the search space, Blossom employs a network diameter heuristic. Namely, for each node pair ( c a and c i ), let d i denote the shortest path distance between c a and c i in G ( V , E ) (the PPI network without inferred edges). For each c i ∈ C , we then search G'' ( V , E'' ) for every path connecting c a to c i with path length at most d i . This guarantees at least one path for consideration if the two nodes are connected.
The paths on the network are discovered using all paths depth first search ( AllPathsDFS ), where every path connecting c a and c i of length no greater than d i is identified, forming the set Φ i . In the final step of the algorithm, these paths are compared against the null distribution for significance. For the shortest path calculation, a single-source shortest path solution is used (e.g. Dijkstra's algorithm). The Blossom algorithm's run time is therefore dominated by that of the all-paths depth first search.
Input : c a , C , G ( V , E ), G'' ( V , E'' ), p threshold , γ
Output : blossom network B for c a
B = ∅;
foreach c i ∈ C do
    d i = ShortestPathDistance( G ( V , E ), c a , c i );
    if d i == ∞ then
        d i = ShortestPathDistance( G'' ( V , E'' ), c a , c i );
    end
    Φ i = AllPathsDFS( G'' ( V , E'' ), c i , c a , d i );
    forall the φ ∈ Φ i do
        if r ( φ ) ≥ γ and p φ < p threshold then
            B = B ∪ φ ;
        end
    end
end
return B ;
Algorithm 1 : The Blossom algorithm that returns the blossom network B for protein c a .
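Algorithm 1 can be sketched in plain Python. This is an illustrative reimplementation under stated assumptions, not the authors' code: `avg_coexpr` and `path_pvalue` stand in for the coexpression and association-rule evaluations described above, the path-length bound is treated as inclusive, and candidates unreachable even with imputed edges are skipped.

```python
from collections import deque

def shortest_path_distance(adj, src, dst):
    """Unweighted shortest-path length via BFS; inf if disconnected."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return float("inf")

def all_paths_dfs(adj, src, dst, max_len):
    """All simple paths from src to dst with at most max_len edges."""
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        if len(path) - 1 >= max_len:
            continue  # bound reached; do not extend this path further
        for nbr in adj.get(node, ()):
            if nbr not in path:
                stack.append((nbr, path + [nbr]))
    return paths

def blossom(c_a, candidates, adj_G, adj_G2, avg_coexpr, path_pvalue,
            p_threshold=0.05, gamma=0.6):
    """Sketch of Algorithm 1: keep significant c_a-to-CAN-gene paths."""
    petals = {}
    for c_i in candidates:
        d_i = shortest_path_distance(adj_G, c_a, c_i)
        if d_i == float("inf"):
            d_i = shortest_path_distance(adj_G2, c_a, c_i)
        if d_i == float("inf"):
            continue  # unreachable even with imputed edges
        kept = [phi for phi in all_paths_dfs(adj_G2, c_i, c_a, d_i)
                if avg_coexpr(phi) >= gamma and path_pvalue(phi) < p_threshold]
        if kept:
            petals[c_i] = kept
    return petals
```

A toy call with a three-node chain, a permissive coexpression score, and a small p-value returns the single Apc-Kras petal.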
Plucking Petals: Testing Bimodality of Coexpression
For a particular petal, a single node perturbation (e.g. a mutation at Apc ) within the petal itself will perturb pathways that are expected to associate with the given petal more strongly than others, assuming that the network predictions were accurate. To identify the best petal in the Apc blossom, we employed a mouse mutant, Apc 1638N +/- , representing a perturbation at the stamen. The transcript and protein levels of Apc itself have been verified in previous studies [ 18 ]; in this study, we were interested in distilling the myriad downstream effects into a coherent set of candidate pathways. As proteins are the ultimate mediators of function, targets from proteomic experiments - such as label-free, or, in our case, 2D-DIGE - represent an ideal dataset for assessing the downstream effects of such perturbations. However, proteomic technologies often sample the most abundant quartile of proteins [ 36 ], while cancer network predictions - such as those in the Apc blossom - often focus on low-abundance signaling proteins.
In order to make inferences about identified petals, a relational map must be used to connect the proteomic targets to the petal of interest. Coexpression networks are currently the most informative and accessible mapping available, as proteins correlated at the mRNA-level are hypothesized to be coregulated.
Thus, for a hypothesized petal, P , mRNA coexpression (Pearson's correlation coefficient) was calculated between the nodes, i ∈ P , and the 2D-DIGE targets, d ∈ D (where D ⊂ S and S is the set of all genes on the array) measured in the Apc 1638N +/- mouse intestinal epithelium. The 2D-DIGE targets' Mascot DAT files are available through the Proteomics Identifications Database (accession number 10638) [ 37 ].
Apc 1638N +/- microarray data is available through the Gene Expression Omnibus (GSE19338) [ 38 ]. Two fractions, representing crypts and villi, were available with four samples in each group (eight samples each, wild-type and Apc 1638N +/- ). Though the mild phenotype of the Apc 1638N +/- mouse appears to result in a low signal - in stark contrast to that observed from Apc Min /+ mice - many molecular changes are still measurable, as evidenced by the 'omic experiments. The proteins identified within each fraction were pooled to arrive at a set of 31 2D-DIGE targets shown on the periphery of Figure 4 (see [ 17 ] for detailed methods). Robust Multiarray Averaging was used to normalize mRNA expression measurements, and differential expression was calculated between the eight mutant samples versus the eight wild-type samples. For coexpression, the wild-type and Apc 1638N +/- microarray data were normalized by dChip [ 39 ] to avoid artificially inflating coexpression values [ 40 ].
Additionally, mRNA coexpression is more informative for nodes that are known to be differentially expressed, as these nodes are regulated differently between wild-type ( WT ) and mutant tissue ( MT ); a node with low differential expression may have many coexpression linkages simply due to its uniform expression profile over the samples, which is shared by the majority of genes (as most genes are not differentially regulated). To focus on genes with strong levels of both coexpression and differential expression, we compute the active coexpression as follows:
r' i = α i r i
where r i is the vector of coexpression between node i (in petal P ) and all other genes on the array; α i is the activity of node i , defined as the scaled, absolute differential expression:
α i = | μ MT,i - μ WT,i | / sqrt( σ 2 MT,i + σ 2 WT,i )
where μ MT,i is the average expression of a gene, i , across the samples in the mutant, MT (in our case, Apc 1638N +/- ), and σ 2 MT,i is the associated variance; these parameters are defined analogously for the wild-type ( WT ) samples. The active coexpression matrix, R' ( P , D ), between a given petal, P , and the 2D-DIGE targets, D , is then vectorized, vec ( R' ( P , D )). The distribution of vec ( R' ( P , D )) is expected to be leptokurtic (i.e. higher peak, fatter tails), as it is a product of a normal and a folded normal distribution (see Figure 5A ). With coexpression measurements, we are particularly interested in the tails of the distribution, as these are expected to exhibit two modes - one positive and one negative - if subgroups of coexpressed 2D-DIGE targets exist. Thus, we developed a measure of bimodality, β :
β P = l x <0 m x <0 + l x ≥0 m x ≥0
where F P,D is the empirical cumulative distribution function (CDF) for vec ( R' ( P , D )) over the range of active coexpression values, x ; F P,S is the empirical CDF for vec ( R' ( P , S )), i.e. the expected active coexpression to all genes on the array; the sample deviation, Δ F P ( x ) = F P,D ( x ) - F P,S ( x ), is simply the difference of the two CDFs; and m x <0 = ∫ x <0 Δ F P ( x ) dx is the mass of the deviation over the negative values. l x <0 is the moment arm of the distribution, defined classically as:
l x <0 = ( ∫ x <0 x Δ F P ( x ) dx ) / ( ∫ x <0 Δ F P ( x ) dx )
And l x ≥0 (and m x ≥0 ) are defined similarly. Thus, l x <0 and l x ≥0 represent the centers of mass for the negative and positive active coexpression values' deviation from the expected distribution (Figure 5B ). The bimodality, β P , then, is simply the torque of the distribution, Δ F P ( x ), around the center: negative values of β P indicate a clockwise skewing of the tails, with greater mass distributed at extreme (high and low) values of r' than the background; positive values of β P indicate a counterclockwise skew, where the sample distribution is more leptokurtic than the background, and, hence, possesses less correlation than expected. Further insight can be gained by noting that the denominator of the center of mass, l x , cancels out, leaving:
β P = ∫ x Δ F P ( x ) dx
Changing the order of integration allows us to formulate β P in terms of the probability density functions (PDFs) of our targets, f P,D ( x ), and the background, f P,S ( x ):
β P = ( 1/2 ) [ E f P,S ( x 2 ) - E f P,D ( x 2 ) ]
where E (·) indicates the expectation. Thus, we see that β P is, up to a constant factor, the difference between the second moments of the two distributions (or the difference of their variances, if both distributions are centered at zero).
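The active-coexpression and bimodality computations can be sketched numerically as follows. This is an illustrative sketch, not the authors' code: the activity scaling |μ MT - μ WT| / sqrt(σ² MT + σ² WT) and the value-resampling permutation used for the p-value are assumptions consistent with, but not verbatim from, the definitions above.

```python
import numpy as np

def activity(expr_wt, expr_mt):
    """alpha_i per node: one plausible scaled absolute differential
    expression, |mu_MT - mu_WT| / sqrt(var_MT + var_WT)."""
    num = np.abs(expr_mt.mean(1) - expr_wt.mean(1))
    return num / np.sqrt(expr_mt.var(1, ddof=1) + expr_wt.var(1, ddof=1))

def active_coexpression(petal_wt, petal_mt, target_expr):
    """R'(P, D): Pearson coexpression between petal nodes and targets,
    row-scaled by each petal node's activity. Arrays are (genes x samples);
    target_expr must span the same pooled WT + MT samples."""
    pooled = np.hstack([petal_wt, petal_mt])
    n = pooled.shape[0]
    r = np.corrcoef(pooled, target_expr)[:n, n:]
    return activity(petal_wt, petal_mt)[:, None] * r

def bimodality(sample_vals, background_vals):
    """beta ~ E_background(x^2) - E_sample(x^2): negative when the sample's
    active coexpression has heavier tails than the background."""
    s, b = np.asarray(sample_vals), np.asarray(background_vals)
    return float((b ** 2).mean() - (s ** 2).mean())

def bimodality_pvalue(sample_vals, background_vals, n_perm=10000, seed=0):
    """Permutation p-value: chance of an equally extreme |beta| when the
    target set is drawn at random from the background values."""
    rng = np.random.default_rng(seed)
    beta_obs = abs(bimodality(sample_vals, background_vals))
    b = np.asarray(background_vals)
    draws = rng.choice(b, size=(n_perm, len(sample_vals)), replace=True)
    betas = (b ** 2).mean() - (draws ** 2).mean(axis=1)
    return float((np.abs(betas) >= beta_obs).mean())
```

Constant factors (such as the 1/2 from the torque integral) do not affect petal ranking or the permutation p-value, so they are omitted here.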
While this ultimate formulation of β P is statistically simple, we present the initial formulation - in terms of the center of mass and torque - to provide an intuitive understanding of its motivation and meaning. As mentioned, we use the empirical CDF/PDF to calculate β P . We calculated the significance, p , of β P for a network-petal, P , as follows:
p = Pr ( | β rand | ≥ | β P | )
with β rand being the bimodality for a randomly selected set of candidate 2D-DIGE targets; 10000 such sets (of cardinality equal to that of P ) were generated. The null hypothesis is thus that the coexpression pattern between the network-petal and the proteomic targets is random, and the p-value is the probability of attaining a value of at least | β P | via stochastic generation of 2D-DIGE targets.

Results and Discussion
In this paper, we present a method to capture the likely signaling pathways of a cancer driver gene, focusing on the signaling related to Apc as an example. The initial set of pathway predictions are mined from protein-protein interaction networks, coupled to mRNA coexpression data and Gene Ontology association rules. We refer to this data-mining process as the Blossom Algorithm (Figure 1 top), as it produces a network connecting a driver gene (e.g. Apc ) to a set of putative signaling partners, referred to as the Apc blossom (Figure 1 center). The Apc blossom is then pruned using biological evidence (microarray and proteomic data) to identify a candidate petal, or subnetwork, most likely to be involved in Apc signaling (Figure 1 bottom).
The Apc Blossom: A PETALS Network
To study CAN-gene pathways operative in the Apc 1638N +/- mouse model, we used the Blossom algorithm to identify pathways connecting Apc to 68 other CAN-genes [ 1 , 12 ]. In summary, the Blossom Algorithm mines publicly available protein-protein interaction networks to uncover paths - i.e. chains of proteins - likely to be "functional." As evidence of a path's functionality, we use mRNA coexpression and Gene Ontology association rules. As our current knowledge of molecular networks is incomplete [ 16 ], we use sequence homology to infer these missing data. The details of the Blossom algorithm follow below (see Methods for additional details; refer to Figure 1 in [ 17 ] for a diagram). First, likely false positives from the underlying PPI network are filtered out. Next, using this filtered PPI network, we were able to find paths linking Apc to 42 of the CAN-genes, forming subnetworks, which we refer to as petals . After imputing interaction edges using sequence homology [ 11 ], this number was increased to 65. However, after filtering out paths whose ( i ) average mRNA coexpression was low (| r | < 0.6, a significance threshold validated in similar studies [ 11 , 17 ]) and ( ii ) support of GO annotation association rules based on known signaling pathways and functional annotations [ 11 ] was weak ( p - value > 0.05), the number of Apc -CAN-gene petals was reduced to 24 (Figure 2 ). The petals identified vary in the number of nodes (from 3 - 35) and edges (from 2 - 80) they contain, with some nodes being shared among multiple petals.
A blossom can be constructed for a wide variety of genes, with the stipulation that corresponding microarray data is available. In our case study of Apc , we employ mRNA expression data from intestinal tumors harvested from Apc Min /+ mice. As multiple mutations are present in these samples, coexpression measurements calculated for this dataset are representative of the tumor microenvironment; as such, both Apc signaling, as well as additional CAN-gene signaling, are likely to be active simultaneously. While the presence of these multiple, active pathways increases the signal associated with cross-talk within each petal, it does not allow us to determine which pathways are most strongly associated with Apc signaling alone. To answer this question, as outlined in the next section, we used mice with a particular heterozygous mutation in Apc - 1638N - that results in a mild intestinal cancer phenotype [ 18 ], thereby minimizing the noise arising from the many pathways activated in a full-blown tumor. Since we are interested in assessing the systems-level effects of such mutations, we focus on measuring the downstream effects of these genes via 'omic experiments.
Plucking Petals: Testing the Bimodality of Coexpression
The Apc 1638N +/- mouse model represents a perturbation of the stamen (the center node) in the Apc blossom, and such a perturbation is expected to have far-reaching molecular effects. This was supported by the 2D-DIGE proteomic experiments that identified 31 proteins with a variety of cellular functions from the intestinal epithelium of Apc 1638N +/- mice compared to wild-type. We hypothesized that if one of the petals in the blossom truly reveals signaling associated with this mutation of Apc , then the nodes of this petal are more likely to associate with the 2D-DIGE targets than a random group of proteins. To gauge this association, we used a map of coexpression compiled from the corresponding Apc 1638N +/- intestinal epithelium mRNA-expression profile. Assuming that the signaling molecules in a petal are upstream of the 2D-DIGE targets, strong coexpression between a petal and the 2D-DIGE targets can help to identify the causative signaling events that led to these measured changes in abundance of the proteome. Since coexpression is most informative when it relates to differentially expressed nodes (i.e. those that differ between the mutant and wild-type mice), we modulated the coexpression values associated with the nodes in each petal by their respective levels of differential expression. This allows for the identification of nodes where any individual node may have a low level of expression, but the collective level of expression across nodes may be high. We further posited that, if a group of proteins truly is coregulated, then we expect to see deviations in the tails of the coexpression distribution when compared to the expected (background) distribution. To gauge this deviation, we introduced the bimodality, β , of coexpression: a measure based on the mass (i.e. area under the curve) of the cumulative distribution functions' deviation and the distance of that mass from the origin.
This allowed us to prioritize the petals by their respective p -values and the top three petals are shown in Figure 3 (See Additional File 1 Table 1 for the complete list). In Figure 4 the 31 2D-DIGE targets are shown on the periphery of the petal, ranked by their degree (i.e. sum) of absolute coexpression. This representation also facilitates the prioritization of 2D-DIGE targets, placing emphasis on those targets whose regulation is supported by multiple elements of the candidate petal. Much of the coregulation can be explained by a few key signaling intermediates - notably, TGFB1 , which has both a high level of differential expression, as well as strong coexpression links. Signaling molecules like TGFB1 are hypothesized to lie upstream of 'omics measurements, and, thus, the petal at the heart of Figure 4 represents a potential set of intermediaries by which the signal arising from a mutation in Apc blossoms into proteome-level manifestations (i.e. the 2D-DIGE targets).

Conclusions
To understand how a mutation affects information flow in a tumor, one must consider both the proximal and distal signaling effects. Proximally, a mutation in a gene may result in a truncated protein product that affects physical interactions, or it may result in a hyperphosphorylated and active state. These small, upstream effects are then amplified and result in distal changes in signaling, affecting mRNA and protein levels of tens to hundreds of seemingly unrelated nodes. While the field of cell signaling is adept at dissecting the proximal effects of a mutation - mechanistically mapping out perturbed pathways - it has not yet developed the tools to fully understand the distal effects and, more importantly, their connection with more proximal signaling. Indeed, currently available commercial software for network analysis can only associate these distal effects amongst themselves, with no regard to the upstream causative mutation. In this study, we present a method by which the distal effects measured in two 'omics experiments - microarray and proteomics - can be simultaneously leveraged to test network-based hypotheses. After testing the hypotheses (petals) against proteomic evidence, the refined petal subnetworks we present not only reveal the relationship between upstream genetic interference and its downstream effects at the proteomics level, but also allow us to prioritize other cancer-driver genes that are likely to act cooperatively with Apc to drive tumorigenesis. This new approach - linking in silico predictions with experimental measurements - provides a way forward in mining context-specific pathways that may prove to be useful in identifying pathways active in individual cancer patients.

Background
Colon cancer is driven by mutations in a number of genes, the most notorious of which is Apc . Though much of Apc 's signaling has been mechanistically identified over the years, it is not always clear which functions or interactions are operative in a particular tumor. This is confounded by the presence of mutations in a number of other putative cancer driver (CAN) genes, which often synergize with mutations in Apc .
Computational methods are, thus, required to predict which pathways are likely to be operative when a particular mutation in Apc is observed.
Results
We developed a pipeline, PETALS, to predict and test likely signaling pathways connecting Apc to other CAN-genes, where the interaction network originating at Apc is defined as a "blossom," with each Apc -CAN-gene subnetwork referred to as a "petal." Known and predicted protein interactions are used to identify an Apc blossom with 24 petals. Then, using a novel measure of bimodality, the coexpression of each petal is evaluated against proteomic (2-D Differential In-Gel Electrophoresis, 2D-DIGE) measurements from the Apc 1638N +/- mouse to test the network-based hypotheses.
Conclusions
The predicted pathways linking Apc and Hapln1 exhibited the highest amount of bimodal coexpression with the proteomic targets, prioritizing the Apc-Hapln1 petal over other CAN-gene pairs and suggesting that this petal may be involved in regulating the observed proteome-level effects. These results not only demonstrate how functional 'omics data can be employed to test in silico predictions of CAN-gene pathways, but also reveal an approach to integrate models of upstream genetic interference with measured, downstream effects.

Authors' contributions
GB and VP designed and carried out the experiments and drafted the manuscript. GB and VP contributed equally to this article. MRC supervised the study. All authors read and approved the final manuscript.
Supplementary Material

Acknowledgements
The authors acknowledge support from National Institutes of Health grants R25T-CA094186, P30-CA043703 and UL1-RR024989. We are grateful to Dr. Mehmet Koyutürk for critically reviewing this manuscript and for his insightful advice.

License: CC BY. Citation: BMC Bioinformatics. 2010 Dec 13; 11:596.
PMC3016411 (PMID: 21246050)

Introduction
In their niche, cells are presented with an array of complex biophysical and biochemical signals from the surrounding extracellular matrix (ECM) [1] , [2] , [3] . The Young's modulus, E , often referred to in a biological context simply as elasticity or stiffness, is an intrinsic ECM characteristic that has a profound effect on cell spreading, morphology, and function [4] , [5] , [6] , [7] . In particular, stem cells show lineage-specific differentiation when cultured on substrates matching the stiffness corresponding to native tissue; neural stem cells become either neural or glial lineages depending on matrix elasticity [8] , pre-osteoblasts most efficiently form calcified deposits when cultured on optimally stiff substrates [9] , and multipotent mesenchymal stem cells (MSCs) [10] become neurogenic, myogenic, and osteogenic when cultured on substrates mimicking neural, muscle, and bone stiffness environments, respectively [11] , [12] , by regulating their cell tension [11] , [13] . However these studies utilize polymer systems that have static parameters while their native counterparts reside in a dynamic environment in which elasticity may change spatially and/or temporally. For example, epicardial stiffness increases approximately 3-fold during development [14] while myocardium post-infarction forms a fibrotic scar that is 3- to 4-fold more stiff than surrounding muscle [15] . Elasticity also varies naturally at interfaces, e.g. hard, calcified bones are connected to soft cartilage [11] , [16] . As MSCs egress from bone marrow and home to these interfaces or migrate through tissue [17] , they may encounter such stiffness gradient(s), and it is not clear whether the MSC response to these stimuli is to remain in place and differentiate, as with static materials [11] , [12] , or migrate in response to the stiffness gradient as with fibroblasts [18] .
Several methods have been developed to create in vitro elasticity gradients, starting with the polymerization of adjacent solutions of differing polymer concentrations to obtain a gradient at the solution interface [18] . More complex methods have employed microfluidic devices [19] or photolithographically-patterned photoactivated initiators [19] , [20] , [21] , [22] to generate monomer and/or crosslinking density gradients. A hallmark of these studies is the observation that most somatic cells, e.g. fibroblasts, endothelial cells, and vascular smooth muscle cells [18] , [19] , [20] , [21] , [23] , migrate in response to stiffness gradients in a process called “durotaxis,” with specific exceptions for cells originating from highly stratified structures [22] . However gradient strength, i.e. the degree of stiffness change per length, for these studies is typically in a pathological rather than physiological range [15] . A notable exception has shown that somatic cell migration is dependent on gradient strength, though the shallowest gradient – 10 kPa/mm – was still within a pathological range [24] . While some somatic cells may durotax in physiological gradients [20] , each mature cell type exhibits lineage-specific behavior within a physiologically relevant stiffness range [4] , [6] , [25] .
On the other hand, undifferentiated MSCs lack such a preference and are in fact programmed by these surroundings [11] , [12] , [13] . Since much of their migration is likely to occur through tissue with physiological rather than pathological gradient(s) before reaching the site in need of regeneration, perhaps a more fundamental question is whether they durotax when presented with a physiological stiffness gradient <1 kPa/mm in the absence of other stimuli, e.g. soluble growth factor gradients which could induce chemotaxis. To better understand the role this potential signal could play in MSC fate, we cultured MSCs on a photopolymerized polyacrylamide (PA) hydrogel of varying stiffness and provide the first evidence that MSCs indeed appear to undergo durotaxis rather than remain stationary. Morphological and lineage marker assessment indicates that MSCs, even within shallow durotactic gradients, migrate to stiffer matrix and then differentiate into a more contractile cell, though this behavior is complicated by some degree of ‘memory’ of the previously soft environment from which they migrated. | Materials and Methods
Cell culture
Low passage number human MSCs (Lonza, Inc., Switzerland) were subconfluently cultured at 37°C, 5% CO 2 in low glucose Dulbecco's Modified Eagle Medium supplemented with penicillin, streptomycin and 20% fetal bovine serum (Hyclone; Logan, UT). Cells were plated onto hydrogels at 250 cells/cm 2 , except for comparisons with high cell density experiments, where cells were plated at 1000 cells/cm 2 . Media change was performed every 4 days. To inhibit proliferation, the MSCs were treated with mitomycin C at 10 μg/mL for 3 hrs and rinsed three times with media before plating. The murine myoblast cell line C2C12 (ATCC) was cultured as a positive control in its normal growth media: 78% High Glucose DMEM + 20% FBS + 1% Chicken Embryo Extract + 1% Penicillin/Streptomycin. C2C12 cells were maintained in their undifferentiated myoblast state and were not chemically induced to differentiate. All cell culture reagents and chemicals were obtained from Invitrogen (Carlsbad, CA) and Sigma-Aldrich (St. Louis, MO), respectively, unless otherwise noted.
Preparation and Functionalization of Polyacrylamide (PA) Substrates
Polyacrylamide substrates with a uniform elasticity were prepared according to a previously established protocol by Pelham and Wang [5] . Briefly, solutions of varying acrylamide and bis-acrylamide concentrations were polymerized by ammonium persulfate (10% w/v; 1/100 v/v) and tetramethylethylenediamine (1/1000 v/v; Bio-Rad; Hercules, CA). The hydrogels were cast between a glass coverslip activated with 3-aminopropyltrimethoxysilane and a glass slide activated with dichlorodimethylsilane.
The polyacrylamide substrates with a gradient elasticity were prepared according to a previously established protocol by Tse and Engler [26] . Solutions of 10% w/v acrylamide, 0.3% bis-acrylamide were polymerized with a free radical photoinitiator, 0.5% Irgacure 2959 (1-[4-(2-Hydroxyethoxy)-phenyl]-2-hydroxy-2-methyl-propan-1-one; Ciba, Tri-Iso, CA), under a 254 nm UV light source through a photomask. The photomask was created using the gradient tool in Photoshop and printed at 1200 dpi on nitrocellulose film. Hydrogels were activated with a heterobifunctional crosslinker, N -Sulfosuccinimidyl-6-(4′-azido-2′-nitrophenylamino) hexanoate (Sulfo-SANPAH) (Pierce; Rockfield, IL), in a two-step reaction performed in non-amine-containing HEPES buffer at pH 8.5. First, the nitrophenyl azide portion of the Sulfo-SANPAH was covalently bonded to amine groups within the polyacrylamide surface upon activation with 365 nm UV light, outcompeting NHS groups for amines due to its promiscuity in binding. After extensive washing with HEPES, 0.10 mg/mL type I collagen (BD Biosciences; San Jose, CA) in pH 8.5 HEPES buffer was incubated overnight at 37°C to allow the NHS groups to bind the collagen. To assess the uniformity of the type I collagen coating, functionalized hydrogels were stained with monoclonal anti-type I collagen IgG (Sigma-Aldrich) and Alexa Fluor 546-conjugated secondary antibodies, or coated with 1 μm diameter Fluoresbrite carboxylate beads (Polysciences; Warrington, PA). A bicinchoninic acid (BCA) assay (Bio-Rad) was also performed to measure total protein conjugated to the entire surface of each hydrogel.
Characterization of Polyacrylamide Hydrogels
AFM was used to measure the elastic modulus of both static and gradient hydrogels at the nano-scale. Photopolymerized hydrogels were allowed to swell in water overnight before their mechanical properties were tested by atomic force microscopy. Samples were placed on an Asylum 3D-BIO AFM (Asylum Research; Santa Barbara, CA) and probed with a pyramid-tipped cantilever (Olympus; Japan) having a nominal spring constant of ∼20 pN/nm as determined from thermal calibration. Samples were indented by the probe to yield force-indentation curves from which the elastic modulus, E , or stiffness was obtained using a Hertz cone model [42] , [43] , [44] , fit up to 2 μm indentation. Samples were indented hundreds of times in a random pattern for static hydrogels and at known locations in a radial pattern for gradient hydrogels using an XY-piezoelectric motor-controlled stage to determine the rate of increase in modulus. To confirm a uniform coating of collagen I, stained samples were examined by a CARV II confocal microscope (BD Biosciences; San Jose, CA) mounted on a Nikon Eclipse TE2000-U microscope with a motorized, programmable stage using a CoolSnap HQ camera controlled by Metamorph 7.6 software. Image J software was used to quantify the relative fluorescent intensity of the attached type I collagen as a function of elasticity.
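For the cone geometry, the Hertz model gives F = (2/π)·tan(α)·E/(1−ν²)·δ², so E can be extracted by a linear least-squares fit of force against indentation squared. The sketch below illustrates this on synthetic data; the tip half-angle and Poisson ratio are illustrative assumptions (neither value is stated in the text):

```python
import math

def hertz_cone_modulus(indentations_m, forces_N, half_angle_deg=35.0, poisson=0.5):
    """Fit E (Pa) from a force-indentation curve using the Hertz cone model:
    F = (2/pi) * tan(alpha) * E / (1 - nu^2) * delta^2.
    Linear least squares of F on delta^2: slope = sum(F*d2) / sum(d2^2)."""
    d2 = [d * d for d in indentations_m]
    slope = sum(f * x for f, x in zip(forces_N, d2)) / sum(x * x for x in d2)
    alpha = math.radians(half_angle_deg)
    return slope * math.pi * (1.0 - poisson ** 2) / (2.0 * math.tan(alpha))

# Synthetic curve for a hypothetical 10 kPa gel, indented up to 2 um as in the text
E_true = 10e3
alpha = math.radians(35.0)
deltas = [i * 0.1e-6 for i in range(1, 21)]           # 0.1 to 2.0 um
forces = [(2 / math.pi) * math.tan(alpha) * E_true / (1 - 0.25) * d ** 2 for d in deltas]
E_fit = hertz_cone_modulus(deltas, forces)            # recovers ~10 kPa
```

Real AFM curves would first require contact-point identification and baseline subtraction, which the Asylum software handles and which are omitted here.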
Cell proliferation, viability, and assessment of Durotaxis
Overall cell distributions were determined for durotactic studies by assessing the spatial distribution of Hoechst 33342-stained nuclei using Image J software. 10 μM 5-bromodeoxyuridine (BrdU) was added to cell culture medium 12 hrs prior to fixation. Cells were washed once with PBS and then fixed in a solution of 3.7% formaldehyde in PBS for 15 minutes. Cells were then permeabilized with 0.1% Triton-X 100 (EMD Chemicals; San Diego, CA) for 15 minutes, treated with 1 M HCl for 30 minutes at room temperature, 2 M HCl for 30 minutes at 37°C, and blocked with 2% bovine serum albumin in PBS for 60 minutes prior to staining with Alexa Fluor 647-conjugated mouse monoclonal anti-BrdU antibody overnight at room temperature followed by Hoechst 33342 (1∶10000) for 10 minutes at room temperature. To assess cell viability, the cells were rinsed with PBS and stained with 0.25 μL calcein acetoxymethyl ester and 0.50 μL ethidium homodimer-1 in PBS for 30 minutes at 37°C.
Lineage Specification Assays
For lineage specific proteins, cells were instead blocked with 2% ovalbumin in PBS and then stained using rhodamine phalloidin, Hoechst 33342, mouse monoclonal anti-MyoD IgG (Santa Cruz Biotechnology), and/or rabbit polyclonal anti-β3 tubulin (Sigma). Antibody detection was performed with Alexa Fluor 488, 546, and 647-conjugated secondary antibodies. All samples were examined by a CARV II confocal microscope (BD Biosciences; San Jose, CA) mounted on a Nikon Eclipse TE2000-U microscope with a motorized, programmable stage using a CoolSnap HQ camera controlled by Metamorph 7.6 software. Image J software was used to determine spindle factor [11] , i.e. the length of the cell's major axis divided by its minor axis. Staining intensity of MyoD was also assessed in Image J by thresholding the Hoechst 33342-stained nucleus image and using it as a mask on the transcription factor image to determine the integrated nuclear staining intensity. For β3 tubulin, a thresholded rhodamine-phalloidin image was used as a mask. Intensity was normalized to the negative control, i.e. static hydrogels of the stiffness at which the lineage was not induced, e.g. MyoD on 1 kPa. To aid in image presentation, image intensity for Fig. 5D was enhanced 2-fold in Image J, though original image intensity was used for quantification in Fig. 5E . With each measurement, n>100 cells from triplicate experiments.
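The threshold-and-mask quantification described above (threshold the Hoechst channel, integrate marker intensity inside the resulting mask, then normalize to the negative control) can be sketched as follows; the pixel values and the control intensity are hypothetical:

```python
def integrated_nuclear_intensity(nucleus_img, marker_img, threshold):
    """Threshold the Hoechst (nucleus) channel into a binary mask and sum the
    marker-channel intensity over pixels inside that mask."""
    total = 0
    for nrow, mrow in zip(nucleus_img, marker_img):
        for n, m in zip(nrow, mrow):
            if n > threshold:
                total += m
    return total

# Hypothetical 3x3 images: the nucleus occupies only the center pixel
nucleus = [[0, 0, 0], [0, 200, 0], [0, 0, 0]]
myod    = [[5, 5, 5], [5, 80, 5], [5, 5, 5]]
signal = integrated_nuclear_intensity(nucleus, myod, threshold=100)

# Normalize to a negative control, e.g. MyoD on a 1 kPa static hydrogel
control = 40
fold = signal / control
```

In ImageJ the equivalent operation is a thresholded "Create Mask" followed by an integrated-density measurement restricted to that mask.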
Statistical Analyses
All statistical analyses were performed using Origin 8.0 (Origin Lab, Northampton, MA). Differences among three or more treatments were assessed by ANOVA with Tukey's post hoc analysis, with p<0.05 considered statistically significant. Differences between two treatments, as in Figs. 2B and S2 , were assessed by Student's t-test, again with p<0.05 considered statistically significant. All data are presented as mean ± standard error with each data point's x-value representing the average modulus or position for that image. Given the image width, each point is ±0.4 mm or 0.4 kPa, though error bars have been omitted for clarity of data presentation.
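As an illustration of the omnibus test used for three or more treatments, a minimal one-way ANOVA F statistic is sketched below on made-up data. Tukey's post hoc comparison additionally requires studentized-range critical values, which Origin (or statsmodels) supplies and which are omitted here:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n_total - k
    return (ss_between / df_b) / (ss_within / df_w)

# Three hypothetical treatment groups
F = one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [6.0, 7.0, 8.0]])
```

The p-value is then obtained by comparing F against the F(k−1, N−k) distribution.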
Surface Characterization of Gradient Hydrogels
A photomask with a radial grayscale pattern was used to create a crosslinking gradient in a 10% acrylamide/0.3% bis-acrylamide hydrogel via selective activation of the photoinitiator Irgacure 2959 ( Fig. 1A [26] ). The elastic modulus with respect to distance from the edge to the center of the hydrogels was measured by atomic force microscopy (AFM) and found to range from 1 to 14 kPa ( Fig. 1B ), corresponding to a gradient strength of 1.0±0.1 kPa/mm. Such a gradient is within the physiological range of natural cardiac tissue variations, e.g. 0.6±0.9 kPa/mm, and considerably less than the pathophysiological range of infarcted cardiac tissue, e.g. 8.7±1.5 kPa/mm, as previously measured [15] . To permit cell attachment, both gradient and static hydrogels were covalently functionalized with type I collagen using Sulfo-SANPAH, which showed relatively uniform attachment via antibody staining when observed in the XZ cross-section by confocal microscopy ( Fig. 1C ). Quantitative comparison of fluorescent intensity along the surface of the hydrogel indicated that any intensity variations were not statistically different (p = 0.87). Micron-sized antibody-bound beads were substituted for secondary antibodies to ensure that protein was surface accessible across the stiffness gradient and between static hydrogels of different stiffness or those with similar stiffness but different bulk polymer concentration ( Fig. S1 ). Biochemical assessment of protein concentration ( Fig. S1B inset) also demonstrated uniform bulk functionalization.
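The reported gradient strength is simply the slope of elastic modulus versus radial position across the AFM indentation map. A minimal ordinary least-squares sketch on hypothetical measurements spanning the reported 1–14 kPa range:

```python
def slope_kpa_per_mm(positions_mm, moduli_kpa):
    """Ordinary least-squares slope of modulus vs. position, i.e. the
    gradient strength in kPa/mm."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    my = sum(moduli_kpa) / n
    num = sum((x - mx) * (y - my) for x, y in zip(positions_mm, moduli_kpa))
    den = sum((x - mx) ** 2 for x in positions_mm)
    return num / den

# Hypothetical radial AFM data: 1 kPa at the hydrogel edge rising linearly
# to 14 kPa toward the center
pos = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 13.0]   # mm
E = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 14.0]     # kPa
g = slope_kpa_per_mm(pos, E)                        # ~1.0 kPa/mm
```

The ±0.1 kPa/mm uncertainty would come from the standard error of this slope across replicate maps.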
MSCs Durotax to Stiffer Regions of the Gradient
Human mesenchymal stem cells (MSCs) were cultured on the collagen I-coated gradient hydrogels to determine whether MSCs would undergo directed migration on the hydrogels or differentiate in place. Cells were seeded at a low density (250 cells/cm 2 ) to minimize cell-cell adhesion and traction forces transmitted to adjacent cells, and initially had a uniform spatial distribution, e.g. at 12 hours. After 4 and 7 days of culture, the cells' spatial distribution showed a 2-fold increase between the stiffest and softest regions of the hydrogel ( Fig. 2A ). Though nearly all cells remained viable as observed from calcein AM staining on soft and stiff static hydrogels (not shown), cell density ( Fig. 2A inset) and proliferation rates–assessed by the percent of BrdU positive cells ( Fig. 2B )–differed after 4 to 7 days in culture, which may explain why cells reached local confluency at the stiffest regions of the gradient. To prevent proliferation and observe only durotaxis, MSCs were pretreated with mitomycin C, a potent DNA crosslinker that prevents cell division, and allowed to migrate for up to 21 days. Again, MSC spatial distribution was biased towards the stiffest regions of the hydrogel after an initial uniform distribution when plated at low density ( Fig. 3A ). By 21 days, the center of the hydrogel became locally confluent ( Fig. 3A , right), but given the mitomycin C treatment, this was created by all cells undergoing directed migration to the stiffest region of the hydrogel ( Fig. 3B ). Durotaxis was also observed in mitomycin C-treated MSCs plated at higher densities, i.e. 1000 cells/cm 2 [11] ( Fig. 3C ), and again a loss of cells at the softest regions and an accumulation of cells at the stiffest regions was observed ( Fig. 3C inset).
Differentiation and Lineage Plasticity on Durotactic Gradients
MSCs on static 11 kPa hydrogels adopt spindle-shaped morphology by 4 days in culture ( Fig. 4A ), characteristic of C2C12 myoblasts [11] , and subsequently express MyoD, a myogenic regulatory factor (see Fig. 5C , inset). MSCs on gradient gels are less polarized and randomly distributed initially but become spindle-shaped in a spatially-dependent manner after 4 days in culture ( Fig. 4B, C ). Spindle factor does not change between days 4 and 7 despite the accumulation of cells at the hydrogel's center, thus cell morphology may only reflect local absolute hydrogel stiffness as with smooth muscle cells [24] . Conversely, MSCs on gradients change durotactic speed the most over this time frame: the rate of change in MSC spatial distribution with respect to time peaks at 4 days in culture ( Fig. S2 ). So to better examine cell fate, shifts in MSC lineage marker expression were monitored over time. For gradient hydrogels, some cells on the stiffest regions began to express MyoD between days 1 and 7 ( Fig. 5A ) with a nuclear localized staining pattern similar to C2C12 myoblasts ( Fig. 5B ). Unlike static hydrogels where MSCs begin to increase MyoD expression at day 4 ( Fig. 5C inset), spatially-dependent MyoD expression increases in MSCs on gradient hydrogels only by day 7 ( Fig. 5C ). Together these data suggest greater durotaxis before expressing MyoD when a stiffness gradient is present.
While a myogenic phenotype is likely for cells that are always on stiffer regions of the hydrogel [11] , [12] , the vast majority of cells first durotax, and it is not certain if those cells display ‘memory’ of the soft region via continued expression of neural markers, e.g. β3 tubulin [11] . At day 7, 1 and 11 kPa static hydrogels show β3 tubulin and MyoD-positive MSCs, respectively, and cells remaining on soft regions of gradient hydrogels expressed β3 tubulin ( Fig. 5D , open arrowheads). However, MSCs on the stiffer regions of the gradient displayed a mixed phenotype consisting of cells positive for MyoD alone (open arrowheads) and those also expressing low amounts of β3 tubulin (filled arrowheads). When β3 tubulin and MyoD fluorescent intensities were quantified and normalized to the non-permissive static hydrogel, i.e. 11 and 1 kPa hydrogels respectively, MSCs on stiffer regions had on average a 3-fold higher β3 tubulin fluorescent intensity versus the control static hydrogel. On the other hand, MSCs on softer regions had less than a 50% difference in MyoD fluorescence versus the control static hydrogel ( Fig. 5E ). Closer inspection of the distribution of β3 tubulin intensity indicates a degree of bimodality ( Fig. S3 ), suggesting the existence of two cell populations. | Discussion
The in vivo niche for MSCs is a complex array of biophysical and biochemical signals [1] , [2] , [3] containing numerous signaling gradients created by injury to which MSCs home [17] . As MSCs traverse through normal tissue, they must encounter physiological gradients, including stiffness [15] . Stiffness-induced differentiation is becoming well appreciated (see [4] , [7] , [27] ), and when micropatterned, supra-physiological but spatially-controlled stiffness has been shown to regulate MSC position but not exclusively migration [28] . However it is not certain if or to what degree MSCs respond to shallow physiological stiffness gradients, i.e. 0.6±0.9 kPa/mm [15] . Here we presented MSCs with physiological stiffness gradients and demonstrated that an entire population will preferentially accumulate on stiffer hydrogel regions, regardless of cell seeding density. Concurrently, during the fastest period of migration, i.e. day 4, cells displayed spatially-dependent morphology but did not show spatially-dependent changes in myogenic lineage marker expression as on static hydrogels. By day 7 however, MSCs showed spatially-dependent myogenic lineage marker expression despite residual expression of a neural fate in a subset of cells which may have first undergone directed migration.
The observation that a ‘differentiation hierarchy' may exist, i.e. that there is greater durotaxis before MyoD expression, supports the idea that MSCs may be able to home to injury sites using other mechanisms in addition to haptotactic [29] and chemotactic gradients [30] , though an insoluble stiffness gradient is not likely to drive initial MSC egression from marrow. Once within the periphery of a tissue however, migration due to both chemo- and duro-tactic gradients is likely to be cooperative when the disease induces localized stiffening as with myocardial infarction [15] . On the other hand, opposing gradients are unlikely in vivo , but MSC plasticity has been previously demonstrated using stiffness and growth factor cues of opposing lineages: MSCs on 1 kPa hydrogels challenged with either muscle or osteo-inductive media displayed a mixed phenotype after 1 week but were unaffected by the inductive media after 3 weeks [11] . Here we have tested plasticity using essentially two different stiffness values in the MSCs that migrated from soft to stiff regions of the gradient. We observed that markers characteristic of both myogenic and neurogenic lineages were expressed in a subset of the overall population, leading to a bimodal distribution in β3 tubulin intensity. Should these doubly positive cells represent the durotactic fraction of the population, it would imply a degree of cell ‘memory.' Though single cell migration and phenotype tracking using multiple fluorescently-labeled lineage marker proteins is perhaps ideal, especially considering the possibility of MSC ‘memory,' these data at least suggest that MSCs can remain plastic and express differentiation program(s) triggered by the stiffness of a region in which they previously resided.
As with smooth muscle cells [24] , these data show that MSC migration is independent of local hydrogel stiffness, i.e. regardless of where the cell is within the gradient, it continues to migrate towards the stiffer substrate. However, MSC fate is directly affected by local hydrogel stiffness and gradient range, e.g. 1–14 kPa. This range over which cells migrated is not likely to be physiological, i.e. the stiffness of healthy muscle only varies approximately between 8 and 15 kPa [4] , [7] , [27] . Moreover, multi-lineage MSCs in vivo do not occur as such large ranges within a tissue are unlikely; therefore the plasticity observed here may not be likely in vivo . On the other hand, in vivo gradient strength can at least range between 0.6 and 8.7 kPa/mm [15] , and since we show here that MSC fate can be regulated even by a shallow gradient, it raises the question of whether MSC fate can be regulated by gradient strength. While many of the questions above can be investigated using this current gradient technique, i.e. increasing overall stiffness or adjusting the gradient by increasing UV cure time or changing the photomask gradient, respectively, it is important to note this method's limited stiffness range and gradient strength [26] . Microfluidic approaches to create gradients can provide a wider stiffness range, and gradient strength can be precisely tuned by microfluidic geometry [19] , [21] . Regardless of the device, physiologically-appropriate gradient strength [15] and stiffness range is necessary [4] , [7] , [27] . Yet to more completely mimic pathological conditions, it may be appropriate to have a composite gradient that changes from physiological to pathological gradients, e.g. 0.6 to 8.5 kPa/mm, as does heart muscle post-myocardial infarction [15] .
Two other critical aspects not accounted for in this gradient system are in vivo ECM structure and dimensionality. Matrix is naturally a fibrillar structure [1] , [3] whereas the hydrogel is not. Natural ECM's alignment can significantly increase matrix stiffness anisotropically, i.e. create a 1D gradient, relative to one that is not organized or is not fibrillar [31] . Transglutaminases also stiffen matrix via crosslinking without significant increase in ligand density [32] , but simple gradient increases in ligand density can also result in stiffness gradients [15] . While the specific mechanism in vivo is not certain, the 2D hydrogel here can sufficiently decouple these effects and illustrate the importance of durotactic considerations in therapies. This 2D system also may have significant predictive power for 3D behavior; within physiological ranges, 3D computational models and fibrillar collagen gels have illustrated durotactic increases coupled with haptotactic migration [33] , [34] , [35] . Stem cell stiffness-dependent differentiation also appears similar to 2D cases, though tension dependence is due to integrin ligation rather than spreading [36] . What these data perhaps indicate is that while there are subtle differences and coupling of different migration modes, ultimately the 2D studies here provide substantial motivation to understand MSC homing to injury sites and their changes in phenotype along the way.
Perhaps one additional aspect that 2D models can easily provide is to improve our understanding of how cells sense stiffness and durotax, especially with such shallow gradients. To put MSCs' mechano-sensitivity in context, if the average spindle-shaped MSC is 40 μm long (see Fig. 4B ) and perfectly aligned with the stiffness gradient, it will at most feel a difference of 40 Pa along its major axis. To durotax, an MSC must be sensitive enough to detect that small stiffness difference, which is at most only 4% of overall stiffness at the softest part of the hydrogels. Much of our current understanding of mechano-sensing comes from static hydrogels where stress fiber alignment has a non-monotonic relationship with stiffness [37] that parallels differentiation [11] , [12] and in a tension-dependent mechanism [11] , [13] : inhibition of myosin ablates the cell's response. For durotaxis specifically, directed migration has been successfully modeled by applying elastic stability theory to stress fibers under tension [38] . Although these data demonstrate that the actomyosin cytoskeleton maintains polarized morphology and requisite tension necessary for durotaxis, focal adhesion complexes at the leading edge of cells likely establish critical intracellular signaling gradients for durotaxis. For example, receptor-like protein tyrosine phosphatases [39] and focal adhesion kinase [40] have been implicated in mechanosensing at the leading edge of cells, and in a localized region of the cell, these proteins could undergo strain-induced conformational changes to expose binding sites beneficial to establishing intracellular signaling gradients [27] . Gradients of Rho activation [41] and calcium signaling may also be likely [18] , but ultimately sensing may be a function of all of these mechanisms as well as others yet to be described.
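The stiffness difference sensed across a single cell follows directly from the gradient strength; the arithmetic behind the 40 Pa and 4% figures above is:

```python
# Stiffness difference across a perfectly gradient-aligned cell
gradient_pa_per_mm = 1.0e3        # 1.0 kPa/mm gradient strength
cell_length_mm = 0.040            # 40 um spindle-shaped MSC

delta_E_pa = gradient_pa_per_mm * cell_length_mm   # 40 Pa across the cell

# Relative difference at the softest part of the gradient (1 kPa)
softest_pa = 1.0e3
fraction = delta_E_pa / softest_pa                 # 0.04, i.e. 4%
```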
Moreover, it is important to note that while such tension-dependent mechano-sensing processes can occur in the absence of specific growth factors, MSC maintenance requires a non-trivial amount of serum [11] , [12] . Other reports note that tension activates MSC responses to specific growth factors in serum-containing cultures [13] . Whether growth factors are required for durotactic sensing or whether they simply maintain cell survival during durotaxis is uncertain, but it is clear that tension is required for durotaxis [18] .
Together though, migration and lineage specification data suggest that MSCs differentiate after undergoing durotaxis and that they also exhibit a degree of plasticity. While the in vivo presence of chemotactic and haptotactic gradients and the aphysical stiffness range investigated here may complicate the predictive ability of our data, these in vitro results at least complement previous infarction studies that show MSC calcification 4 weeks post-injection into fibrotic muscle tissue [8] where large stiffness gradients are present [15] . These observations emphasize the importance of ECM properties as fundamental regulators of stem cell fate and demonstrate that known variation in these properties can have a profound effect on undifferentiated stem cell behavior.
Mesenchymal stem cell (MSC) differentiation is regulated in part by tissue stiffness, yet MSCs can often encounter stiffness gradients within tissues caused by pathological, e.g., myocardial infarction ∼8.7±1.5 kPa/mm, or normal tissue variation, e.g., myocardium ∼0.6±0.9 kPa/mm; since migration predominantly occurs through physiological rather than pathological gradients, it is not clear whether MSCs differentiate or migrate first. MSCs cultured up to 21 days on a hydrogel containing a physiological gradient of 1.0±0.1 kPa/mm undergo directed migration, or durotaxis, up stiffness gradients rather than remain stationary. Temporal assessment of morphology and differentiation markers indicates that MSCs migrate to stiffer matrix and then differentiate into a more contractile myogenic phenotype. In those cells migrating from soft to stiff regions however, phenotype is not completely determined by the stiff hydrogel as some cells retain expression of a neural marker. These data may indicate that stiffness variation, not just stiffness alone, can be an important regulator of MSC behavior.
|
PMC3016412 | 21246051 | Introduction
In living tissues, many cell types including smooth muscle cells (SMCs), fibroblasts, bone cells, and tumor cells are exposed to interstitial fluid flow. Interstitial flow can modulate many cellular processes in a 3-dimensional (3D) microenvironment including proliferation, apoptosis, differentiation, and migration [1] – [5] . Interstitial flow therefore plays important roles in tissue physiology and pathology. For example, during the early stages of vascular injury, elevated interstitial flow has been hypothesized to contribute to neointima formation by affecting vascular wall cell phenotype and motility [1] , [2] , [6] – [8] .
To investigate effects of interstitial flow on biology of tissue interstitial cells including vascular wall cells, bone cells, and tumor cells, application of fluid flow shear stress to cells cultured in 2D has been widely used [6] , [9] – [11] . It is now well recognized that culturing cells in a 3D extracellular matrix (ECM) cell culture better mimics in vivo cell physiology than traditional 2D planar culture [12] . It has been reported that interstitial flow can induce cytokine release, cell migration, capillary morphogenesis, and stem cell differentiation in 3D environments [1] , [3] , [7] , [13] – [15] . However, the mechanism by which cells in 3D sense interstitial flow and convert this stimulation into cellular responses (mechanotransduction) has not yet been elucidated. Shear stress-induced mechanotransduction in endothelial cells (ECs) in 2D has been well studied [16] , [17] . Cells embedded in a 3D ECM have different patterns of cell-matrix adhesions [12] and elongated morphologies compared to 2D [18] , which might give rise to different mechanotransduction mechanisms. Therefore, it is necessary to determine the mechanosensors for cells in 3D when exposed to interstitial flow.
In 2D studies, it has been suggested that cell surface glycocalyx components are responsible for sensing fluid shear stress on vascular ECs [19] – [21] and SMCs [9] . The surfaces of eukaryotic cells are decorated with a layer of glycocalyx. The glycocalyx consists primarily of proteoglycan (PG) core proteins that are incorporated into the cell membrane and several covalently bound glycosaminoglycan (GAG) chains that extend into the extracellular space [9] . Heparan sulfate (HS), chondroitin sulfate, and hyaluronan are the dominant GAGs on most cell surfaces. Glycocalyx components, especially heparan sulfate proteoglycans (HSPGs), have been shown to play important roles in cellular recognition and signaling, cell growth, adhesion, spreading, and migration, regulating development, tumorigenesis, and vasculogenesis [22] – [25] . Although, in 2D, the role of cell surface glycocalyx component HSPGs in flow-induced mechanotransduction has been extensively studied [9] , [19] – [21] , and we have recently shown that HSPGs play a role in fluid flow modulation of SMC marker expression in both 2D and 3D [2] , the role of HSPGs in flow sensing in 3D has not been well elucidated.
Focal adhesion kinase (FAK) is a widely expressed cytoplasmic protein tyrosine kinase located in integrin-mediated focal adhesions that regulates integrin signaling. FAK is a major mechanosensitive kinase that can be rapidly activated by a variety of mechanical stimuli and plays an important role in control of cell adhesion and migration [26] , [27] . It has been suggested that HSPGs (such as syndecan-1 and -4) can act cooperatively with integrins in creating signals for cell spreading and for assembly of focal adhesion plaques and stress fibers [28] – [31] . HSPGs themselves can also tether to ECM binding domains with HS chains serving as secondary cell-matrix adhesions [22] . When cells are plated on fibronectin, syndecan-4 can associate with FAK through paxillin and thus has the potential to mediate signaling events parallel to integrins [32] . In 2D, it is well known that HSPGs on the apical surface of cells can act as mechanosensors mediating the transduction of fluid shear stress into biochemical responses [20] , [33] , [34] . On the basal side, similar to integrins, syndecan HSPGs can bind to the substrate and interact with the cytoskeleton to modulate FAK and ERK activation [35] , [36] , suggesting that HSPG-mediated attachments are capable of providing separate mechanosignaling pathways.
We have shown previously that interstitial flow can activate an ERK1/2-c-Jun signaling cascade leading to increased expression of rat MMP-13 (rat interstitial collagenase should be designated as MMP-13, not as MMP-1 in our previous articles [1] , [7] ), which in turn promotes rat vascular SMC, fibroblast and myofibroblast migration in 3D collagen [1] , [7] . Based on this background, we now demonstrate, for the first time, that, with contributions from integrins, cell surface HSPGs are mechanosensors for sensing interstitial flow that leads to activation of the FAK and ERK signaling cascade and upregulation of MMP expression and cell motility in 3D. | Materials and Methods
Collagen gel preparation, flow experiment, and migration assay
As previously described [1] , [7] , rat vascular SMCs were suspended in 4 mg/ml collagen I (BD Science) (2.5×10 5 cells/ml) and then loaded into culture inserts with 8 μm pores (BD Science). After 24 h of incubation, cells were subjected to interstitial flow driven by a 1 cmH 2 O pressure drop (flow velocity: 0.5 μm/s; shear stress: ∼0.05 dyn/cm 2 ) for various time periods according to the specific experimental designs. For FAK inhibition and HSPG cleavage experiments, after 24 h of spreading, cells in gels were treated with either 10 μM PF-228 (Santa Cruz Biotechnology) or 6.7 IU/L heparinase III (IBEX Technologies, Montreal, Canada) in growth medium for 2 h and then exposed to flow. Flow medium contained either 10 μM PF-228 or 1 IU/L heparinase. (Note: Doses of PF-228 from 1 to 10 μM have been used in many other articles. We tested both 2.5 and 10 μM in our preliminary studies in 3D collagen and observed that at both doses flow-induced MMP-13 expression could not be completely abolished. Later we found that flow-induced MMP-13 expression may be only partially dependent on FAK Tyr397. Also, higher doses of PF-228 could affect cell growth (see Ref. [38] ); therefore we decided to use 10 μM PF-228 for this 3D study.)
To check the effect of flow on cell motility, as described previously [1] , immediately after the flow period, 1 ml of 20 ng/ml PDGF-BB (Sigma) was added to each plate well to initiate migration to the bottom of the insert membrane. After 48 h of chemotactic incubation, the cells that had migrated to the undersides of the insert membranes were stained with Diff-Quik (Dade Behring), and five fields (100X) (1 center, 4 edges) were randomly picked for counting. The migration was then normalized to its No-Flow control case.
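The normalization described above (mean migrated-cell count over the five 100X fields, divided by the mean of the No-Flow control) can be sketched with hypothetical field counts:

```python
def normalized_migration(flow_counts, noflow_counts):
    """Average migrated-cell counts over the five fields (1 center, 4 edges)
    and normalize the flow case to its No-Flow control."""
    flow_mean = sum(flow_counts) / len(flow_counts)
    ctrl_mean = sum(noflow_counts) / len(noflow_counts)
    return flow_mean / ctrl_mean

# Hypothetical counts from the five randomly picked fields
ratio = normalized_migration([60, 55, 58, 62, 65], [30, 28, 32, 31, 29])
```

A ratio above 1 indicates flow-enhanced transmigration relative to the static control.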
Immunofluorescence staining
To stain HS-GAGs, cells in the plate wells were fixed with 4% paraformaldehyde for 15 min and blocked with 4% BSA in PBS for 30 min, followed by incubation with the primary antibody HepSS-1 (US Biological) (1∶200 dilution in PBS with 4% BSA) for 2 hours and then with the secondary antibody Alexa Fluor 350 goat anti-mouse IgM (Invitrogen) (1∶100 dilution) for 2 hours at room temperature. Finally, cells were mounted with mounting medium containing propidium iodide (PI) (Vector Laboratories). To visualize cell morphology in 3D collagen gels, cells were stained with Calcein AM (Invitrogen) (1∶200 in growth media).
Protein extraction and Western blotting
Protein samples were collected and western blotting was performed as described previously [7] . Collagen gels were washed once with ice-cold PBS, then 2X lysis buffer was added immediately to the gels followed by sonication for 30 seconds on ice. The 2X lysis buffer was composed of 2X RIPA buffer (300 mM NaCl, 2% NP-40, 100 mM Tris, 0.2% Brij 35, 2 mM EDTA, pH 7.5) with a supplement of 2X protease inhibitor cocktail (Roche Diagnostics), 2X phosphatase inhibitor cocktail (Roche Diagnostics), 2 mM activated Na 3 VO 4 , and 2 mM PMSF. Lysates were centrifuged in a microfuge (12,000 g for 1 hour at 4°C), and then the supernatants were collected and the remaining gel pellets were discarded. The supernatants were concentrated using Centrifugal Filter Units (Millipore). Protein concentrations in supernatants were evaluated using a Protein Determination Kit (Cayman Chemical). The protein samples were then boiled for 5 minutes after mixing with 4X sample buffer (400 mM Tris-HCl, 8% SDS, 40% glycerol, 0.04% bromphenol blue, and 20% β-mercaptoethanol, pH 6.8) and stored at −80°C. Protein samples were loaded onto 10% Tris-HCl Ready Gels (Bio-Rad). After electrophoresis, proteins were transferred to PVDF membranes (Bio-Rad) and blocked at room temperature with 2% Enhanced Chemiluminescence (ECL) Advance Blocking Agent (Amersham, GE Healthcare) in TBS-T. The membranes were incubated overnight with a 1∶1000 dilution of a specific rabbit primary antibody (monoclonal antibodies: ERK1/2, phospho-ERK1/2; polyclonal antibodies: FAK, phospho-FAK (Tyr397), phospho-FAK (Tyr925), and β-actin; all antibodies were purchased from Cell Signaling), followed by a 1.5-h room temperature incubation with an ECL horseradish peroxidase (HRP)-linked anti-rabbit IgG antibody (1∶1000) (Amersham, GE Healthcare). The proteins on the PVDF membranes were then detected using Immobilon Western Chemiluminescent HRP Substrate (Millipore) and the ChemiDoc XRS system with the Quantity One software (Bio-Rad).
Some membranes were stripped using Restore™ Plus Western Blot Stripping Buffer (Thermo Scientific Pierce) for subsequent detections.
RNA interference
Two FAK shRNAs and one control shRNA (gift from Dr. Tadashi Yamamoto) were used for FAK silencing [68] . The following sequences were used:
FAK#1, 5′-GGTCCAGACCAATCACTAT-3′ ;
FAK#2, 5′-GCAGTTTGCCAACCTTAAT-3′ ;
and a control sequence, 5′-TTCTCCGAACGTGTCACGT-3′ ;
and the vector was pSIREN-RetroQ [68] . FAK#1 and #2 shRNAs were mixed together in equal amount and cotransfected into the cells. To disrupt heparan sulfate biosynthesis, a rat NDST1 shRNA was used (Origene, MD). The target sequence for rat NDST1 was:
5′-CTTACTGTGCTCCTCAATCCTATCAGCGT-3′ ,
which was subcloned into the pGFP-V-RS vector. For transfection, 15 μg of each plasmid (shFAK, p-SIREN-C, shNDST1, and pGFP-V-RS) per T-75 flask was used. The transfections were conducted using Lipofectamine™ LTX and PLUS™ reagents (Invitrogen) as previously described [7] . The cells were used for various experiments 2 days after transfection.
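Since FAK#1 and FAK#2 shRNAs were mixed in equal amounts, and assuming the 15 μg shFAK dose refers to the combined mass of the two mixed plasmids, each shRNA plasmid contributes 7.5 μg per flask. The snippet below is an illustrative check only.

```python
def mass_per_plasmid(total_ug: float, n_plasmids: int) -> float:
    """Split a total DNA dose equally among cotransfected plasmids."""
    return total_ug / n_plasmids

# 15 ug total shFAK per T-75 flask, split between FAK#1 and FAK#2:
print(mass_per_plasmid(15.0, 2))  # 7.5 ug each
```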
RNA extraction and gene expression analysis
Cells in collagen were directly lysed with Trizol reagent, and insoluble materials were removed by centrifugation at 12,000 × g for 10 minutes at 4°C. Chloroform was added for phase separation, followed by RNA purification using the RNeasy Mini Kit (Qiagen). After reverse transcription (RT), PCR (RT-PCR) was performed using the following protocol: pre-denaturation at 95°C for 5 minutes; then either 30 cycles (for MMP-13) or 28 cycles (for GAPDH) of denaturation at 94°C for 35 seconds, annealing at 55°C for 35 seconds, and extension at 72°C for 35 seconds; followed by a final extension at 72°C for 10 minutes. The amplified products were separated by electrophoresis in 2.5% agarose gels and photographed under UV light in the presence of ethidium bromide (EB). Quantitative real-time PCR (RT-qPCR) was also performed for MMP-13 expression on the ABI PRISM® 7000 sequence detection system (Applied Biosystems) using the following protocol: 15 minutes at 95°C, followed by 45 cycles of 30 seconds at 95°C, 30 seconds at 55°C, and 30 seconds at 72°C. Rat interstitial collagenase MMP-13 (GenBank Locus: NM_133530) primer sequences were:
forward, 5′-TCTGACCTGGGATTTCCAAAAG-3′ (1124–1145);
reverse, 5′-GTCTTCCCCGTGTCCTCAAA-3′ (1194–1175).
Rat NDST1 (GenBank Locus: NM_024361.1) primer sequences were:
forward, 5′-GATGACCCGGTGGCCCTAAA-3′ (2607–2626);
reverse, 5′-TCTGTTCGCAGCAGTTTGCC-3′ (2797–2778).
Primer sequences of GAPDH were listed previously [1] , [7] .
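As a sanity check on the primer coordinates listed above, the expected amplicon sizes can be computed from the forward primer's 5′ position and the reverse primer's 5′ position (its highest template coordinate). This calculation is illustrative and not part of the original methods.

```python
def amplicon_length(fwd_start: int, rev_end: int) -> int:
    """Amplicon size in bp, inclusive of both primer 5' ends."""
    return rev_end - fwd_start + 1

print(amplicon_length(1124, 1194))  # MMP-13: 71 bp
print(amplicon_length(2607, 2797))  # NDST1: 191 bp
```

Both products are short, consistent with the gel- and qPCR-based detection described above.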
Data Analysis
Results are presented as mean ± SEM. Data sets were analyzed for statistical significance using a Student's t-test with a two-tailed distribution, and P<0.05 was considered statistically significant.

Results
Interstitial flow-induced MMP-13 expression and SMC motility in 3D collagen depend on HSPGs
HS-GAGs are abundantly present on the surface of rat vascular SMCs and can be substantially cleaved by a selective enzyme, heparinase III ( Figure 1A left and Figure S1 ). This is consistent with our previous observations [2] . HS chain production can be effectively suppressed by silencing N -deacetylase/ N -sulfotransferase 1 (NDST1), an enzyme that modulates HS biosynthesis, with short hairpin RNA (shNDST1) ( Figure 1A right and Figure S1 ). To investigate whether HSPGs were responsible for sensing 3D interstitial flow, heparinase and shNDST1 were used to disrupt cell surface HSPGs. Cleavage of HS-GAGs by heparinase completely abolished flow-induced MMP-13 expression ( Figure 1B ), resulting in a significant reduction in flow-induced cell motility ( Figure 1C ). Heparinase also reduced MMP-13 expression and cell motility in the no-flow control case. Knockdown of NDST1 by shNDST1 likewise abolished the augmentation of MMP-13 expression and cell motility induced by interstitial flow ( Figures 1B and 1C ); shNDST1 and heparinase thus had similar effects on MMP-13 expression and cell motility. We have previously shown that inhibition of MMP-13 or ERK1/2 does not attenuate baseline migration (no-flow control cases) in SMCs [1] , [7] . Therefore, in this study, the inhibition of cell motility after cleavage of HSPGs by heparinase was not due to reduced MMP-13; it was probably due instead to reduced cell-matrix adhesion assembly and disassembly after removal of HSPGs, since HSPGs can enhance the formation of cell-matrix adhesions and stress fibers [28] – [31] , [35] .
Interstitial flow-induced MMP-13 expression and cell motility in 3D depend on FAK
To investigate whether FAK was involved in flow-induced MMP-13 expression and cell motility in 3D, a FAK inhibitor (PF-228) and FAK shRNA (shFAK) were used to inhibit FAK. The efficacy of FAK knockdown was evaluated by Western blotting ( Figure 2C ). PF-228 significantly attenuated, but did not completely abolish, flow-induced MMP-13 expression ( Figure 2A ) and completely abolished flow-enhanced cell motility ( Figure 2B ). With knockdown of FAK, flow-induced MMP-13 expression was completely inhibited ( Figure 2A ) and cell motility was abolished to a level even lower than that of the control case ( Figure 2B ). In the no-flow control cases, PF-228 slightly reduced MMP-13 expression but not cell motility, while shFAK significantly suppressed cell motility but not MMP-13 expression ( Figures 2A and 2B ). FAK shRNA reduces the total amount of FAK, possibly leaving less FAK available for focal adhesion turnover and therefore reducing cell motility [37] . PF-228 selectively inhibits FAK phosphorylation at Tyr397 but does not affect the total amount of FAK, and it inhibits cell migration concomitant with the inhibition of focal adhesion turnover [38] . Therefore, our data indicate that phosphorylation of FAK at Tyr397 is critical for flow-induced cell motility through adhesion turnover. Our data also suggest that FAK phosphorylation at other tyrosine residues may play an important role in MMP-13 expression, because FAK shRNA completely abolished flow-induced MMP-13 expression but PF-228 did not. These data again show that downregulation of MMP-13 expression by PF-228 in the no-flow case did not attenuate baseline migration, which is consistent with our previous observations [1] , [7] .
FAK and HSPGs mediate interstitial flow-induced ERK activation
We have previously demonstrated that interstitial flow-induced MMP-13 expression depends on ERK activation [7] , and above we showed that flow-induced MMP-13 upregulation also depends on both FAK and HSPGs. Therefore, we further investigated whether FAK and HSPGs regulate flow-induced ERK activation. Flow significantly stimulated ERK phosphorylation. PF-228 dramatically reduced ERK activation in the no-flow control and partially, but significantly, attenuated flow-induced ERK activation ( Figure 3A ), consistent with its effect on MMP-13 expression ( Figure 2A ). Knockdown of FAK substantially inhibited ERK activation in both no-flow and flow cases ( Figure 3B ). These results suggest that FAK tyrosine sites other than Tyr397 play a more dominant role in flow-induced ERK activation and downstream MMP-13 expression ( Figure 2A ). Cleavage of HSPGs by heparinase significantly inhibited ERK activation in both no-flow control and flow cases ( Figure 3A ), consistent with our previous findings [2] . Disruption of HSPGs by shNDST1 also significantly reduced ERK activation ( Figure 3B ). These results suggest that both FAK and HSPGs play crucial roles in the ERK activation that regulates MMP-13 expression. The data also indicate that inhibition of FAK and removal of HSPGs might not have been complete, because flow could still induce ERK activation; however, the levels of ERK activation were not significantly higher than the no-flow control (time 0 without any other treatment) except in the case of PF-228 treatment. These data are consistent with the data in Figures 1 and 2 showing complete attenuation of the MMP-13 expression response to flow after treatment with heparinase ( Figure 1B ), shNDST1 ( Figure 1B ), and shFAK ( Figure 2A ).
HSPGs mediate interstitial flow-induced FAK and ERK activation
Knockdown of FAK or removal of HSPGs each abolished flow-induced ERK activation and MMP expression, suggesting that the mechanosensitive signaling pathways mediated by FAK and by HSPGs that regulate ERK activation are arranged in series rather than in parallel. Therefore, we hypothesized that HSPGs are flow sensors and signal transducers that sense and transmit flow stimuli to activate FAK and the downstream signaling cascade. To test this hypothesis, we eliminated cell surface HSPGs using heparinase and then investigated whether flow-induced activation of FAK and ERK was affected ( Figure 4 ). Removal of HSPGs reduced FAK phosphorylation at Tyr397 and ERK activation in the no-flow case. Flow dramatically elevated phosphorylation of FAK Tyr925 and ERK, and these activations were markedly attenuated by cleavage of HSPGs. Flow appeared to play a lesser role in phosphorylation of FAK Tyr397. The results show that phosphorylation of FAK at Tyr397 correlates with baseline (no-flow) ERK activation ( Figure 3 ) and baseline MMP expression ( Figure 2 ), and seems to play a very minor role in flow-induced ERK activation and MMP expression, whereas activation of FAK at Tyr925 correlates with flow-induced ERK activation ( Figures 3 and 4 ), MMP expression, and cell motility ( Figure 2 ). Disruption of HSPGs attenuated flow-induced activation of FAK and ERK ( Figure 4 ), and knockdown of FAK blocked flow-induced activation of ERK ( Figure 3 ), suggesting that HSPGs are mechanosensors mediating flow-induced activation of FAK, the downstream ERK signaling cascade, and MMP expression.
Integrin β1 but not HSPG is essential for cell spreading and integrin-mediated cytoskeletal organization is crucial for flow sensing
After we disrupted HSPGs by knocking down NDST1, we did not see significant changes in cell attachment and spreading either on a 2D surface ( Figure 1A ) or in 3D collagen after 24 h ( Figure 5A ), suggesting that integrin-based cell-matrix adhesions were still formed. Studies have shown that knockdown of syndecan-1 HSPGs transiently inhibits cell attachment to collagen, and cells can still attach to collagen after 4 h [31] , [39] . However, we also observed that cells could not spread out in collagen when β1 integrins were blocked ( Figure 5B ), suggesting that cell spreading through attachments directly mediated by HSPG chains is negligible and that integrins are indispensable for cell adhesion, spreading, and maintenance of cytoskeletal rigidity. Blockade of β1 integrins increased baseline MMP-13 expression in the no-flow case, which is consistent with a previous report that blockade of α2β1 integrins induces MMP-1 expression in human fibroblasts [40] ; however, flow could not induce MMP-13 expression when β1 integrins were blocked ( Figure 5C ). Therefore, we conclude that integrins provide a rigid cell cytoskeleton for mechanotransduction, while HSPGs sense interstitial flow to activate FAK and the downstream ERK cascade, eventually leading to increased MMP expression and cell motility.

Discussion
Fluid flow in the tissue interstitium is very slow due to the resistance of ECM fibrils and cells [41] . It has been shown, however, that such low flow can significantly affect cell physiology and function [1] – [3] , [7] , [13] – [15] . How cells sense this subtle flow in 3D, however, has remained largely unknown. Thus, the aim of this study was to identify the flow sensors on cells in 3D. We showed, for the first time, that with contributions from integrins, HSPG-mediated activation of FAK and the downstream ERK signaling cascade plays the major mechanotransduction role in flow-induced rat MMP-13 expression and vascular SMC motility in 3D.
It has been shown that interstitial flow can promote tumor cell migration via autologous chemotaxis mechanisms [42] . In the present study, separation of the flow period from the migration period ensured that the effects of flow on cell motility could not be attributed to the convection of chemoattractants or other molecules produced by the suspended cells [1] . In addition, the flow velocity in this study was ∼0.5 μm/s; assuming a diffusion coefficient for cell-secreted chemokines of ∼100 μm²/s and a cell radius of ∼5 μm, then according to Fleury et al. [42] , the Peclet number is ∼0.025, which is rather small, suggesting that convective transport effects would be very small. Furthermore, in a preliminary study, we found that after exposure to 6 h of interstitial flow, followed by incubation with DMEM without PDGF-BB for 48 h, barely any migrated cells were observed on the undersides of insert membranes, similar to the no-flow controls without PDGF-BB as chemoattractant (data not shown). This suggests that, owing to our experimental design, autologous chemotaxis mechanisms did not play a significant role in this study. Therefore, we were able to distinguish the mechanical role of HSPGs in flow sensing from their possible involvement in autologous chemotaxis mechanisms.
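The Peclet number estimate quoted above follows directly from Pe = vr/D with the stated values; the short check below simply reproduces that arithmetic.

```python
v = 0.5    # interstitial flow velocity, um/s
D = 100.0  # assumed chemokine diffusion coefficient, um^2/s
r = 5.0    # assumed cell radius, um

Pe = v * r / D  # dimensionless Peclet number (convection vs diffusion)
print(Pe)  # 0.025 -> diffusion dominates, convective transport is negligible
```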
HSPGs are present over the entire cell surface, binding extracellular ligands and forming signaling complexes with receptors. The binding of cell surface HSPGs to ECM components can immobilize the PGs, enabling HSPG core proteins to interact with the actin cytoskeleton [22] , [25] . Therefore, HSPGs can act as both coreceptors and mechanosensors in most ECM and cytoskeleton interactions. In the absence of integrins, binding of HSPGs to ECM via antibodies can support cell attachment and spreading through reorganization of the actin cytoskeleton and can mediate solid strain-induced mechanotransduction [22] , [29] , [36] . It has been suggested that HSPGs play important roles in EC and tumor cell migration or invasion [43] , [44] . In the present 3D study, we showed that disruption of HSPGs by heparinase reduced MMP-13 expression and SMC motility in the no-flow control case and completely abolished flow-induced MMP-13 expression and cell motility ( Figure 1 ). In contrast, a previous 2D study showed that HSPG disruption by heparinase enhanced EC migration by decreasing stress fibers and the size of focal adhesions, and increased migration under flow conditions [43] , suggesting that HSPGs may play distinct roles in cell migration in 2D and 3D.
Appropriate cell-matrix adhesions are critical for cells and tissues to maintain function. Focal adhesions are macromolecular contact complexes between cells and the ECM, and FAK is a signaling molecule in the focal adhesion complex involved in integrin downstream signaling. Stimulation of integrins and many other cell surface receptors can cause FAK autophosphorylation at Tyr397, generating a binding site for Src family protein tyrosine kinases. Recruitment of Src family kinases induces FAK phosphorylation at Tyr925, which triggers Ras/MAPK cascade activation [45] . FAK plays a central role in mediating cell migration [38] , [46] , [47] , and shear stress can induce FAK-dependent ERK activation in 2D [48] . In this study, we showed that inhibition of FAK suppressed interstitial flow-induced MMP expression and cell motility ( Figure 2 ) through inhibition of ERK ( Figure 3 ), suggesting that FAK is the cytoplasmic mediator of flow-induced ERK activation in 3D.
Shear stress can induce FAK phosphorylation at Tyr397 in 2D [48] , [49] , probably mediated by HSPGs on the cell's apical surface transmitting the shear force through the cytoskeleton to focal adhesions on the basal side [50] , [51] , where they are assembled for directional migration [43] . Cyclic strain can induce FAK phosphorylation at both Tyr397 and Tyr925 [52] , [53] , however, strain-induced ERK activation is mediated by FAK phosphorylation at Tyr925, not Tyr397 [53] . Interstitial flow significantly enhanced FAK phosphorylation at Tyr925 in 3D, which correlates with flow-induced MMP expression and ERK activation ( Figures 2 and 4 ). Flow-induced ERK activation was mediated by FAK ( Figure 3 ) and disruption of HSPGs abolished flow-induced FAK and ERK activation ( Figures 3 and 4 ), but disruption of HSPGs did not affect cell spreading via integrins ( Figure 5A ), suggesting that flow sensing is mainly through HSPGs, not integrins. It has been shown that syndecan-1 can colocalize with α2β1 integrin and support integrin-mediated adhesion to collagen [31] , [39] and syndecan-4 can cooperate with α5β1 integrin and mediate focal adhesion formation on fibronectin [54] . Syndecans then regulate organization of cortical actin and induce stress fiber formation at adhesion complexes [29] , [31] , [54] , [55] and modulate FAK activation [35] . Since flow-induced mechanotransduction is mediated by FAK, the HSPGs that function as flow sensors in 3D might be directly located at the sites of cell-matrix adhesion or linked to matrix adhesion complexes via the cortical actin cytoskeleton, where these HSPGs are able to pass signals to FAK [32] . When cells are embedded in 3D, cell-matrix adhesions form all around the cell surface, however, the level of phosphorylated FAK is lower than on 2D [12] , [18] , suggesting that HSPG-mediated signaling may compensate for the reduced function from integrins.
To adhere to the ECM, integrins (α and β subunits range in size from 80 to 130 kDa) form αβ heterodimers and the extracellular domains directly attach to the binding sites in the ECM and cytoplasmic domains interact with the cytoskeleton [30] . The length of integrin-mediated adhesion is around 15 nm [56] , [57] . When exposed to solid strains, integrin-based adhesions can be easily deformed due to the relative motion between the ECM and cell membrane, resulting in activation of integrin signaling. Therefore, integrin-mediated focal adhesions have been widely described to be mechanosensors for solid strain [58] . Unlike integrins, HSPGs (especially syndecans) contain a relatively short transmembrane core protein with several long and flexible HS-GAG chains extended into the extracellular space [30] . Monomeric syndecan core proteins range in size from 20 to 45 kDa [59] . In mediating cell adhesion, syndecans form stable homodimers and bind to the heparin/heparan sulfate binding sites in the ECM with the HS chains and the cytoplasmic domains on core proteins interact with the cytoskeleton [30] . Therefore, HSPG-mediated cell-matrix adhesion can be an alternative signaling pathway to the integrin signaling cascade [36] . Noting that collagen fibers are rather rigid and the pore size of collagen gels in vitro is around 0.5–1.0 μm [3] , [60] and the space between two adjacent collagen fibers in the media of human aortas is greater than 50 nm (estimated from [61] , [62] ), there is plenty of space for HS chains (diameter <1 nm, [20] , [33] ) and even HSPG core proteins (diameter <10 nm, estimated from [63] ) to move.
Based on our experimental results and the characteristics of integrins and HSPGs, we propose a model, shown in Figure 6 , to summarize our mechanotransduction hypothesis. The HS-GAG chains are rather long and flexible and may be easily deformed by shear flow; thus HSPG structures are more sensitive to interstitial flow than integrins. When exposed to interstitial flow, the flexible HS chains can be deformed, causing HSPG core protein deformation that is transmitted to the cytoskeleton, leading to activation of the FAK signaling cascade. On the other hand, since integrin-mediated bonding is rather rigid, the flow-induced displacement (strain) of integrins may be much less than that of HSPGs, implying less mechanotransduction through integrins than through HSPGs. However, cell spreading through HSPGs alone is negligible, and integrin-mediated adhesions are indispensable for maintaining cell cytoskeletal rigidity, which is important for mechanosignal sensing and transduction [64] . We therefore speculate that HSPGs play a major role in sensing interstitial flow and mediating mechanotransduction through FAK activation in 3D, by either colocalizing within integrin-mediated cell-matrix adhesion complexes ( Figure 6A right) [65] or interacting with adhesion complexes through the cortical actin cytoskeleton ( Figure 6A left) [55] , [66] , or both ( Figure 6A ). Another possibility is that flow-induced deformation of HSPGs may also cause cortical actin displacement and cell plasma membrane deformation, which can generate a mechanical strain on integrin-ECM bonds similar to a model suggested previously in osteocytes [67] , thus leading to activation of FAK and ERK ( Figure 6B ). It is also possible that HSPGs transduce the flow signal into FAK activation through other, as yet unknown, connections (chemical or mechanical). Exactly how the flow force is transmitted through HSPGs and then activates FAK remains to be investigated.
Previously we have shown that interstitial flow-induced rat MMP-13 expression and vascular SMC, fibroblast, and myofibroblast motility depend on activation of the ERK-c-Jun signaling pathway [1] , [7] , and that surface glycocalyx HSPG-mediated ERK activation regulates fluid flow modulation of SMC and myofibroblast phenotypes [2] . In the present study, we further showed that in 3D, cell surface HSPG-mediated FAK phosphorylation is responsible for ERK activation, although the specific HSPG core proteins responsible for force transmission to FAK remain to be determined. Synthesizing all of these observations, we propose, for the first time, that HSPG-mediated FAK activation is a mechanism for interstitial flow-induced mechanotransduction ( Figure 6 ). Since interstitial flow can also induce tumor cell migration via an autologous chemotaxis mechanism [42] , it will be interesting to determine whether HSPGs play any role in that mechanism. We conclude that interstitial flow can, by an HSPG-mediated mechanism, in concert with integrin-mediated cell-matrix adhesions and cytoskeleton organization, induce activation of the FAK-ERK-c-Jun signaling axis, regulating MMP expression, vascular cell motility, and vascular cell phenotype in 3D. While this study was based on vascular cells and motivated by the response of an artery to injury, the mechanotransduction mechanism that we have proposed should be relevant to 3D flow effects on tissue morphogenesis, cancer cell metastasis, and stem cell differentiation, and also has implications in tissue engineering.

Author Contributions

Conceived and designed the experiments: ZDS JMT. Performed the experiments: ZDS HW. Analyzed the data: ZDS JMT. Wrote the paper: ZDS JMT.
Background
Interstitial flow directly affects cells that reside in tissues and regulates tissue physiology and pathology by modulating important cellular processes including proliferation, differentiation, and migration. However, the structures that cells utilize to sense interstitial flow in a 3-dimensional (3D) environment have not yet been elucidated. Previously, we have shown that interstitial flow upregulates matrix metalloproteinase (MMP) expression in rat vascular smooth muscle cells (SMCs) and fibroblasts/myofibroblasts via activation of an ERK1/2-c-Jun pathway, which in turn promotes cell migration in collagen. Herein, we focused on uncovering the flow-induced mechanotransduction mechanism in 3D.
Methodology/Principal Findings
Cleavage of rat vascular SMC surface glycocalyx heparan sulfate (HS) chains from proteoglycan (PG) core proteins by heparinase or disruption of HS biosynthesis by silencing N -deacetylase/ N -sulfotransferase 1 (NDST1) suppressed interstitial flow-induced ERK1/2 activation, interstitial collagenase (MMP-13) expression, and SMC motility in 3D collagen. Inhibition or knockdown of focal adhesion kinase (FAK) also attenuated or blocked flow-induced ERK1/2 activation, MMP-13 expression, and cell motility. Interstitial flow induced FAK phosphorylation at Tyr925, and this activation was blocked when heparan sulfate proteoglycans (HSPGs) were disrupted. These data suggest that HSPGs mediate interstitial flow-induced mechanotransduction through FAK-ERK. In addition, we show that integrins are crucial for mechanotransduction through HSPGs as they mediate cell spreading and maintain cytoskeletal rigidity.
Conclusions/Significance
We propose a conceptual mechanotransduction model wherein cell surface glycocalyx HSPGs, in the presence of integrin-mediated cell-matrix adhesions and cytoskeleton organization, sense interstitial flow and activate the FAK-ERK signaling axis, leading to upregulation of MMP expression and cell motility in 3D. This is the first study to describe a flow-induced mechanotransduction mechanism via HSPG-mediated FAK activation in 3D. This study will be of interest in understanding flow-related mechanobiology in vascular lesion formation, tissue morphogenesis, cancer cell metastasis, and stem cell differentiation in 3D, and also has implications in tissue engineering.

Acknowledgments

The authors thank Henry Qazi and Dr. Limary Cancel for valuable discussions, and Rocio Palomino for cell culture.

Citation: PLoS One. 2011 Jan 5; 6(1):e15956. License: CC BY.
PMC3016413 (PMID 21246052)

Introduction
There is a critical need for safe and effective microbicides that women worldwide can use repeatedly and independently of the time of coitus to prevent sexual transmission of human immunodeficiency virus (HIV) and other sexually transmitted infections (STIs). While promising, candidate microbicide gels containing agents that act on specific viral targets and/or antagonists of cell-virus interactions typically require mM or mg/ml amounts of drug to significantly protect against mucosal infection with immunodeficiency virus in macaques [1] , [2] , [3] , [4] , [5] , [6] , [7] . Notably, the 1% (1 mg/ml) tenofovir gel that protected macaques against repeated vaginal infection when given 30 min prior to each challenge [5] was shown to reduce HIV acquisition in women by 39% when applied at least 12 h before and no more than 12 h after intercourse in the CAPRISA 004 trial [8] . This provides the first proof of concept that topical microbicides can limit HIV spread in humans, and that this was predicted from the macaque studies.
In microbicides it is preferable to avoid first-line anti-HIV agents that are used to treat HIV-infected people, or agents with the potential to induce class- or cross-resistance to them [9] , [10] . There was no evidence of tenofovir resistance in the individuals who became infected in the CAPRISA 004 trial [8] . However, tenofovir is used to treat HIV infection [11] , increasing the chances of transmission of tenofovir-resistant viruses in the future. An additional attribute that would increase the success of a microbicide formulation is long-lasting protection, which would allow gels to be used independently of intercourse and thus be useful in real-world settings. Therefore, identifying a formulation that (i) contains agents active against viruses already resistant to drugs in clinical use and (ii) provides protection when applied independently of intercourse is vital.
We explored the use of MIV-150, a novel non-nucleoside reverse transcriptase inhibitor (NNRTI) that is not used in current HIV therapies, combined with zinc acetate and formulated in carrageenan. NNRTIs are non-competitive inhibitors of RT. Carrageenan alone (Carraguard®; 95% lambda and 5% kappa carrageenan) was not effective at preventing HIV infection in women [12] , but the intrinsic rheological properties [13] , stability, acceptability, and safety of a carrageenan-based gel [12] , [14] , [15] , [16] , [17] , [18] , [19] make it a useful vehicle to deliver anti-HIV drugs. MIV-150 is a novel, potent NNRTI that binds tightly to the HIV reverse transcriptase (RT) and has strong antiviral (IC₅₀ of <1 nM) and potentially virucidal (IC₅₀ of 400 nM) activity against R5 and X4 viruses [13] , [20] , [21] . Moreover, MIV-150 possesses a favorable resistance profile: it is effective against HIV-1 harboring common single mutations in the RT gene, requires two to three mutations (L100I, K103N, Y181C) to increase the IC₅₀ >10-fold, and takes about twice as long to select HIV-1 resistance in vitro compared to other NNRTIs like Nevirapine and Efavirenz (Fernández-Romero, unpublished). Notably, the L100I/K103N double mutant resistant to Efavirenz had reduced susceptibility to MIV-150, but the IC₅₀ of MIV-150 against this double mutant (0.9 μM vs 0.7 nM against wild type) remained at least 10-fold lower than that of Efavirenz (>10 μM vs 0.6 nM against wild type). In contrast, the Y181C mutant resistant to Nevirapine remained fully sensitive to MIV-150 (0.2 nM vs 0.7 nM against wild type). Importantly, MIV-150 also possesses a potent memory effect, since cells exposed to MIV-150 in vitro remain resistant to infection for up to 5 d (Fernández-Romero, unpublished).
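The resistance comparison above can be checked directly from the reported IC50 values; the calculation below is illustrative only (the Efavirenz figure is a lower bound, since its IC50 against the double mutant is reported as >10 μM).

```python
def fold_resistance(ic50_mutant_nm: float, ic50_wt_nm: float) -> float:
    """Fold loss of susceptibility: mutant IC50 over wild-type IC50."""
    return ic50_mutant_nm / ic50_wt_nm

# MIV-150 vs the L100I/K103N double mutant: 0.9 uM (900 nM) vs 0.7 nM wild type.
print(round(fold_resistance(900.0, 0.7)))    # 1286-fold shift

# Efavirenz vs the same mutant: >10 uM (10000 nM) vs 0.6 nM wild type.
print(round(fold_resistance(10000.0, 0.6)))  # at least a 16667-fold shift
```

Even after its large shift, MIV-150's absolute IC50 against the double mutant (0.9 μM) remains more than 10-fold below the >10 μM Efavirenz value, matching the comparison in the text.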
Formulating MIV-150 in a carrageenan gel has yielded encouraging results to date. In vitro studies demonstrated the additive effects of MIV-150 and carrageenan [13] . MIV-150 also prevented dendritic cell (DC)-facilitated infection of CD4 + T cells at nM concentrations [20] . In addition, in vivo efficacy of 500 μM MIV-150 in carrageenan was confirmed against our chimeric SHIV-RT (SIVmac239 with HIV-1 RT) when applied topically 30 min prior to vaginal challenge of either naïve or HSV-2 infected macaques, despite the increased susceptibility to SHIV-RT infection of the latter [20] , [22] .
In vitro studies suggest that zinc salts have activity against HIV as well as other viruses, including HSV-2 [23] , [24] , [25] , infection with which has been shown to facilitate HIV transmission [26] , [27] , [28] . However, documentation of anti-viral activity in vitro is complicated by the toxicity of zinc in many cell-based assays, underscoring the need to conduct in vivo studies of zinc's anti-viral activity. Moreover, we found that zinc acetate delivered in carrageenan (but not zinc acetate in solution or in the hydroxyethyl cellulose placebo) protects mice against high dose vaginal challenge with HSV-2 (Fernández-Romero, unpublished). Thus, gels containing zinc acetate represent a promising approach to impede HIV transmission both directly by virtue of anti-HIV activity, and indirectly by reducing other STIs. In addition, since condom use might be reduced upon introduction of a microbicide [29] , a formulation that targets HIV as well as other STIs, like HSV-2, is especially desirable.
To assess the potential of a novel antiretroviral/zinc combination microbicide in a manner designed to simulate how women will use gels, we evaluated the antiviral efficacy of repeated vaginal application of MIV-150 and zinc acetate in a macaque vaginal challenge model. Herein we show that daily use of gels containing low-dose MIV-150 (50 μM) afforded >50% protection from infection for up to 8 h after 2 weeks of daily application, and that zinc acetate alone protected >65% of macaques from infection for up to 24 h. Notably, the combination of low-dose MIV-150 and zinc acetate provided complete protection from infection for at least 24 h. Both the zinc acetate alone and the MIV-150/zinc acetate combination gels still markedly reduced infection when applied every other day. These data will advance the development of coitally-independent combination microbicide gels to limit HIV spread.
Ethics Statement
Adult female Chinese rhesus macaques ( Macaca mulatta ) were housed and cared for in compliance with the regulations of the Animal Welfare Act and the Guide for the Care and Use of Laboratory Animals at the Tulane National Primate Research Center (TNPRC; Covington, LA). Animals were monitored continuously by veterinarians to ensure their welfare. Veterinarians at the TNPRC Division of Veterinary Medicine have established procedures to minimize pain and distress through several means. Monkeys were anesthetized with ketamine-HCl (10 mg/kg) or tiletamine/zolazepam (6 mg/kg) prior to all procedures. Preemptive and post-procedural analgesia (buprenorphine 0.01 mg/kg) was required for procedures that would likely cause more than momentary pain or distress in humans undergoing the same procedures. The anesthetics and analgesics listed above were used to minimize pain or distress associated with this study in accordance with the recommendations of the Weatherall Report. Any sick animals were euthanized using methods consistent with the recommendations of the American Veterinary Medical Association (AVMA) Panel on Euthanasia. All studies were approved by the Animal Care and Use Committee of the TNPRC (OLAW assurance #A4499-01) and were in compliance with animal care procedures. TNPRC is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC #000594).
Animal treatments and challenge
Animals tested negative for simian type D retroviruses, simian T cell leukemia virus-1, and SIV prior to use in the efficacy studies. The animals were sexually mature (ranging from 4–12 years old) and their weights ranged from 4–10 kg. There was no evidence of a correlation between an animal's age at the time of challenge and infection status. Uninfected and healthy SHIV-RT infected animals available from completed microbicide studies were used for the PK and biomarker studies. All blood (no more than 10 ml/kg/month), fluids, and superficial lymph node biopsies were transported from the TNPRC to our laboratories in New York by overnight courier service for processing and analysis.
Five weeks prior to virus challenge, animals received a single 30 mg i.m. injection of Depo-Provera to thin the vaginal epithelium as well as to control cycling [30] . At the designated times, single 3 ml applications or repeated 2 ml applications of microbicide candidate gel (versus the vehicle carrageenan and methyl cellulose [MC] controls) were applied atraumatically into the vaginal vault with a pliable pediatric feeding tube either before or after challenge with 0.5 ml (for repeated gel treated animals) or 1 ml (for animals receiving one gel dose) of 10^3 TCID50 SHIV-RT (SIVmac239 carrying HIV-1 HXB2 RT). A supine position was maintained for all animals to allow absorption of virus for 20 min post challenge. There was no evidence of leakage from the animals. Control gel-treated animals were included with every challenge. The infection frequencies for animals treated at different times with the MC placebo were comparable, and so those data have been pooled to provide the MC dataset. This was also the case for animals treated with carrageenan. Individual animal information is summarized in Tables S1 – S3 . For the PK/biomarker studies the animals were treated with Depo-Provera prior to gel treatment just as in the efficacy studies, but without the virus challenge.
Virus stock
The original SHIV-RT stocks were grown in PHA-activated human peripheral blood mononuclear cells (PBMCs) (kindly provided by Disa Böttiger, Medivir AB, Sweden). A subsequent stock was generated from this in PHA-activated macaque PBMCs. Stocks were re-titered using the 174xCEM cell line (NIH AIDS Research & Reference Reagent Program), and TCID50 values were calculated using the Reed and Muench formula. Both stocks infect with comparable frequency in vivo .
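The Reed and Muench endpoint calculation used for re-titering can be sketched in plain Python. This is a generic illustration of the method, not the authors' code, and the titration counts in the example are invented for demonstration.

```python
def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Estimate the log10 dilution giving 50% infection (TCID50 endpoint)
    by the Reed and Muench method.

    log10_dilutions: e.g. [-1, -2, -3, -4], least to most dilute
    infected/total:  wells (or animals) infected / inoculated per dilution
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # Cumulate infected toward higher dilutions, uninfected toward lower ones
    cum_inf = [sum(infected[k:]) for k in range(n)]
    cum_uninf = [sum(uninfected[:k + 1]) for k in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for k in range(n - 1):
        if pct[k] >= 50.0 > pct[k + 1]:
            # Proportional distance between the two dilutions bracketing 50%
            pd = (pct[k] - 50.0) / (pct[k] - pct[k + 1])
            step = log10_dilutions[k] - log10_dilutions[k + 1]
            return log10_dilutions[k] - pd * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Hypothetical titration, 6 wells per 10-fold dilution:
# the 50% endpoint falls at the 10^-2.5 dilution
print(reed_muench_log10_tcid50([-1, -2, -3, -4], [6, 4, 2, 0], [6, 6, 6, 6]))  # -> -2.5
```

The titer (TCID50/ml) then follows from the endpoint dilution and the inoculated volume.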
Microbicide formulations
The 2.5% (w/w) MC placebo gels (Lot numbers 32805, 110205, 032807, 011008, 080602A2005SR, 080804A2005MR, 090217A2005MR, 090610A2005MR) (Fisher) and the 3% (w/w) carrageenan vehicle PC-515 (Lot numbers 32805, 010908, 080805A515SR, 090127A515SR, 090612A515MR) were used as controls [31] . MC was originally used to parallel the human studies testing Carraguard where MC was the placebo [12] and to rule out any barrier effect of the carrageenan vehicle as was observed in our earlier macaque studies [20] . PC-817 (Lot numbers 32805, 011508, 032707-A, 080603A817MR, 080811A817MR, and 090210A817MR) and PC-815 (Lot numbers 080807A815SR, 090209A815MR, 090609A815MR) contained 3% (w/w) carrageenan, either 500 μM (PC-817) or 50 μM (PC-815) MIV-150, and 1% DMSO (Sigma, St. Louis, MO). PC-1005 (Lot numbers 080604A1005MR, 080810A1005SR, 090205A1005MR, 100126A1005MR) contained 3% (w/w) carrageenan, 50 μM MIV-150, 14 mM zinc acetate dihydrate, and 1% DMSO. PC-707 (Lot numbers 090202A707MR, 090204A707MR, 100127A707MR) contained 3% (w/w) carrageenan and 14 mM zinc acetate dihydrate. Gels were stored at room temperature and used for the studies within 7–28 days of formulation. Gel viscosity and anti-HIV activity were verified for each Lot prior to in vivo use. The pH of the gels was buffered to 6.8±0.2 in the final manufacturing step, so any change in pH due to the addition of zinc acetate was neutralized. MIV-150 was developed by Medivir AB (Sweden) and licensed to the Population Council for development as a microbicide.
Cell isolation and sample collection
PBMCs were isolated from EDTA blood using Ficoll-Hypaque density gradient centrifugation (Amersham Pharmacia Biotech, Uppsala, Sweden) as already described [32] . RPMI 1640 (Invitrogen/GIBCO, Carlsbad, CA, USA) culture medium containing 2 mM L-glutamine (Invitrogen/GIBCO), 10 mM HEPES (N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid) (Invitrogen/GIBCO), 50 μM 2-mercaptoethanol (Sigma), penicillin (100 U/ml) and streptomycin (100 μg/ml) (Invitrogen/GIBCO), and 1% heparinized human plasma (Innovative Research, Southfield, MI) was used for final re-suspension of PBMCs. Medium with 10% HI-FBS (heat-inactivated fetal bovine serum) (Invitrogen/GIBCO) instead of 1% human plasma was used for the PBMC-174xCEM co-cultures. DNA was isolated from lymph nodes as described with the DNeasy® Blood & Tissue Kit (Qiagen Sciences, MD, USA). DNA was eluted in 50 μl RNAse/DNAse-free water (Invitrogen/GIBCO). Vaginal swabs and plasma were collected as previously described [22] . Vaginal pH was determined by inserting a pH-indicator strip (EMO Chemical Inc., Gibbstown, NJ) into the vaginal vault for 5 minutes to saturate the pH paper with vaginal fluid.
Virus detection
Plasma viral RNA copy numbers were determined by quantitative RT-PCR as previously reported [33] . The 6 animals with blips were further tested by quantitative SIV gag PCR for viral DNA levels in lymph node biopsies taken 30–57 weeks post challenge [34] .
Immune parameters
SIV-specific IFN-γ T cell responses were monitored by ELISPOT using AT-2 SIV (versus the no-virus microvesicle controls) as the stimuli [35] , and SIV-specific antibodies were monitored by ELISA [36] . Antibody positivity was defined as having OD values above those of background baseline samples at 4–8 weeks post challenge, and IFN-γ positivity was defined by at least 50 SIV-specific IFN-γ spot forming cells (SFCs) per 10^6 PBMCs at more than one time point post challenge. Cell-free vaginal fluids were analyzed for chemokine and cytokine expression using the monkey-reactive Beadlyte human 14-plex Detection System according to the manufacturer's instructions (Invitrogen). This assay recognized macaque IL-1β, CXCL10, IL-6, CCL5, CCL3, GM-CSF, CCL4, CCL2, IFN-γ, TNF-α, IL-3, IL-2, IL-4, and CXCL8. The Luminex 200 (Luminex, Austin, TX) and StarStation software (Applied Cytometry Systems, Sacramento, CA) were used to analyze samples.
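The ELISPOT positivity rule stated above (at least 50 SIV-specific IFN-γ SFCs per 10^6 PBMCs at more than one post-challenge time point) is simple enough to encode directly; this sketch is purely illustrative and the counts in the usage lines are made up.

```python
def ifn_gamma_positive(sfc_per_million, threshold=50):
    """True if >= threshold SFCs/10^6 PBMCs are seen at more than one
    time point, per the positivity definition in the text."""
    return sum(1 for sfc in sfc_per_million if sfc >= threshold) > 1

print(ifn_gamma_positive([12, 85, 140]))  # two positive time points -> True
print(ifn_gamma_positive([12, 85, 20]))   # only one positive time point -> False
```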
MIV-150 Radioimmunoassay (RIA)
Plasma and vaginal swabs were treated for 30 min at room temperature with NP40 at a final concentration of 1% to inactivate any infectious agent. The samples were stored at −80°C until the RIA was performed. The RIA for MIV-150 was an indirect extraction based assay adapted from Kumar et al [37] . MIV-150 [38] was synthesized by catalyzed exchange with tritium gas and purified by preparative HPLC (American Radiolabeled Chemicals, Inc., St. Louis, MO) and a rabbit polyclonal Ab against MIV-150 was custom prepared by Pacific Immunology Corp. (Ramona, CA). The assay was optimized and validated for detection of MIV-150 in plasma, vaginal swabs, and cervical/vaginal tissues with a level of sensitivity of 2.7 nM (Rodriguez, unpublished). MIV-150 concentration in the samples was calculated by interpolation with the standard curve using a curve fitting procedure (logistic 4-parameter model).
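Concentration interpolation against a standard curve with a logistic 4-parameter model, as described for the RIA, can be written as a forward function and its analytical inverse. The parameter values in the example are arbitrary illustrations, not the assay's fitted constants.

```python
def four_pl(x, a, b, c, d):
    """4-parameter logistic curve: a = zero-dose asymptote,
    d = infinite-dose asymptote, c = inflection point, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def concentration_from_signal(y, a, b, c, d):
    """Invert the 4PL to interpolate a sample concentration from its signal."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Round trip with arbitrary curve constants: a known concentration is recovered
a, b, c, d = 1.8, 1.2, 25.0, 0.05   # illustrative values only
signal = four_pl(10.0, a, b, c, d)
print(concentration_from_signal(signal, a, b, c, d))  # -> 10.0 (within rounding)
```

In practice the four constants would first be fitted to the standard-curve points, then each sample signal interpolated as above.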
Cloning and sequencing SHIV-RT genes
Viral RNA was extracted from 1 ml aliquots of plasma from infected animals using the Qiagen Viral RNA Isolation Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions and eluted in 60 μl RNAse/DNAse-free water (Invitrogen). A PBS control was run in parallel to ensure lack of contaminating RNA. Viral RNA was transcribed into cDNA with the Superscript III Reverse Transcriptase Kit (Invitrogen). RT genes in viral cDNA were amplified by PCR, using Pfu Ultra II Hotstart Polymerase (Agilent Technologies, Santa Clara, CA) and primers RT amp 5′ (upstream of HIV RT, within SIV pol : 5′-TACTAAAGAATACAAAAATGTAGA-3′) and RT amp 3′ (downstream of RT: 5′-CTCTGTGGATTGTATGGTACCCC-3′). Due to the lower RNA viral load, an additional nested PCR reaction was performed for animal IE83 with primers 5′-TAAATTTTCCCATTAGCCC-3′ and 5′-TCTTCTGTTAGTGGTATTA-3′. PCR amplifications were carried out using the MyCycler Thermal Cycler (Bio-Rad Laboratories, Inc., Hercules, CA). After activation of the hotstart polymerase at 94°C for 5 min, DNA was amplified for 30 cycles (94°C for 1 min, 44°C for 1 min, 72°C for 2 min), with a final extension at 72°C for 5 min. Unincorporated nucleotides were removed from the PCR product with the QIAquick PCR purification kit (Qiagen). PCR products were ligated using the TOPO TA cloning kit (Invitrogen) and colony miniprep DNA was prepared for sequencing (GeneWiz, Inc., South Plainfield, NJ). DNAStar Lasergene 8 software was used to analyze sequences.
Statistical Analyses
Fisher's exact test was used for statistical comparison of the percentage of SHIV-RT infected animals in the differently treated groups (GraphPad Prism version 5.02 for Windows, GraphPad Software, San Diego, CA). P values <0.05 were taken as statistically significant.

Results
Repeated application of low dose MIV-150 and zinc acetate combination gel prevents vaginal infection by SHIV-RT
In previous studies we demonstrated the ability of a 500 μM (185 μg/ml) MIV-150-containing gel to limit vaginal infection when a single dose was applied 30 min prior to SHIV-RT challenge of healthy [20] and HSV-2-infected [22] macaques. In this report we confirmed that there was a trend towards protection for up to 4 h (14% infection compared to 56% infection in the placebo MC-treated animals; p<0.09), but any protection was lost after 24 h, and post-exposure treatment was ineffective ( Fig. 1 and Table S1 ).
We then determined if a 10-fold lower dose of MIV-150 (50 μM or 18.5 μg/ml) could be used under a repeated gel-dosing regimen to afford longer lasting protection. Gels were applied daily for 2 weeks, and animals were challenged up to 24 h after the last application. This protocol was intended to model real-world topical gel application by women who would use it on a repeated basis and independently of coitus. Furthermore, this study design would evaluate any detrimental effects of repeated gel exposure. When infected, animals exhibited characteristic viremia with mean peak levels of 2.3×10^6 RNA copies/ml, typically at 2–3 weeks post challenge, with mean set point viral loads of 1.7×10^4 RNA copies/ml being reached by week 8 (across all groups; n = 37). In contrast, protected animals had no detectable viral RNA for up to 20 weeks of follow-up (<30 copies/ml; n = 56). In six animals, stochastic plasma virus RNA was detected on one or two occasions (blips; confirmed upon repeated testing), but these animals were otherwise negative at all other time points examined and none of them developed SIV-specific antibody or T cell responses ( Tables S2 and S3 ). Furthermore, no SIV gag DNA was detected in the lymph nodes of the six animals with blips (data not shown). Such blips in plasma virus might reflect controlled or aborted infections, as have been reported [39] , [40] , [41] , [42] , [43] , [44] . Since blips occurred across the different treatment groups, comparisons have been made based on the frequency of animals with typical viremia (not protected) versus those with undetectable virus or virus blips (protected). An MC placebo group was included to control for the non-specific barrier effect of carrageenan that was previously observed when animals were challenged only 30 min after a single gel application [20] .
Relative to the MC controls, the barrier effect of the carrageenan vehicle was modest and not statistically significant in this repeated application study, where animals were challenged 8–24 h after the last gel application. Therefore, the protective efficacy has been calculated as the percentage of protection of test gels compared to carrageenan vehicle control ( Fig. 2A and Table 1 ).
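Expressed as code, protective efficacy relative to the carrageenan vehicle is a relative-risk reduction. A minimal sketch; the example counts (3 of 14 infected for a test gel, 9 of 14 for the vehicle control) are plausible assumptions consistent with the percentages reported in this paper, not figures taken from Table 1.

```python
def percent_protection(infected_test, n_test, infected_ctrl, n_ctrl):
    """Protection of a test gel relative to the vehicle control:
    100 * (1 - attack rate in test group / attack rate in control group)."""
    return 100.0 * (1.0 - (infected_test / n_test) / (infected_ctrl / n_ctrl))

# e.g. 11 of 14 animals protected (3 infected) versus an assumed control
# group with 9 of 14 infected gives roughly two-thirds protection
print(round(percent_protection(3, 14, 9, 14)))  # -> 67
```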
Surprisingly, higher doses of MIV-150 offered no protective advantage, with repeated application of the 50 μM MIV-150 gel protecting animals by 56% and the 500 μM gel by 33% relative to vehicle control at the 8 h time point ( Fig. 2A and Fig. S1 ). However, this protection did not reach statistical significance relative to the vehicle-treated animals, and any protective activity of 50 μM MIV-150 was gone after 24 h ( Fig. 2A and Table 1 ).
We next investigated whether adding 14 mM zinc acetate dihydrate, a concentration that efficiently prevents vaginal and rectal HSV-2 infection in mice (Fernández-Romero, unpublished), would improve protection. Zinc acetate alone provided marked protection against SHIV-RT infection (11 of 14 protected, 67%, p<0.06; Fig. 2A and Table 1 ). Strikingly, the combination of zinc acetate and 50 μM MIV-150 in carrageenan provided full protection for up to 24 h (p<0.0001; Fig. 2A , Table 1 , and Fig. S2 ). In total, all 21 animals that received the MIV-150/zinc acetate gel were protected from vaginal SHIV-RT infection.
In order to determine whether daily gel use was required for efficacy, we tested the activity of the two most protective gels, MIV-150/zinc acetate and zinc acetate alone, when applied every other day for 4 weeks, followed by virus challenge 24 h after the last gel application. In this way, the animals received the same number of total applications as those treated daily for 2 weeks, but over twice the length of time. Both test gels provided protection after every other day administration, but the complete protection seen with the MIV-150/zinc acetate combination after daily treatment was lost ( Fig. 2B , Table 1 , and Table S3 ). While we were unable to include the carrageenan vehicle controls for this every other day regimen, comparison to the daily-applied carrageenan control group, in which a greater non-specific barrier effect would be expected than for an every other day dosing regimen, revealed marked protection (67%, 11 of 14 protected; zinc acetate and MIV-150/zinc acetate data combined, p<0.06). Interestingly, when the data from all animals treated with zinc acetate alone were pooled, the protection by zinc acetate was significant (70%, 17 of 21 protected; daily and every other day zinc acetate; p<0.02).
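The group comparisons above rely on Fisher's exact test (see Methods). A two-sided version can be written in pure Python from the hypergeometric distribution; this is an illustrative reimplementation, not the GraphPad Prism routine used in the study, and the example table (0 of 21 infected versus an assumed 9 of 14 infected controls) is for demonstration only.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p value for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def p_table(k):
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = p_table(a)
    k_min, k_max = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(k) for k in range(k_min, k_max + 1)
               if p_table(k) <= p_obs * (1 + 1e-12))

# Illustrative comparison: 0/21 infected (combination gel) versus an assumed
# 9/14 infected in a vehicle-control group -- highly significant
print(fisher_exact_two_sided(0, 21, 9, 5) < 0.0001)  # -> True
```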
Using a separate group of animals, we also verified that there were no detectable local adverse effects that could explain the differences between the groups of animals repeatedly treated with the zinc- and/or MIV-150-containing gels. Vaginal pH and the levels of cytokines and chemokines in fluids were measured before, during, and 8 or 24 h after a 2-week administration of the test versus control gels. The presence of zinc acetate or MIV-150 in the formulations had no impact on vaginal pH ( Fig. S3A ). Low-level CXCL8 and CCL2 responses were detected after exposure to carrageenan, although there was no cumulative effect of repeated application ( Fig. S3B ). While the levels varied between animals (and not all animals responded), inclusion of MIV-150, zinc acetate, or MIV-150 and zinc acetate in the carrageenan formulations had no further impact on the CXCL8 and CCL2 levels ( Fig. S3B ). Of the factors measured, no other cytokines/chemokines were detected in the vaginal fluids of any of the animals tested over time (not shown).
Cervix-associated MIV-150 predicts efficacy better than plasma or vaginal tissue MIV-150 levels
In a separate set of animals we used RIA to determine the levels of MIV-150 within plasma, vaginal fluids, and vaginal and cervical tissues as a measure of potential systemic absorption and accumulation. Samples were taken after repeated treatment with 50 μM MIV-150 alone (daily) or 50 μM MIV-150 with zinc acetate (daily versus every other day). MIV-150 was undetectable in plasma 8 or 24 h after 2 weeks of daily or 24 h after 4 weeks of every other day treatment with either gel, but it was detected in the vaginal swabs ( Fig. 3A ). While swab MIV-150 levels were lower 24 h after daily MIV-150 treatment than daily MIV-150/zinc acetate treatment (not statistically significant), comparable levels were detected after every other day dosing with MIV-150/zinc acetate. In order to determine if MIV-150 was being absorbed systemically, followed by rapid clearance, as observed in rats (Rodriguez unpublished), blood samples collected 0.5, 1, 4, 8, and 24 h after daily gel application were tested. MIV-150 remained undetectable at all of these time points after application of either 50 μM MIV-150 alone or MIV-150 and zinc acetate (not shown). Significantly more MIV-150 was present in the swabs (but not blood) after application of the gel containing 500 μM MIV-150 compared to when the 50 μM MIV-150 gel was applied ( Fig. S4A ). Interestingly, higher amounts of MIV-150 were associated with the cervical tissues 8 h (but not 24 h) after the last (daily) dose of the MIV-150/zinc acetate gel compared to daily dosing of 50 μM MIV-150 alone or every other day dosing of MIV-150/zinc acetate ( Fig. 3B ). Concentrations associated with the vaginal tissues were comparable 8 h after the last application of the MIV-150/zinc acetate or MIV-150 gels, and even higher in the MIV-150-treated animals at the 24 h time point. 
In animals treated with 500 μM MIV-150 much more MIV-150 was associated with vaginal tissues and comparable levels were associated with the cervical tissues (compared to the 50 μM MIV-150-treated animals; Fig. S4B ).
Low dose MIV-150-containing gels do not select for infection with drug resistant viruses
Infections occurring in the presence of MIV-150 (observed after daily MIV-150 or every other day MIV-150/zinc acetate treatment) might reflect the selection of drug resistant variants or simply failure of the drug to prevent wild type virus infection. To determine if MIV-150 had selected for drug resistant variants present in the inocula infecting these animals, we sequenced the RT gene of viral RNA isolated from plasma at peak viremia. As in the controls, the RT gene sequences from infected animals that received MIV-150 (with or without zinc acetate) were wild type ( Table S4 ). There were no amino acid changes at positions that confer resistance to NNRTIs [45] , [46] , [47] . Therefore, infections occurring in the presence of low doses of MIV-150 were not due to the selection of drug resistant variants.

Discussion
Effective microbicides that can be used frequently and independently of coitus are urgently needed to curb HIV spread worldwide. Although there have been candidate microbicides showing promise in macaques, most required mM or mg/ml amounts of the HIV-targeting anti-viral drugs, and delivery was required within minutes or a few hours before the time of virus exposure [1] , [2] , [3] , [4] , [5] , [6] , [7] . Of note, the 1% tenofovir gel that protected macaques when applied just 30 min prior to repeated vaginal challenge [5] was similarly shown to significantly reduce HIV infection in women when used no more than 12 h before and no less than 12 h after intercourse [8] . This is a critical step for the field, providing the first proof of concept that topically applied gels can protect against vaginal HIV infection. Notably, there is increasing interest in identifying microbicide approaches that are not based on drugs used as first line therapies against HIV-1, or that exhibit cross-resistance with them, in order to limit the emergence of drug resistant viruses. We observed absolute protection of macaques against vaginal infection for at least 24 h after daily application of a novel gel comprising a combination of only 50 μM of the NNRTI MIV-150 with 14 mM zinc acetate dihydrate. In contrast, gels containing either component alone afforded only partial protection, with zinc acetate-containing gels providing significant protection when applied daily or every other day (even in the absence of MIV-150). Because of this significant protection, gels with zinc acetate alone should also be considered for further development and human testing, since they are not expected to promote drug resistance.
Macaque models represent an informative system to evaluate microbicide candidates before clinical testing [48] . In fact, vaginal testing of tenofovir gel, using the repeated low dose challenge model, predicted its activity in humans [5] , [8] . For greater stringency, we chose to employ a single high dose challenge of animals pretreated with Depo-Provera, conditions which increase susceptibility to infection. Our inocula contained 10^3 TCID50 (>1.6×10^6 RNA copies), which is at least 100-fold greater than the median virus RNA levels typical of human semen [49] , [50] , [51] , [52] . This was combined with applying the gel repeatedly, an administration regimen aimed not only at evaluating protection, but also at identifying any adverse reactions that might augment transmission (since our inoculum was intended not to infect 100% of the control animals). It is interesting to note that there was a trend for increased frequency of infection after repeated application of the placebo MC compared to single MC dosing (81% versus 56%; p<0.2). This was not observed with any of the other gels and, in fact, the infection frequency in the repeated carrageenan vehicle treatment group was comparable to the single treatment MC group (64% versus 56%, p<0.8). In contrast, gels containing zinc acetate or MIV-150/zinc acetate were able to significantly protect against infection in this rigorous test model.
There are only a few reports on the anti-viral activity of zinc salts against HIV and other viruses, including HSV-2 [23] , [24] , [25] . This is probably largely due to the toxicity often observed in vitro , which confounds accurate demonstration of anti-viral activity, and emphasizes the need to evaluate zinc in vivo . A more detailed analysis of zinc's mechanism of action is underway to properly understand the basis for the efficacy of our combination gel. We recently found that zinc acetate-containing formulations in carrageenan are extremely effective against vaginal and rectal HSV-2 infection in mice (Fernández-Romero, unpublished), supporting earlier studies reporting zinc's anti-herpetic activity [23] , [25] . Therefore, in addition to directly preventing the spread of immunodeficiency viruses as shown here, a zinc-containing gel might block other STIs like HSV-2 and thereby help reduce HIV spread indirectly as well. Zinc has been reported to possess immunomodulatory activities [53] , and it is possible that modifications of the cellular milieu within the mucosal tissues render the animals resistant to infection. Our initial studies did not reveal obvious changes in the cytokines and chemokines present in the vaginal fluids, but further ex vivo infection and immune studies on biopsies from treated macaques [6] will be useful to dissect the mechanism of zinc acetate's anti-viral activity.
On the other hand, MIV-150 has potent anti-viral properties and may have virucidal activity [13] and, therefore, might act on viruses in the vaginal cavity prior to their entry into the body, as well as within the tissues where infection is established and initially amplified [54] . Pharmacokinetic (PK) studies in rats confirmed that MIV-150 is absorbed after vaginal application, but detection in the blood required 10–100 times more MIV-150 than used herein (Rodriguez, unpublished). These rat studies also showed that MIV-150 has a short plasma half-life, becoming undetectable within 24 h. Therefore, it is not surprising that we did not detect MIV-150 in the plasma after repeated application of gels containing 50 μM MIV-150, but did detect it in the cervico-vaginal fluids and associated with the tissues. This is similar to reports of gels containing the NNRTIs TMC120 or UC-781, where little or no drug was detected in the blood after repeated doses of 0.1–10 mg/ml [55] , [56] . The MIV-150 levels in the swabs did not predict efficacy, since (i) similar levels were seen 24 h after the daily versus every other day dosing of MIV-150/zinc acetate, but the protective effect of MIV-150 was not apparent in the latter and (ii) while significantly more MIV-150 was present in the swabs after application of the gel containing 500 μM MIV-150 compared to when the 50 μM MIV-150 gel was applied, the former was not more effective at preventing infection. Rather, the level of MIV-150 associated with the cervical tissues appears to be a better predictor of efficacy. The mean levels of MIV-150 associated with the cervical tissues 8 h after daily treatment with 2 ml of 50 μM MIV-150/14 mM zinc acetate dihydrate (630 pg/mg of tissue) or 50 μM MIV-150 (240 pg/mg) for 2 weeks are higher than the levels of TMC120 reportedly associated with the cervical tissues 8 h after daily treatment for 7 days with 273 μM TMC120 (3–80 pg/mg) [55] . 
This might be due to differences in the formulations that affect the release of the NNRTI and potentially the association of the drug with the tissues, and/or specific properties of the NNRTIs.
The few animals that became infected in the presence of MIV-150 bore virus expressing the wild type RT gene, indicating that the low doses of MIV-150 used did not select for resistant variants. Viral stocks were grown for a limited number of in vitro passages, in the absence of drug selection, and therefore the virus was expected to be clonal at the time of challenge [21] , [57] . It is possible that drug resistance will emerge if an infected individual uses an NNRTI-containing gel. However, the favorable resistance profile of MIV-150 and its rapid clearance from the blood, even if it is absorbed (at undetectable levels) after topical application, should lessen the pressure on the virus to select escape variants. Macaque studies are underway to evaluate the emergence of drug resistance during prolonged treatment of infected animals with MIV-150. Although there was no impact of tenofovir on drug resistance within the people who became infected with HIV in the CAPRISA 004 trial [8] , tenofovir is used in current HIV therapies. This ultimately increases the chances of transmission of tenofovir resistant viruses (that might develop in HIV-infected people receiving tenofovir therapy), since they would not be blocked by the tenofovir gel. Hence, identifying an effective gel containing novel anti-viral components that are not used in HIV treatment (like MIV-150 and zinc acetate) is critical to also prevent the spread of viruses that become resistant to current treatments.
Unlike earlier studies, we investigated the activity of a gel comprising two distinct active ingredients: the NNRTI MIV-150 and zinc acetate. The prototype formulation of the MIV-150/zinc acetate (and MIV-150) gel tested herein contains 1% DMSO and, while there is no federal regulation or guidance preventing the use of DMSO in microbicides, the promising MIV-150/zinc acetate formulation is currently being optimized without DMSO for human testing. Efficacy at low doses of an NNRTI that is rapidly cleared if absorbed and is not already used to treat HIV is highly advantageous, as it helps reduce cost, the likelihood of drug resistance, and, importantly, systemic toxicity. Notably, we demonstrated complete protection by this combination gel after daily usage, an effect that lasted for at least 24 h after the last application. In addition, the significant protective effect of zinc acetate alone, even when used every other day, is encouraging for the development of microbicides that are highly unlikely to select for or induce the development of drug resistant viruses. Safety and (ultimately) efficacy testing of these formulations in humans is required before they can be advanced for human use. These results represent a major step forward for the development of coitally independent microbicide gels that would be used on a frequent basis to help stem the spread of HIV and other STIs.

Author Contributions

Conceived and designed the experiments: MR. Performed the experiments: JK MA RS MH AR LK CJA RM SS AC MP JAF-R TMZ. Analyzed the data: JK MA MH JAF-R TMZ MR. Contributed reagents/materials/analysis tools: JDL MP. Wrote the paper: JK MA JDL TMZ MR. Other: Coordinated treatment and handling of animals as well as shipment of samples: AG JB.
Background
Repeated-use, coitus-independent microbicide gels that do not contain antiretroviral agents also used as first line HIV therapy are urgently needed to curb HIV spread. Current formulations require high doses (millimolar range) of antiretroviral drugs and typically only provide short-term protection in macaques. We used the macaque model to test the efficacy, for up to 24 h after repeated gel application, of a novel combination microbicide gel containing zinc acetate and micromolar doses of the non-nucleoside reverse transcriptase inhibitor MIV-150.
Methods and Findings
Rhesus macaques were vaginally challenged with SHIV-RT up to 24 h after repeated administration of microbicide versus placebo gels. Infection status was determined by measuring virologic and immunologic parameters. Combination microbicide gels containing 14 mM zinc acetate dihydrate and 50 μM MIV-150 afforded full protection (21 of 21 animals) for up to 24 h after 2 weeks of daily application. Partial protection was achieved with the MIV-150 gel (56% protection relative to control at 8 h after the last application, 11% at 24 h), while the zinc acetate gel afforded more pronounced protection (67% at 8–24 h). Marked protection persisted when the zinc acetate or MIV-150/zinc acetate gels were applied every other day for 4 weeks prior to challenge 24 h after the last gel was administered (11 of 14 protected). More MIV-150 was associated with cervical tissue 8 h after daily dosing of MIV-150/zinc acetate versus MIV-150 alone, while comparable MIV-150 levels were associated with vaginal tissues and at the 24 h time point.
Conclusions
A combination MIV-150/zinc acetate gel and a zinc acetate gel provide significant protection against SHIV-RT infection for up to 24 h. This represents a novel advancement, identifying microbicides that do not contain anti-viral agents used to treat HIV infection and which can be used repeatedly and independently of coitus, and underscores the need for future clinical testing of their safety and ability to prevent HIV transmission in humans.

Acknowledgments

We thank the veterinary staff at the TNPRC for their continued support; Daniel Gawarecki for help with statistical analyses; and Julian Bess, William Bohn, Jeremy Miller, Terra Schaden-Ireland, Rodman Smith, and Elena Chertova, at NCI-Frederick, for producing, inactivating, purifying and characterizing AT-2 SIV and microvesicle preparations. Special thanks to David Phillips and Robin Maguire for scientific discussions on the concept and design of the gels. 174xCEM cells, courtesy of Peter Cresswell, were obtained from the NIH AIDS Research and Reagent Program.

PLoS One. 2011 Jan 5; 6(1):e15835
PMC3016414 (PMID 21246053)

Introduction
Glucose-6-phosphate dehydrogenase (G6PD) deficiency affects more than 400 million people worldwide, with a prevalence varying from 10 to 25% in most areas where malaria is endemic. This genetic defect provides partial protection against malaria, but may lead to severe hemolytic episodes after the administration of some drugs (anti-malarials, anti-inflammatories, vitamin K, etc.), the ingestion of fava beans (favism), or infection [1] – [3] . Typically, the first symptoms appear 24–48 hours after the intake of pro-oxidant drugs or fava beans.
While the molecular biology of G6PD deficiency has been extensively studied [2] , the molecular mechanisms leading to the hemolytic crisis are still unclear. G6PD-deficient red cells (G− RBCs) display a failure of the protective response to oxidant stress, which leads to irreversible oxidation of glutathione [1] , [2] , [4] – [6] . The accumulation of large hemichrome aggregates (Heinz bodies) is an additional hallmark of the hemolytic crisis in G− individuals [7] .
Some membrane alterations have been described in G − RBCs, such as the oxidation and clustering of membrane proteins, the binding of hemichromes to the internal face of the membrane, the destabilization of the membrane and the release of micro-vesicles [8] – [10] . Interestingly, increased hemichrome formation has been observed in G − RBCs infected by malaria parasites [11] . The data available on membrane modifications are in any case insufficient to formulate a clear hypothesis as to the mechanisms of membrane destabilization and G − RBC destruction. The dearth of information concerning the mechanisms of red cell lysis represents a practical drawback which impedes both the prediction of the hemolytic activity of drugs and the understanding of the large individual variation in susceptibility, even in the presence of the same G6PD mutation [1] .
The authors, as well as others, have shown that the band 3 red cell membrane protein (AE1) displays a marked tendency to become tyrosine phosphorylated in G − RBCs after –SH group oxidation or GSH depletion by 1-chloro-2,4-dinitrobenzene (CDNB) or diamide [12] , [13] . We have also demonstrated that Syk tyrosine kinase strongly increases its affinity to oxidized AE1 and induces its selective phosphorylation [13] . Hyper-phosphorylated AE1 showed a manifest tendency to cluster, indicating a change in its interactions with the cytoskeletal network. Furthermore, abnormal AE1 tyrosine phosphorylation has been observed in a number of red cell disorders [14] .
In the present study we demonstrate that following –SH group oxidation induced by diamide (an –SH group oxidant) or divicine, a reactive oxygen-generating compound held responsible for favism [15] , AE1 becomes increasingly and irreversibly phosphorylated in G − RBCs. Syk kinase inhibition largely prevents red cell membrane lysis and vesiculation, strongly suggesting a functional role for AE1 tyrosine phosphorylation in red cell membrane destabilization.

Materials and Methods
Treatment of RBCs
Venous blood was drawn from 5 healthy and 5 G6PD deficient volunteers (Mediterranean variant 563 C —> T with approximately 2–3% of normal red cell G6PD activity level). Written informed consent was obtained from each blood donor to allow: “The use of 10 ml of their blood donation for research use. In particular, to study the causes of hemolytic episodes in G6PD deficient subjects”. Ethical approval to perform the present study was obtained from the “Ethical Committee of the ASL 1 – Sassari”. The data were analyzed anonymously and all clinical investigation was conducted according to the principles expressed in the Declaration of Helsinki. None of the donors showed clinical and laboratory evidence of hemolysis and all presented hemoglobin levels within the normal range. G6PD deficiency assessment was performed as previously described [22] . Red cells were pelleted at 1,000× g for 10 min at room temperature. After removal of the buffy coat, red blood cells (RBCs) were again pelleted and washed 3× with phosphate-buffered saline (137 mM NaCl, 2.7 mM KCl, 8.1 mM K 2 HPO 4 , 1.5 mM KH 2 PO 4 , pH 7.4) in the presence of 5 mM glucose (PBS-glucose) to obtain packed cells. To simulate oxidative stress, RBCs at 30% hematocrit were treated for different incubation times (0–600 min) at 37°C in PBS-glucose containing 0.25 mM of diamide (Sigma) or divicine (Serva, Heidelberg). Divicine was activated by β-glucosidase treatment as previously described [5] .
For the inhibition of tyrosine phosphatases, RBCs were suspended at 30% hematocrit in PBS-glucose and incubated with 1 mM of o-vanadate in the presence or absence of oxidants.
For the inhibition of Syk kinase, RBCs were suspended at 30% hematocrit in PBS-glucose and pre-incubated with 10 μM Syk inhibitors II and IV (Calbiochem) for 1 h at 37°C before oxidant treatments. The inhibitors were not washed prior to the oxidant treatment.
Each reaction was stopped by washing 3× in PBS-glucose, and membranes were prepared as described below. For all the protocols described above, untreated controls were processed identically, the only difference being the deletion of the stimulant/inducer from the incubation.
RBC membrane preparation
Standard hypotonic membranes were prepared at 4°C on ice as previously described [18] . Briefly: 150 μL of packed RBCs were diluted into 1.5 mL of cold hemolysis buffer (5 mmol/L sodium phosphate, 1 mmol/L EDTA, pH 8.0) containing a protease and phosphatase inhibitor cocktail (Sigma-Aldrich, St. Louis, MO) and then washed up to 4 more times in the same buffer (until membranes became white) in a refrigerated Eppendorf microfuge at 25,000× g. The preparations were stored frozen at −20°C until use. Membrane protein content was quantified using the DC Protein Assay (Biorad).
Vesicle isolation
The supernatants of the samples treated as described above were collected and centrifuged at 25,000× g for 10 min at 4°C to eliminate spontaneously formed red cell ghosts. Lysis was quantified by measuring hemoglobin absorbance at 405 nm and expressed in nmoles/ml. After the addition of phosphatase inhibitors, supernatants were centrifuged for 3 hours at 100,000× g in a refrigerated ultracentrifuge (Beckman) to isolate vesicles. Vesicle proteins were identified by MALDI-TOF MS [41] .
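The absorbance-to-concentration step above follows directly from the Beer-Lambert law. A minimal sketch of the conversion from A405 to nmol/mL; the millimolar extinction coefficient, path length and dilution defaults below are illustrative placeholders, not values reported in the paper:

```python
def hemoglobin_nmol_per_ml(a405, epsilon_mM=125.0, path_cm=1.0, dilution=1.0):
    """Convert Soret-band absorbance (405 nm) to heme concentration.

    Beer-Lambert: A = epsilon * c * l, so c (mM) = A / (epsilon * l);
    1 mM corresponds to 1000 nmol/mL. The default epsilon_mM
    (mM^-1 cm^-1) is an ILLUSTRATIVE placeholder value.
    """
    c_mM = a405 * dilution / (epsilon_mM * path_cm)
    return c_mM * 1000.0  # nmol/mL

print(hemoglobin_nmol_per_ml(0.50))  # 4.0 nmol/mL with the placeholder epsilon
```

With a calibrated extinction coefficient this gives the hemoglobin-release figures that the lysis time courses are built from.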
SDS-PAGE of membrane proteins
Membrane proteins were solubilized in Laemmli Buffer [42] under reducing (2% DTT) or non-reducing conditions in a volume ratio of 1∶1. SDS-PAGE analysis was conducted by heating the samples for 5 min at 100°C and loading 20 μg membrane proteins on the 8% gel for protein staining by blue colloidal Coomassie [43] .
Immunoblot analysis and IR fluorescence detection
Proteins separated by SDS/PAGE were transferred to nitrocellulose membranes as previously described [13] and then probed with either anti-phosphotyrosine antibody (Santa Cruz, CA) diluted to 1∶2000, or with anti-band 3 (AE1) antibody (Sigma Aldrich) diluted to 1∶50000. Secondary antibodies conjugated to infrared fluorescent dyes excitable at 800 nm (IRDye 800CW, Li-COR-USA) were then used to visualize the desired antigens using an 800 nm laser scanner (Odyssey, Licor, USA). To establish the specificity of anti-phosphotyrosine antibodies, proteins were dephosphorylated prior to gel electrophoresis by incubating the samples for 20 min at 30°C with 6 μL (400 units) lambda phosphatase (Santa Cruz, CA) in 50 mM Tris Buffer pH 7.5, 0.1 mM Na 2 EDTA, 5 mM dithiothreitol, 2 mM MnCl 2 .
Membrane protein cluster separation by size exclusion chromatography
AE1 was labeled with eosine maleimide and red cell membranes were fractionated as previously described [34] . With minor modifications, 1 ml membranes were solubilized in 2 ml extraction buffer (10 mM Hepes, 130 mM NaCl, 10 mM N -ethylmaleimide, 1 mM EDTA, 1 mM PMSF, 1% triton-X100, pH 7.4), gently shaken for 10 min at 20°C and centrifuged at 13000 rpm for 5 min in an Eppendorf microfuge. The supernatant was applied to a 40×1 cm column filled with Sepharose CL-6B equilibrated with a solution containing 10 mM Hepes, 50 mM NaCl, 0.1% triton-X100 (pH 7.4), at a flow rate of 1 ml/min. Constant flow was maintained using an HPLC pump. The effluent was collected in 1 ml fractions. Eosine maleimide fluorescence was quantified as described [18] and, to measure the percentages of clustered AE1, the fluorescence measured in the high molecular weight fraction was expressed as a percentage of the total fluorescence eluted from the columns. The high molecular weight fraction was concentrated 10 fold and analysed for immunoblotting with anti-AE1, anti-phosphotyrosine and anti-Syk as described above.
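The clustered-AE1 percentage described above is a simple ratio over the eluted fractions. A sketch, assuming one fluorescence reading per 1 ml fraction and that the high molecular weight material elutes in known early fractions (both assumptions made here for illustration):

```python
def percent_clustered(fraction_fluorescence, high_mw_indices):
    """Fluorescence in the high molecular weight fractions expressed as a
    percentage of the total fluorescence eluted from the column."""
    total = sum(fraction_fluorescence)
    high = sum(fraction_fluorescence[i] for i in high_mw_indices)
    return 100.0 * high / total

# Hypothetical elution profile; the first two fractions hold the cluster peak.
profile = [5.0, 3.0, 12.0, 20.0, 10.0]
print(percent_clustered(profile, [0, 1]))  # 16.0
```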
Preparation of cells for immunofluorescence
Control and G6PD deficient RBCs were pelleted and washed twice in PBS 1× containing 5 mM glucose and then fixed for 5 min in 0.5% acrolein in PBS. Cells were rinsed three times then permeabilized in PBS containing 0.1 M glycine (rinsing buffer) plus 0.1% Triton X-100 for 5 min and again rinsed 3× in rinsing buffer. To ensure complete neutralization of unreacted aldehydes, the cells were then incubated in rinsing buffer at room temperature for 30 min. After incubation, all nonspecific binding was blocked by incubation again for 60 min in blocking buffer (PBS containing 0.05 mM glycine, 0.2% fish skin gelatin and 0.05% sodium azide). Staining of fixed, permeabilized RBCs was performed by using specific antibodies diluted in blocking buffer. After labeling, resuspended RBCs were allowed to attach to cover slips coated with polylysine, and the cover slips were mounted by using Aqua-Mount (Lerner Laboratories, New Haven, CT). The auto-fluorescence of hemichromes was visualized by exciting at 488 nm and observing their emission in the 630–750 nm range. Samples were imaged with a Bio-Rad MRC1024 (Bio-Rad) confocal microscope equipped with a 60×1.4 numerical aperture oil immersion lens.
Assay of hemichromes
Hemichromes were quantified by measuring heme absorbance at 560, 577 and 630 nm [11] and expressed as nmoles/mL of solubilised membranes.
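Quantifying heme species from absorbances at three wavelengths amounts to solving a small linear system A = E·c, where E holds the extinction coefficient of each species at each wavelength. A sketch with a hand-rolled 3×3 solver; the coefficient matrix and absorbances below are purely illustrative, not the coefficients used in ref. [11]:

```python
def solve3(E, A):
    """Solve the 3x3 linear system E @ c = A by Cramer's rule."""
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(E)
    out = []
    for j in range(3):
        M = [row[:] for row in E]
        for i in range(3):
            M[i][j] = A[i]
        out.append(det3(M) / d)
    return out

# Rows: 560, 577, 630 nm; columns: three heme species (placeholder coefficients).
E = [[10.0, 4.0, 7.0],
     [15.0, 3.0, 5.0],
     [2.0, 9.0, 6.0]]
A = [26.0, 26.5, 18.5]   # absorbances measured at the three wavelengths
print(solve3(E, A))      # [1.0, 0.5, 2.0] -> species concentrations
```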
Assay of glutathione
GSH estimations were performed using 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) [44] .

Results
Short and long term effects of oxidants in G6PD deficient red cells
Previous work has described how oxidant treatments induce more intense AE1 tyrosine phosphorylation in G − RBCs than in control RBCs [13] , [14] , [16] . In the present study, we analyzed AE1 phosphorylation and a series of additional parameters over longer exposure times to diamide, an -SH group oxidant reagent, or to divicine [15] , a compound extracted from fava beans considered responsible for severe hemolytic crises in G6PD deficient subjects [5] , [15] . The long term effects of oxidants in G − RBCs were not easily predictable: although the G − RBC samples used in our experiments had low G6PD levels (Mediterranean variant 563 C —> T with approximately 2–3% of normal red cell G6PD activity), the hexose monophosphate shunt in these cells presented normal activity and could be further activated 2–3 fold following oxidant treatments [17] .
Following treatment with 0.25 mM diamide, approximately 80% of reduced glutathione (GSH) was oxidized within 5 minutes, and pre-treatment levels were restored within 45 minutes in control RBCs. Conversely, in G − RBCs reduced GSH further declined and reached un-measurable levels within 2 hours of incubation ( Fig. 1A ). An indistinguishable GSH response was elicited by 0.5 mM divicine. On the basis of these results we decided to use these concentrations for both diamide and divicine.
Figure 1C shows that following oxidant treatment, in G − RBCs, AE1 phosphorylation progressively increased during the course of incubation (10 hours). In control RBCs AE1 phosphorylation was completely reverted in approximately 1 hour. Syk kinase inhibitors markedly reduced the rate of AE1 phosphorylation both in control and G − RBCs. AE1 oxidation (disulfide cross-linking) paralleled its tyrosine phosphorylation both in control and G − RBCs ( Fig. 1D ). As expected, Syk inhibitors did not exert an apparent effect on AE1 oxidation.
Figure 1E shows that following oxidant treatment, in G − RBCs, AE1 was increasingly recruited in a high molecular weight cluster [13] , [18] , while in control RBCs, AE1 clustering showed transient behavior. AE1 large cluster formation appeared to be delayed in comparison to the AE1 phosphorylation and oxidative cross-linking. Syk kinase inhibitors decreased the amount of clustered AE1. Analysis of the clustered AE1 fraction revealed that AE1 was prevalently oxidized, cross-linked and phosphorylated. Anti-Syk western blot revealed a time dependent increase of Syk associated with the high molecular weight fraction. In the same fraction we observed the presence of hemoglobin denaturation products (see below).
Figure 1B shows that the hemichrome quantities measured in isolated RBC membranes increased with time after oxidant treatment of G − RBCs. In control RBCs no hemichrome formation was observed. Syk inhibitors did not exert an apparent effect on hemichrome formation. By also measuring hemichromes in the high molecular weight clusters, we estimated that approximately 75% of hemichromes were associated with this membrane protein complex.
At the chosen concentrations, diamide and divicine caused very similar effects in all monitored parameters, indicating that the long term effects of divicine are probably triggered by sulfhydryl group oxidation.
In conclusion, this set of results indicates that even after long term incubation, G − RBCs are unable to restore the initial levels of GSH and, consequently, to reduce the disulfide bonds of AE1. In G − RBCs, AE1 tyrosine phosphorylation and clustering also increase with time after oxidant treatment. Syk inhibitors cause a marked reduction of both AE1 phosphorylation and clustering without affecting its oxidative cross-linking and hemichrome formation, suggesting that persistent AE1 phosphorylation plays a role in the formation of the large membrane complexes.
Effect of Syk kinase and PTP inhibitors on G6PD deficient red cell lysis caused by oxidant treatments
In the supernatants of G − RBC cultures we noticed marked lysis some hours after exposure to oxidants; no hemolysis was observed in control RBCs. Figure 2A shows that red cell lysis became evident after 2–3 hours of incubation in G − RBCs, increasing progressively with time. Pre-treatment with Syk kinase inhibitors caused a consistent reduction of the hemolysis, while phosphatase inhibition by o-vanadate caused an accelerated and more intense hemoglobin leakage, indicating an association between G − RBC lysis and AE1 phosphorylation levels ( Fig. 2B ).
Confocal microscopy analysis revealed that oxidant exposure induced marked changes exclusively in G − RBCs. Following a 2 hour incubation with diamide 0.25 mM, hemichrome formation was already observable: staining with both AE1 and anti-phosphotyrosine antibodies revealed an uneven membrane distribution and an apparent co-association with the hemichrome clusters ( Fig. 3 ). Hemichromes and tyrosine phosphorylation were not observed in control RBCs treated in the same conditions.
Characterization of membrane vesicles released from G6PD deficient red cells following oxidant treatments
Vesicles were isolated from red cell cultures (4 hours of incubation with 0.25 mM diamide) by ultra-centrifugation and then characterized by confocal analysis of hemichrome auto-fluorescence, western blotting and mass spectrometry. Confocal microscopy analysis was suggestive of large amounts of hemichromes in isolated vesicles ( Fig. 4A ). The presence of hemichromes in isolated vesicles was further demonstrated by visible spectrophotometry, indicating that a large fraction (approximately 90%) of the hemoglobin contained in the vesicles was present in the form of hemichromes (data not shown). SDS-PAGE separation of vesicle proteins and the identification by mass spectrometry of the more prominent bands further supported the presence of large amounts of hemoglobin products ( Fig. 4B ). Western blotting analysis of isolated vesicles demonstrated that they contained oxidized, cross-linked and phosphorylated AE1 ( Fig. 4C ). Quantitative measurement of AE1 revealed the time course of vesicle release. A marked reduction of vesicle release was observed following Syk inhibitor treatment ( Fig. 4D ).

Discussion
In a previous study [13] we observed that AE1 phosphorylation was apparently more pronounced and long-lasting in G − RBCs than in control RBCs following their treatment with diamide but, after a short incubation time, we did not observe any functional consequence. Hemolytic crises in G − subjects occur many hours after the intake of pro-oxidant compounds [7] , therefore, in the present study we investigated the long term effects (up to 10 hours) of the treatment of G − RBCs with diamide (an -SH group oxidant reagent) or divicine (a compound extracted from fava beans held responsible for severe hemolytic crises in G6PD deficient subjects) [15] .
The treatment with these compounds caused similar short term effects in G − and control RBCs but after 2–3 hours of incubation, striking differences were observed between the two types of red cells. While control RBCs returned to the pre-treatment conditions, G − RBCs showed a time dependent modification of several parameters: i) the GSH levels and -SH groups of AE1 continued to decrease; ii) Syk tyrosine kinase became irreversibly associated with the membrane; iii) AE1 tyrosine phosphorylation steadily increased over time, iv) hemichromes became progressively bound to the membrane; v) phosphorylated AE1 formed large clusters; vi) these clusters were exovesiculated and this phenomenon was associated with lysis.
We have observed that Syk inhibitors consistently reduced the amount of vesiculation and lysis. The protective effect of Syk inhibitors suggested that, in G − RBCs, AE1 hyper-phosphorylation may play a role in the progressive membrane damage induced by oxidant treatments.
This hypothesis is supported by some additional findings: i) o-vanadate (tyrosine phosphatase inhibitor) further increased AE1 phosphorylation and the rate of vesicle release, ii) we have isolated membrane protein complexes which contain both hemichromes and phosphorylated AE1, iii) confocal microscopy confirmed the co-association between hemichromes and phosphorylated AE1, iv) vesicles isolated from G − RBCs revealed their high content of hemichromes and phosphorylated AE1.
The present findings are in accordance with previous results which have demonstrated that Syk kinase binds preferentially to oxidized AE1, inducing its tyrosine phosphorylation [13] . Increased red cell membrane fragility has often been associated with membrane vesiculation in different hematological situations [19] – [21] . It is interesting to note that the clustering of AE1 through its binding to hemichromes has been demonstrated in most of these conditions [22] – [29] . Moreover, in hemolytic diseases, the accumulation of vesicles derived from the red cell membranes has received particular consideration because of their potential pro-coagulant and pro-inflammatory activities [30] , [31] . No direct data about vesicle accumulation in the plasma of G − subjects undergoing hemolysis are currently available, but the presence of high levels of hemoglobin in the plasma strongly suggests the occurrence of intravascular hemolysis due to red cell membrane instability [7] .
In conclusion, the present study contributes to the understanding of the mechanism of membrane destabilization and vesicle release occurring after the treatment of G − RBCs with oxidant compounds ( Fig. 5 ). As malaria parasites have been demonstrated to exert both oxidative damage and the binding of hemichromes to the red cell membrane [23] , [32] it will be of great interest to verify whether the membrane alterations described herein may also be promoted by malaria parasites. In this case, the data presented may contribute to an explanation of the mechanism of malaria protection in G6PD deficient subjects.
AE1 clusters are known to bind naturally occurring antibodies and to trigger red cell phagocytosis [33] , [34] . These phenomena may enhance the removal of vesicles which contain hemichromes, however, the role of antibodies and splenic macrophages has yet to be elucidated. In this respect, the functional meaning of the proposed mechanism in the selective removal of hemichromes from the red cell membrane versus the induction of membrane instability responsible for red cell lysis remains to be clarified.
Additional studies will be required to further define a series of events such as: the irreversibility of AE1 tyrosine phosphorylation; the relative contributions of Syk kinase activation [12] , [35] – [37] and tyrosine phosphatase inhibition [38] – [40] ; the changes of AE1 – cytoskeleton interactions induced by AE1 phosphorylation; and the effect of additional membrane protein modifications [33] . | Conceived and designed the experiments: AP EF FM FC FT. Performed the experiments: AP EF FM FC AK. Analyzed the data: FT PP. Contributed reagents/materials/analysis tools: FC. Wrote the paper: AP EF FT LFS. Blood sample collection: LFS. Designed the experiments: PP.
Background
While G6PD deficiency is one of the major causes of acute hemolytic anemia, the membrane changes leading to red cell lysis have not been extensively studied. New findings concerning the mechanisms of G6PD deficient red cell destruction may facilitate our understanding of the large individual variations in susceptibility to pro-oxidant compounds and aid the prediction of the hemolytic activity of new drugs.
Methodology/Principal Findings
Our results show that treatment of G6PD deficient red cells with diamide (0.25 mM) or divicine (0.5 mM) causes: (1) an increase in the oxidation and tyrosine phosphorylation of AE1; (2) progressive recruitment of phosphorylated AE1 in large membrane complexes which also contain hemichromes; (3) parallel red cell lysis and a massive release of vesicles containing hemichromes. We have observed that inhibition of AE1 phosphorylation by Syk kinase inhibitors prevented its clustering and the membrane vesiculation while increases in AE1 phosphorylation by tyrosine phosphatase inhibitors increased both red cell lysis and vesiculation rates. In control RBCs we observed only transient AE1 phosphorylation.
Conclusions/Significance
Collectively, our findings indicate that persistent tyrosine phosphorylation produces extensive membrane destabilization leading to the loss of vesicles which contain hemichromes. The proposed mechanism of hemolysis may be applied to other hemolytic diseases characterized by the accumulation of hemoglobin denaturation products.

Acknowledgments

We would like to thank Giuseppe Casu for his valuable collaboration; Elena Valente and Daniela Ulliers for technical assistance.

Citation: PLoS One. 2011 Jan 5; 6(1):e15847
PMC3016415 (PMID 21246054)

Introduction
Herbivorous insects constitute about a quarter of all living species [1] , and butterflies make up an important part of that diversity. The greater species diversity of herbivorous insect groups, as compared to their non-plant-feeding sister-groups [2] , [3] , suggests that host use may be relevant to explaining insect diversity [but see 4] . Previously, studies have focused mainly on host plant specialisation. But specialisation alone is a depletive source of diversification: it would eventually run out of variation to act upon – preventing further specialisation – and would then run into a dead end [5] . Yet specialisation is not a dead end. Rather, diet breadth is a dynamic trait, with evidence of shifts and expansions as well as specialisation [6] – [8] . And these changes in host use may be the necessary injection of new variation that facilitates diversification [5] , [9] .
The “oscillation hypothesis” [5] , [10] proposes that the increased diversity of herbivorous insects is largely a result of expansions in diet breadth followed by specialisation, in other words oscillations in diet breadth. These oscillations are then coupled with correlated changes in geographic range size, which in turn may lead to population fragmentation.
An important requirement for the oscillation hypothesis is that diet breadth should be correlated with geographic range size, as wide geographic ranges will set the stage for subsequent local adaptation and specialisation. This is because in a larger geographic range the environment is likely to be more heterogeneous, with differences in, for example, climate, local abundance of host plants, or interactions with competitors or parasites. Although gene flow can be high during periods of expansion, it may decrease with time as populations become increasingly adapted to local conditions. This causes the oscillation to swing back toward a more specialised use of locally favoured host plants, and this geographic variation in host use may give rise to population fragmentation and speciation. That is, expansions in diet breadth and geographic range are the source of new variation that allows further specialisation and speciation, and hence may be an important process behind the diversification of plant-feeding insects. There is support for this in the increased diversification of insect groups that have passed through such an oscillation in diet breadth as compared to their primitively specialised sister-groups [5] , [11] . The greater diversity of these groups corresponds to the predictions made by the oscillation hypothesis, but the mechanistic assumptions underlying the process remain to be tested, in particular whether increased diet breadth is positively correlated with geographic range.
Previous studies considering the relationship between diet breadth and geographic range have found a positive correlation [12] – [15] . However, the studies have all compared range sizes within a very restricted area, e.g., Germany and the United Kingdom. Range sizes of butterflies and other species do not follow national borders, and hence it is very likely that the ranges of some – if not most – species extend quite a bit outside the region under study. This means that “range size” does in fact not measure range size, but rather the ability of the species to persist within the various types of habitat that this particular region offers. As a consequence, it is important to perform the study on a geographic level that includes the whole geographic ranges of all included species. This problem will persist in any geographic region, no matter its size, and the only way to avoid it is to perform the study on a global level. This is to our knowledge the first study that investigates the relationship between diet breadth and geographic range size with a global scope. Butterflies are among the few groups where comprehensive data is available on this scale, largely due to the long-standing and widespread attention the group has received from amateurs, collectors and researchers.
The aim of this paper is to test if there is a correlation between the diet breadth and geographic range size in the butterfly subfamily Nymphalinae at a global scale. The Nymphalinae is very suitable for this study, as it is a very diverse group with variation in both diet breadth and geographic range sizes. Moreover, the recent development of well-supported phylogenies allows diet breadth and geographic range to be analysed phylogenetically. | Methods
Nymphalinae is a diverse subfamily containing approximately 496 species in 56 genera [16] . For this study, data was available for 182 species in 36 genera, which covers all five tribes. The phylogenetic analyses were performed on a subset of 144 species (in 35 genera) for which sequence data are available from previous studies [16] , [37] – [44] . Hence, although we used all species for which data were available, the data set still represents an incomplete sample. However, there is no apparent reason to expect sampling bias that might affect the results.
Phylogeny
Sequence data from three gene regions were included: COI (Cytochrome Oxidase subunit I), EF-1α (Elongation Factor 1 alpha) and wingless; these are available on Genbank (for accession numbers see Table S1 ). Sequences were downloaded and aligned in BioEdit v7.0.5.3 [45] . The alignment was straightforward.
The fit of different nucleotide substitution models was estimated based on likelihood scores calculated in PAUP* 4.0 [46] analysed on the ModelTest server 1.0 [47] running ModelTest 3.8 [48] using the standard AIC (Akaike Information Criterion) and using branch lengths as parameters. The suggested model, GTR+I+G, was imposed on the three gene partitions separately and Maximum likelihood (ML) analysis was conducted using RAxML 7.0.4 [49] via the Cyberinfrastructure for Phylogenetic Research (Cipres) Portal v.1.15 [50] using the default parameters. Bootstrap values were calculated from 1000 pseudoreplicates.
Diet breadth and geographic range data
Host plant data was collected from several sources [8] , [51] – [63] . Because host plant data may contain spurious/anecdotal records, a number of measures were taken to ensure a consistent approach to which records were included. We recorded the plant genera used, as records at species level may be uncertain or lacking. Records of observations made in the lab were not included as these may represent the butterfly's potential host use, rather than actual host use, and plants that are not used in the wild will not contribute to the geographic range of the butterfly (although they may lead to expansion and future increase in geographic range). In addition, rearings in the lab are lacking for many butterfly species and host plants. Further, we employed the same steps as used previously by Janz & Nylin [19] and Janz et al. [5] , where records are only included if at least one of the following criteria is met: 1) several plant genera are reported in a family, 2) several species are reported in a plant genus, 3) the plant genus is used by other butterflies in the genus, or 4) there are several independent sources. Where species and subspecies status differed between host plant data and data on geographic range, we followed the nomenclature used for the geographic range data. Two measures of diet breadth were used ( Table S2 ). The first measure was the number of genera used. The second measure was designed to reflect the greater diversity in diet breadth of butterflies that feed on plants not only in several genera but also several families and even orders. For this purpose, the plant genera were assigned to family and order according to Stevens [64] and the number of families and orders were then multiplied by the number of genera to create an index of diet breadth. The delimitation of genera and families may not be directly comparable across plant groups, which may add to the noise in the host range data set.
However, we still believe that these two measures of host range will reflect the degree of polyphagy of the butterflies and that using more refined measures of host range would only strengthen any patterns found.
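The diet breadth index described above can be computed mechanically once each host genus is mapped to its family and order. A sketch, reading the text as genera × families × orders; the three host records below are illustrative examples, not the study's data set:

```python
# Hypothetical host records: genus -> (family, order), Stevens-style taxonomy.
PLANT_TAXONOMY = {
    "Urtica":     ("Urticaceae", "Rosales"),
    "Parietaria": ("Urticaceae", "Rosales"),
    "Plantago":   ("Plantaginaceae", "Lamiales"),
}

def diet_breadth_index(host_genera, taxonomy):
    """Number of genera multiplied by the numbers of distinct families
    and orders they span (one reading of the index in the text)."""
    families = {taxonomy[g][0] for g in host_genera}
    orders = {taxonomy[g][1] for g in host_genera}
    return len(host_genera) * len(families) * len(orders)

print(diet_breadth_index({"Urtica", "Parietaria", "Plantago"}, PLANT_TAXONOMY))  # 3*2*2 = 12
```

Under this reading, a butterfly whose hosts span several families and orders scores much higher than one using the same number of genera within a single family.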
Geographic range data was collected from Savela [62] as automatically digitised maps. The accuracy of these automatically generated range maps was checked against available range maps [51] , [60] , [63] and adjusted if necessary. The downloaded maps were in a map projection that was not of equal area, therefore the maps were recreated in a GIS (ArcMap 9.2) to avoid latitudinal distortion. The downloaded maps were used to create selections on the world map that is distributed with the software and using the statistics tool, the area in km 2 , was obtained for each species geographic range ( Table S2 ).
Analyses
To test if species are independent of each other with respect to diet breadth and geographic range, lambda was estimated using BayesTraits v1.0 [65] , implementing BayesContinuous [66] and Maximum Likelihood. Lambda measures if phylogeny correctly predicts the patterns of covariance among species on a given trait, that is, the strength of the phylogenetic signal. Lambda was assessed for geographic range and both measures of diet breadth.
Correlation between diet breadth and geographic range was analysed using cross-species comparison, under the assumption that the traits are independent of history. As both diet breadth and geographic range data were strongly right skewed ( Figure 1 ) they were modified by Box-Cox transformation (transformation parameters: t geographic range = 0.132, t diet breadth index = −0.459, t number of genera = −0.349). Linear regression was performed in Stata/SE (v. 11.0).
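The Box-Cox transform with the parameters quoted above is straightforward to apply before regression. A sketch of the standard one-parameter form; only the transformation parameters come from the text, and the example input values are invented:

```python
import math

def box_cox(x, t):
    """One-parameter Box-Cox transform: (x**t - 1) / t, with the
    log(x) limit at t = 0. Requires x > 0."""
    if abs(t) < 1e-12:
        return math.log(x)
    return (x ** t - 1.0) / t

# e.g. transforming geographic ranges (km^2) with t = 0.132 from the text
ranges_km2 = [1.2e4, 3.5e5, 2.0e7]  # invented example values
transformed = [box_cox(r, 0.132) for r in ranges_km2]
```

Note that the transform is monotonically increasing for any t (including the negative parameters used for the diet breadth measures), so the ordering of species is preserved while the right skew is pulled in.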
If instead related species are assumed to be more similar than expected by chance, the analysis needs to take phylogenetic relationships among species into account. This was done using generalised estimating equations (GEE), which incorporates species relatedness as a correlation matrix and uses a generalised linear model approach, which allows data to be analysed using non-normal response variables. The data (highly skewed, positive and continuous) suggested using the gamma family and log link. Branch lengths for the phylogeny were proportional to the number of changes along each branch. The GEE analyses were run using the APE package [67] , [68] in R 2.9.2 [69] . | Results
The maximum likelihood analyses resulted in a well-supported phylogeny ( Figure S1 ), which is largely consistent with those previously published for Nymphalinae [16] . The decision whether or not to account for the phylogeny was based on the results from the test of phylogenetic signal, which tests whether species are independent of each other with respect to the variables studied, in this case, diet breadth and geographic range. Both diet breadth measures received intermediate lambda estimates (0.48 for diet breadth index and 0.45 for number of genera), indicating that there was some effect of phylogeny. Geographic range, on the other hand, appeared to be only weakly correlated with phylogeny (λ = 0.08). As geographic range and diet breadth gave such different results, and because the intermediate values for diet breadth were difficult to interpret, we chose to analyse the data using both cross-species comparisons and accounting for phylogeny using GEE.
Cross-species comparisons were performed using linear regression. However, because both diet breadth and geographic range data were strongly right-skewed, that is, most species have very small diet breadths and geographic ranges ( Figure 1 ) [17] , data had to be transformed before analysis. Box-Cox transformations gave approximately normal frequency distributions for both traits. Linear regression showed that geographic range is significantly correlated with diet breadth ( Figure 2 , diet breadth index: df = 181, r 2 = 0.17, p<0.001; number of genera: df = 181, r 2 = 0.15, p<0.001).
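The r 2 values above follow the usual definition for simple linear regression: the squared correlation between the two transformed traits. A minimal Python sketch with toy data (not the study data):

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear regression of ys on xs:
    r^2 = Sxy^2 / (Sxx * Syy), the squared Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Perfectly linear toy data gives r^2 = 1; noisier data gives lower values.
r2_perfect = r_squared([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
r2_noisy = r_squared([1.0, 2.0, 3.0, 4.0], [2.0, 5.0, 5.0, 9.0])
```

On this reading, diet breadth accounts for 17% of the variance in (transformed) geographic range, with the remainder attributable to other factors.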
A significant correlation between geographic range and diet breadth was also found when accounting for phylogenetic non-independence using GEE (diet breadth index: df phylogenetic = 39.74, estimate = 3.1×10 −8 , p<0.001; number of genera: df phylogenetic = 39.74, estimate = 1.9×10 −8 , p<0.001). This result is robust to removal of the outlier Vanessa cardui ( Figure 1 ).
Discussion
Geographic range size was found to be significantly correlated with diet breadth in Nymphalinae on a global scale. This suggests that butterflies with more diverse host plant use are more geographically widespread than butterflies with narrower host plant use. These results agree with previous studies showing that geographic range is correlated with diet breadth [12] – [14] . However, Brändle et al. [14] showed that the strength of this correlation decreased with increasing scale. The global perspective employed in this study ensures that the entire ranges of the species are considered, and the results show that the correlation is present at this global scale. Diet breadth explains 17% of the total variation in geographic range size. Considering the multitude of other factors that would be expected to influence a species' geographic range (such as climate, competition, and habitat availability), this must be seen as rather high.
In order to evaluate whether closely related butterflies are more similar than expected by chance with respect to diet breadth and geographic range, we tested both traits for phylogenetic signal. The results showed that, although diet breadth shows some effect of phylogeny, it is not highly conserved. This is interesting considering that the particular host plant taxa used are conserved, with related butterflies feeding on related plants [18] , [19] . Most host shifts are to ancestrally used host plants [8] , [19] , which suggests that using a completely new host is relatively difficult, whereas adding and subtracting from a potential range of ancestrally used host plants is easier. In other words, host use is conservative with few drastic changes in the host plant species used, but at the same time dynamic with relatively common changes in diet breadth [20] . Geographic range, on the other hand, showed little correlation with phylogeny. Results from previous studies are contradictory [12] , [21] – [24] , and the use of different methods makes the results hard to compare. Although more phylogenetic studies are necessary, at the moment the evidence appears to favour geographic range size being relatively independent of phylogeny [25] . Because the results for diet breadth and geographic range differed in their phylogenetic signal, both cross-species comparisons and GEE with phylogeny incorporated were used to test for correlation between the two traits.
Irrespective of which method was used, butterflies with larger diet breadths had significantly larger geographic ranges. This lends support to an important part of the oscillation hypothesis, in that an enlarged geographic range can provide the necessary raw material for subsequent population fragmentation and speciation [10] . However, the results do not by themselves imply anything about causation.
The original reasoning in the formulation of the oscillation hypothesis was that expansions in diet breadth should precede expansions in geographic range [10] . Ultimately, the geographic range of the host plant(s) used sets the limit of the geographic range of the herbivore, and as two plant taxa rarely have fully overlapping ranges, the potential range after a host plant colonisation should be larger than before the event. Hence, an increase in diet breadth allows the insect to expand its geographic range. However, the opposite is also possible: an expansion of the geographic range could put the herbivore in contact with novel plant species and thus increase the likelihood of colonisation through ecological fitting [26] . This alternative implies that the insect's initial geographic range was constrained by something other than the geographic range of its host, and that a change in some external factor caused its geographic range to expand. This scenario is reminiscent of the “taxon pulses” proposed by Erwin [27] , [28] to explain the geographic expansion and subsequent turnover of clades through time [also see 29] , [30] . While this scenario requires a secondary explanation for the geographic expansions (such as climatic change), it could well lead to the same correlation between diet breadth and geographic range. Hence, while we have demonstrated the correlation predicted by the oscillation hypothesis, we cannot yet distinguish between the alternative scenarios that may have given rise to the pattern. However, it must also be pointed out that the two scenarios are not mutually exclusive, and they may well be two sides of the same coin.
As mentioned, incorporating a novel host into the repertoire is not a trivial task, and is likely to require some pre-existing machinery that allows at least some individuals of the insect species to have some realised fitness on the novel plant at the time of colonisation [26] , [29] , [also see 31] , [32] . In other words, even when a host plant colonisation appears to be caused by a geographic range expansion, some ability to utilise this particular host must have existed in the species prior to the expansion.
The oscillation hypothesis was proposed as a general explanation for the elevated diversification rates in plant-feeding clades. Previous studies have found support for the overall pattern expected from the hypothesis, that diversity in host plant use is correlated with increased diversification rates [5] , [11] . However, the hypothesis is dependent upon several distinct steps, each of which needs to be demonstrated separately. There is previous evidence for the first step, that there are at least transient phases of expansion in diet breadth, i.e. specialisation is not a dead end [6] – [8] . There is also evidence for the last step, that a large geographic range may increase the likelihood of fragmentation, isolation and speciation [e.g. 33] . Plausibly, a larger geographic range will encompass more environmental heterogeneity and a more pronounced geographic mosaic [33] , [34] . Such spatial variation may often lead to divergent selection, reproductive isolation and speciation, and in some cases it may be a direct result of different interactions leading to diversifying coevolution [33] – [35] . In addition, an increase in geographic range per se, without divergent selection pressures, can lead to genetic differentiation with effects similar to local adaptation [36] .
The results presented here support the remaining middle step, in which an expansion in diet breadth leads to a correlated increase in geographic range. As a consequence, the general plausibility of the oscillation hypothesis as a driver of diversification has been strengthened. It remains to be seen, however, to what extent these results can be extrapolated to other butterflies, and to other insect groups with different patterns of geographic distribution and different feeding habits.
We conclude that the geographic range sizes of Nymphalinae butterflies are correlated with diet breadth on a global scale. In other words, butterflies with a broader diet breadth in general have larger geographic ranges than do butterflies with more limited diet breadths. Such large geographic ranges could increase the likelihood of future fragmentation and speciation, for example as a result of diversifying coevolution. Our study provides evidence for an important mechanism behind the oscillation hypothesis for the diversification of plant-feeding insects, in which an expansion in diet breadth is correlated with an expansion in geographic range.
The “oscillation hypothesis” has been proposed as a general explanation for the exceptional diversification of herbivorous insect species. The hypothesis states that speciation rates are elevated through repeated correlated changes – oscillations – in degree of host plant specificity and geographic range. The aim of this study is to test one of the predictions from the oscillation hypothesis: a positive correlation between diet breadth (number of host plants used) and geographic range size, using the globally distributed butterfly subfamily Nymphalinae. Data on diet breadth and global geographic range were collected for 182 Nymphalinae butterfly species and the size of the geographic range was measured using a GIS. We tested both diet breadth and geographic range size for phylogenetic signal to see if species are independent of each other with respect to these characters. As this test gave inconclusive results, data were analysed both using cross-species comparisons and taking phylogeny into account using generalised estimating equations as applied in the APE package in R. Irrespective of which method was used, we found a significant positive correlation between diet breadth and geographic range size. These results hold for two different measures of diet breadth and are robust to removal of outliers. We conclude that the global range sizes of Nymphalinae butterflies are correlated with diet breadth. That is, butterflies that feed on a large number of host plants tend to have larger geographic ranges than do butterflies that feed on fewer plants. These results lend support for an important step in the oscillation hypothesis of plant-driven diversification, in that it can provide the necessary fuel for future population fragmentation and speciation.
|
PMC3016416 | 21246055 | Introduction
Aspergillus fumigatus is a ubiquitous fungus that is efficiently cleared by immunocompetent hosts. Inability to efficiently clear Aspergillus under conditions of immune suppression, which is a common occurrence in organ transplant patients, induces severe invasive disease [1] . In patients with cystic fibrosis or severe asthma, fungal clearance is also impaired, which causes allergic bronchopulmonary aspergillosis (ABPA) [1] , [2] . In the lung, macrophages and neutrophils are the key cell types involved in defense against various pathogens including Aspergillus [3] . Macrophages constitute an important and primary line of defense against any infection. These cells not only serve a role in pathogen phagocytosis but can also function as modulators of the immune response [4] . Development, behavior and functional properties of macrophages are influenced by various environmental cues to which these cells are exposed [5] , [6] , [7] . Several phenotypes or classifications of macrophages have been described. However, they can best be divided into two broad categories. Classically Activated Macrophages (CAMs) induced by IFN-γ are designated as M1 macrophages [7] , [8] . Alternatively Activated Macrophages (AAMs) or M2 macrophages are so designated because of the ability of IL-4 to enhance expression of the mannose receptor, considered a distinctive feature of these macrophages [9] . While the M1/M2 designation is still used in the literature, the M2 subclass has expanded to include macrophages with diverse phenotypes and functions [7] , [8] , [10] .
The most important function of CAMs is engulfment and destruction of microbial agents. Activated CAMs produce pro-inflammatory cytokines such as TNFα and IL-6 and also show marked upregulation of inducible nitric oxide synthase (NOS2) associated with NO production, which together help destroy the phagocytosed pathogens [7] , [8] , [10] . AAMs have been best studied in the context of infections by helminths [7] , [8] . However, AAMs have also been observed during infections by intracellular bacteria [11] or viruses [12] , [13] and in other disease conditions such as allergic airways disease in mice [14] , [15] , diabetes [16] , [17] and cancer [18] , [19] . Various markers have been identified for AAMs, such as Arginase1 (Arg1), Chi3l3 (Ym1), Chi3l4 (Ym2), Fizz1 (Found in Inflammatory Zone 1) and macrophage mannose receptor (CD206). However, thus far Arg1 is regarded as the prototype activation marker for AAMs in murine macrophages [7] . Arg1 expressed by AAMs metabolizes L-Arginine (L-Arg), the common substrate for both NOS2 and Arg1, to produce ornithine and urea. Arg1 activation generates polyamines and hydroxyprolines that help in repair processes after tissue injury caused by parasitic infections and suppress Th2 effector functions [20] , [21] .
Recently, the function of AAMs was addressed either by depleting them or by using mouse models deficient in their signature molecules like Arg1 and Fizz1. Thus, in infections by Nippostrongylus brasiliensis or Schistosoma mansoni , Arg1- and Fizz1-expressing AAMs were shown to be suppressors of Th2 inflammation in the lung [21] , [22] , [23] . Furthermore, ablation of Arg1 specifically in macrophages and neutrophils exacerbated schistosomiasis, and the presence of this enzyme was necessary for downregulating chronic inflammation and suppressing fibrosis [21] . In contrast, during infections by intracellular pathogens such as Toxoplasma gondii and Mycobacterium bovis , CAMs were found to upregulate expression of Arg1 with concomitant suppression of NO production due to competition for the common substrate L-Arg [11] . This process interfered with microbial killing, since mice lacking Arg1 showed a higher survival rate [11] . The CD4 T cell memory response that helped clear Heligmosomoides polygyrus was found to be facilitated by AAMs [24] .
In the present study, we explored the nature of the early innate immune response to Aspergillus fumigatus infection of the lung. We show that after fungal infection, AAMs expressing Arg1, Ym1 and CD206 develop in the lung as early as 6 hours after infection. The expression of Arg1 in BAL CD11c + cells was only partially dependent on IL-4Rα/STAT6. Moreover, Arg1 expression was also not dependent on Dectin-1 or MyD88, pathways associated with fungal recognition and induction of immune responses [25] , [26] , [27] , [28] , [29] , [30] , [31] , [32] , [33] . However, Dectin-1 was important for the phagocytosis of Aspergillus conidia. Depletion of macrophages by clodronate-filled liposomes delayed the clearance of fungus after infection even though neutrophil numbers increased upon clodronate treatment. Alveolar macrophages from WT mice efficiently phagocytosed fungal conidia, but those from mice deficient in Dectin-1 showed impaired fungal uptake. Since Arg1, constitutively expressed by neutrophils, was previously associated with antifungal activity [34] , switching on expression of this enzyme in alveolar macrophages highlights an important antifungal defense mechanism. Taken together, our data suggest that rapid induction of Arg1 in alveolar macrophages after A. fumigatus infection is a key antifungal defense mechanism employed by the infected host to eliminate the fungus. | Materials and Methods
Ethics statement
All animal work was conducted in accordance with guidelines issued by the Institutional Animal Care and Use Committee of the University of Pittsburgh and our approved protocol ID is 1005244. The Institutional Animal Care and Use Committee of the University of Pittsburgh is in compliance with Public Health Service (PHS) Policy on Humane Care and Use of Laboratory Animals when using live, vertebrate animals. PHS Policy incorporates U.S. Government Principles, the Guide for the Care and Use of Laboratory Animals, and the Report of the American Veterinary Medical Association (AVMA) Panel on Euthanasia. Mice were bred and maintained in the Department of Laboratory Animal Resources (DLAR) at the University of Pittsburgh. Mice were maintained in pathogen free environment and kept in sterile filtered top cages, maintained on 12 h dark/light cycle.
Mice
Male 6–8 weeks old BALB/c IL-4Rα -/- and STAT6 -/- mice were purchased from the Jackson Laboratories. MyD88 -/- [44] and Dectin-1 -/- mice [32] on the BALB/c background were bred at the animal facility at the University of Pittsburgh.
Infection by A. fumigatus
A. fumigatus isolate 13073 (American Type Culture Collection) was grown on Potato Dextrose Agar (PDA) media for 5–7 days at 37°C in a culture flask. Conidia were harvested with 50 ml of sterile PBS containing 0.1% Tween-20. The harvested conidia were then passed through a sterile 40 μm strainer and counted on a hemacytometer. Mice were infected with 10–50×10 6 resting conidia (RC) suspended in 50 μl of sterile PBS, administered intratracheally after the mice were anesthetized with isofluorane.
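As a worked example of the dose preparation, the standard hemacytometer conversion (cells/ml = mean count per 1 mm 2 square × dilution factor × 10 4 ) gives the stock concentration, from which the volume carrying the target dose follows. A minimal Python sketch; the counts and dilution below are hypothetical, not values from this study:

```python
def stock_concentration_per_ml(mean_count_per_square, dilution_factor):
    """Hemacytometer: conidia/ml = mean count per 1 mm^2 square x dilution x 1e4."""
    return mean_count_per_square * dilution_factor * 1e4

def stock_volume_ul(target_conidia, conc_per_ml):
    """Microlitres of stock that carry the target number of conidia."""
    return target_conidia / conc_per_ml * 1e3

# Hypothetical count: 50 conidia/square at a 1:1000 dilution -> 5e8/ml,
# so a 10 x 10^6 RC dose requires 20 ul of stock, brought to 50 ul with PBS.
conc = stock_concentration_per_ml(50, 1000)
vol = stock_volume_ul(10e6, conc)
```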
BAL
Cells were collected by bronchoalveolar lavage (BAL) from naïve and infected mice by high-volume lavage with 1 ml of 1x PBS, repeated 10 times. BAL cells were subjected to CD11c purification using magnetic beads (Miltenyi Biotech) against mouse-specific CD11c as described previously [67] , [68] , [69] . Cytospins of cells were stained with Hema-3 reagents (Fisher Scientific) according to the manufacturer's recommendations.
Clodronate-Liposome mediated depletion of alveolar macrophages
Macrophages were depleted using liposomes containing clodronate. Clodronate was incorporated into liposomes as described previously [49] . Mice were given 100 μl (25 mg/mouse) of PBS-liposomes or clodronate-liposomes intratracheally 48 hours prior to A. fumigatus infection, after the mice were anesthetized with isofluorane.
RT-PCR and Real time quantitative PCR
Total RNA was isolated from whole lung samples or from purified cell populations at various times following A. fumigatus infection. RNA was isolated from TRIzol (Invitrogen Life Technologies) suspended samples using RNeasy Mini kit from Qiagen. The purified RNA was subsequently used for cDNA preparation using a Reverse Transcriptase-PCR kit (Applied Biosystems). The following mouse-specific oligodeoxynucleotides were used for RT-PCR analysis: Arginase1- FP 5′ATG GAA GAG ACC TTC AGC TAC 3′ , RP 5′GCT GTC TTC CCA AGA GTT GGG 3′ ;Chi3l3- FP 5′ GGG CAT ACC TTT ATC CTG AG 3′ ,RP 5′ CCA CTG AAG TCA TCC ATG TC 3′ ; NOS2 – FP 5′CCCTTCCGAAGTTTCTGGCAGC 3′ , RP 5′GCGTGTCAGAGCCTCGTGGCTTTGG 3′ ; Fizz-1 FP 5′ TCC CAG TGA ATA CTG ATG AGA 3′ , RP 5′ CCA CTC TGG ATC TCC CAA GA 3′ , CD206- FP 5′ GCA AAT GGA GCC GTC TGT GC 3′ , RP 5′ CTC GTG GAT CTC CGT GAC AC 3′ , β-actin FP 5′ TGGAATCCTGTGGCATCCATGAAAC 3′ , RP 5′TAAAACGCAGCTCAGTAACAGTCCG 3′ . For semi-quantitative analyses, all reactions involved 30 PCR cycles. After amplification, the samples were separated on 2% molecular biology grade agarose gels containing ethidium bromide and bands were visualized and photographed using UV transillumination. For quantitative (real time) RT-PCR, specific TaqMAN gene expression assays were obtained from Applied Biosystems which included those for Arginase1 (Mm01190441_g1), Chi3l3 (Mm00657889_mH), Fizz1 (Mm00443109_m1), NOS2 ( Mm00440488_m1), Gus-β (Mm00446953_m1), CD206 (Mm01329362_m1) and Real time RT-PCR was performed on cDNA using TaqMAN assay. Reactions were run in a real time PCR system (ABI 7900 HT; Applied Biosystems). The results were analyzed using SDS 2.2.2 software and samples were normalized to Gus-β. Fold induction was calculated over PBS treated or untreated controls unless otherwise indicated.
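Fold induction from TaqMan Ct values is conventionally computed with the comparative-Ct (2 −ΔΔCt ) method, which assumes near-100% amplification efficiency for both target and reference genes. A minimal Python sketch with hypothetical Ct values (Gus-β is the reference gene, as above):

```python
def fold_induction(ct_target_infected, ct_ref_infected,
                   ct_target_control, ct_ref_control):
    """Comparative-Ct (2^-ddCt) fold induction of a target gene in infected
    vs control samples, normalised to a reference gene (here Gus-beta)."""
    dct_infected = ct_target_infected - ct_ref_infected
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_infected - dct_control)

# Hypothetical Ct values, not measurements from this study:
fold = fold_induction(22.0, 20.0, 26.0, 20.5)  # ddCt = -3.5 -> ~11.3-fold
```

Lower Ct means more template, so a negative ΔΔCt corresponds to upregulation over the PBS-treated controls.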
Western blotting
Non-denaturing cell lysis buffer containing 1% Triton (Cell Signaling) was used to prepare total lung extracts. Western blotting techniques were used to analyze equal amounts of protein as described previously [67] . Membranes were probed with monoclonal antibodies against YM1 (Stem Cell Technology) at a 1/1000 dilution. After stripping, the blots were probed with anti-β-actin (Jackson laboratory) to confirm equal protein loading. The intensity of the YM1 signal was quantified relative to that of β-actin using image J software.
Fungal Burden
Fungal burden was calculated by isolating DNA from infected lung tissue using Epicentre Yeast DNA isolation kit. Real time PCR was done with DNA as the template using Aspergillus-specific oligonucleotides and the results were analyzed according to a previously described method [70] . Fungal burden was expressed as Conidia Equivalents/lung (CE/lung). Fungal burden was also measured by plating lung homogenates on PDA plates and colonies were counted. Fungal burden was expressed as colony forming units/lung (CFU/lung).
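Converting a Ct value into conidia equivalents relies on a standard curve built from known conidia numbers, typically fitted as Ct = slope × log10(CE) + intercept. A minimal Python sketch; the curve parameters are hypothetical (a slope of about −3.32 corresponds to 100% PCR efficiency), not the fitted values from this study:

```python
def conidia_equivalents(ct, slope, intercept):
    """Read conidia equivalents (CE) off a qPCR standard curve fitted as
    Ct = slope * log10(CE) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical curve: slope -3.32 (100% efficiency), intercept 38.0.
ce = conidia_equivalents(24.72, -3.32, 38.0)  # ~1e4 conidia equivalents
```

Scaling by the fraction of lung DNA assayed then gives CE/lung.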
Arginase Activity
For assay of arginase activity, total lung or cell extracts were made using 1x cell lysis buffer (Cell Signaling). Arginase activity was measured using the DARG-200 kit (Bioassay Systems). Protein concentration was measured using the BCA kit (BioRad Laboratories) and arginase activity was expressed as U/mg Protein.
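By the usual enzyme-unit convention, one unit (U) of arginase converts 1 μmol of L-arginine to urea per minute; normalising to the BCA protein measurement then yields U/mg protein. A minimal Python sketch with hypothetical numbers (the DARG-200 kit's own calculation sheet may define units slightly differently):

```python
def arginase_units_per_ml(urea_umol, minutes, sample_volume_ml):
    """One unit converts 1 umol of substrate to urea per minute;
    returns U per ml of extract."""
    return urea_umol / minutes / sample_volume_ml

def specific_activity(u_per_ml, protein_mg_per_ml):
    """Normalise enzymatic activity to protein content: U/mg protein."""
    return u_per_ml / protein_mg_per_ml

# Hypothetical: 5 umol urea formed in 10 min by 50 ul of extract,
# with 2 mg/ml protein by BCA.
u_ml = arginase_units_per_ml(5.0, 10.0, 0.05)  # 10 U/ml
act = specific_activity(u_ml, 2.0)             # 5 U/mg protein
```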
Intracellular staining and flow cytometric analysis
Staining for cell surface expression of CD45, CD3, CD19, CD11c, Ly6G, and MHC II was carried out using specific antibodies as described previously [67] . Intracellular staining was done according to the manufacturer's suggestions (Cytofix/Cytoperm, BD Pharmingen). For Arg1 staining, purified mouse anti-Arg-1 antibody (BD Biosciences) was used followed by donkey Alexa fluor 555 (Invitrogen)-conjugated anti-mouse secondary antibody. Mouse IgG (Santa Cruz) was used as isotype control. NOS2 staining was done with purified polyclonal rabbit anti-mouse NOS2 antibody (BD Biosciences) with rabbit IgG as isotype control, followed by Alexa fluor 647 (Invitrogen)-conjugated goat anti-rabbit secondary antibody. Samples were analyzed in a FACS Calibur flow cytometer (BD Immunocytometry Systems) and the data were analyzed using the FlowJo software (Tree Star).
Labeling of conidia and Phagocytosis Assay
Live conidia were labeled with FITC (Sigma) according to previously described methods [71] . For the phagocytosis assay, alveolar macrophages were isolated from BAL and cultured in complete RPMI media. Alveolar macrophages were incubated with FITC-labeled conidia for 4 hours at 37°C. At the end of the incubation period, phagocytosis was stopped by washing the macrophages with cold PBS and fixing the cells with 4% PFA. Cells were collected and percent phagocytosis was analyzed by flow cytometry. To locate FITC-labeled conidia phagocytosed by macrophages, live cell imaging was done using a Nikon A1 confocal on a Nikon Ti-E live cell microscope and the data were analyzed with NIS-Elements imaging software. CellTracker (Red CMTPX, Invitrogen) was used to stain the cell cytoplasm and nuclei were stained with Hoechst.
Statistical analyses
All statistical analyses were carried out using Graph Pad Prism software (Version 4). Student's unpaired two-tailed t-test was used for all statistical analyses. Differences between groups were considered significant when P<0.05. | Results
A. fumigatus infection induces the prototypic marker of Alternatively Activated Macrophages (AAMs) Arginase 1 in the lung
We first compared the innate immune response in the lung to two very different pathogens, the extracellular bacterium Klebsiella pneumoniae , and the fungus Aspergillus fumigatus . Mice were either left uninfected or infected intratracheally with 100 cfu (colony forming units) of K. pneumoniae or 50×10 6 resting conidia (RC) of A. fumigatus . Lungs were harvested after 4 days of infection with K. pneumoniae or 48 hours of infection with A. fumigatus and mRNA expression for various AAM markers was determined by semi-quantitative RT-PCR techniques. The expression of Fizz1/Relm-α, a protein expressed by AAMs, epithelial cells and eosinophils was increased in the lung in both the infection models ( Figure 1A ). Whereas K. pneumoniae infection promoted NOS2 gene expression, infection by A. fumigatus caused increased Arg1 expression in the lung ( Figure 1A ). Since CAMs express NOS2 while AAMs express Arg1, these results suggested that A. fumigatus infection induces AAM-type cells. We next infected mice with different numbers of RC (2.5–50×10 6 per mouse) of A. fumigatus and harvested the lungs at 48 hours post-infection (p.i.). As shown in Figure S1A , the expression of genes such as Arg1, Fizz1 and Ym1 increased with increasing doses of RC while that of NOS2 did not increase much over that detected in control PBS-treated mice. Arg1, Fizz1 and Ym1 are genes associated with AAMs while NOS2 is expressed by CAMs. Further, using the dose of 50×10 6 RC for infection, we harvested lungs at different times after infection to assess expression of AAM-associated genes ( Figure S1B ). Of note, at 48 hours after infection, a low level of Arg1 was noted (data not shown) in the lungs of Klebsiella-infected mice which disappeared after 4 days (as shown). While Arg1 was upregulated, NOS2 expression was not detected at any time point (from 24–120 hours) in the lungs of Aspergillus-infected mice ( Figure S1B ). 
Collectively, the results showed peak expression of AAM-associated genes in the lung at 48 hours p.i.
We next infected mice with 50×10 6 RC to assess expression of AAM-associated molecules at both mRNA and protein levels at time points earlier than 48 hours p.i. to determine whether the expression of Arg1 but not NOS2 was evident from times very early after infection. Whole lung tissue was isolated from infected mice at 6, 12, 24, 48 and 96 hours p.i. and processed for RNA. As early as 6 hours after infection, the expression of Arg1 and Fizz1 was detected in the infected lungs when compared to uninfected controls but no induction of NOS2 was noticed ( Figure 1B ). The steady state levels of both Arg1 and Fizz1 mRNA peaked at 48 hours p.i. The expression of Arg1 and Fizz1 was 13- and 90-fold higher in the infected lungs compared to that in uninfected controls ( Figure 1C ). Further, the expression of Arg1 decreased substantially at 96 hours p.i. while that of Fizz1 was reduced but remained elevated ( Figure 1C ). Comparatively, the expression of NOS2, the signature marker for CAMs, did not appreciably increase in the infected lungs at any of these time points ( Figure 1B,C and Figure S1B ).
In addition to investigating the expression of AAM-associated genes at the mRNA level, we also examined expression of the corresponding proteins. Arg1 enzyme activity and expression of YM1 protein were assessed. Arg1 enzyme activity was high at both 48 and 96 hours p.i. ( Figure 1D ) even though decreased Arg1 mRNA level was noted at the later time point ( Figure 1C ). Increased YM1 protein expression was noted at 48 hours p.i. which decreased at 96 hours after infection ( Figure 1E ). Beyond 96 hours, expression of both proteins declined (data not shown).
Characterization of bronchoalveolar lavage cells after A. fumigatus infection
Our next goal was to characterize the major cell types present in the alveolar space after A. fumigatus infection, one or more of which would potentially express the molecules expressed by alternatively activated macrophages. Mice were infected with 50×10 6 RC and BAL cells were isolated from infected and uninfected controls at various times after infection. Total and differential cell counts showed an increase in the number of polymorphonuclear neutrophils (PMNs) and macrophages, cells of the innate immune response that are important for clearing invading pathogens ( Figure 2A ). For the next series of experiments we used a lower dose of 10×10 6 RC rather than a dose of 50×10 6 RC or higher that is typically used in mortality studies in animals not treated with immunosuppressive agents [35] . Using the lower dose, at the whole lung level, a 3–4-fold increase in Arg1 expression over baseline was noted ( Figure S1A ). In order to characterize BAL cells further, cells were recovered from infected and uninfected controls by high volume BAL, stained for various surface markers after gating on live CD45 + cells (leukocytes), and analyzed for various cell types. The results showed that PMNs formed the majority (68.6%) of the BAL-derived cells after 48 hours of infection, as shown by Ly6G expression. However, the majority of alveolar macrophages (14.6%) remained CD11c + , as was also observed in the naïve mice ( Figure 2B ), even though the total number of CD11c + cells increased significantly after infection. While ∼0.1×10 6 total cells were recovered by BAL from naïve mice, 0.5×10 6 cells were obtained from infected mice.
We also distinguished BAL cells based on high and low autofluorescence, corresponding to macrophages and dendritic cells (DCs) respectively, from uninfected and infected mice at 48 hours p.i. The majority (98.5%) of the cells from uninfected mice were CD11c + autofluorescence high and only 1% of the cells were CD11c + autofluorescence low . In infected mice, ∼85.7% of cells were identified as CD11c + autofluorescence high alveolar macrophages and 11.4% were CD11c + autofluorescence low (DCs). For further assessment, we examined the expression of MHC Class II in these two populations of cells from uninfected and infected mice. Alveolar macrophages were CD11c + autofluorescence high MHC II low and DCs were CD11c + autofluorescence low MHC II high ( Figure 2B ). Based on these results, we used autofluorescence high CD11c + cells corresponding to alveolar macrophages for further experiments.
A. fumigatus infection induces CD11c + autofluorescence high Arg1-expressing alveolar macrophages
Next, we sought to determine whether alveolar macrophages expressed Arg1 but not NOS2 upon infection by A . fumigatus . Mice were infected with 10×10 6 RC and CD11c + cells were recovered by BAL 48 hours p.i. In addition to the increase in the number of CD11c + cells in the alveolar space after fungal infection, the alveolar macrophages in the infected mice also acquired a distinct morphology and were found to be more vacuolated as compared to those in naïve mice ( Figure 3A ). Similar results were obtained in C57BL/6 mice (data not shown). The CD11c – fraction comprised mainly PMNs ( Figure 3A ). To determine the nature of these macrophages, Arg1 expression was examined in purified CD11c + cells. Based on their high autofluorescence and morphology, the cells were uniformly identified as macrophages. As shown in Figure 3B , a robust increase in Arg1 expression was noted in these CD11c + cells suggesting development of AAM-like cells in the infected lungs.
We assessed expression of other AAM-associated markers in the CD11c + cells. Arg1 was the only gene whose expression was upregulated in the CD11c + BAL cells from infected mice as compared to expression in cells isolated from the controls ( Figure 3B ). However, Ym1 and CD206, genes also associated with AAMs [7] , [8] , [10] , were found to be constitutively expressed in alveolar macrophages isolated from uninfected mice ( Figure 3B ). Fizz1/RELM-α was not detected in these cells whether the cells were isolated from infected or uninfected mice. It has been previously shown that the basal levels of expression of Ym1 and Fizz1 differ in macrophages isolated from different tissues presumably due to differential stimulation by the microenvironments they reside in [36] . The lack of Fizz1 expression in the CD11c + cells suggested that the increased expression of this molecule observed in the lungs of infected mice was contributed by tissue resident cells such as epithelial cells and eosinophils [37] . At this early time point after infection, eosinophil infiltration is quite low making it unlikely that these cells contributed much to Fizz1 expression in the infected lungs which makes epithelial cells the likely source of this molecule. The signature marker for CAMs, NOS2, was barely detectable in the CD11c + population ( Figure 3B ). In the lungs too, NOS2 was not detected at any time point after infection ( Figure 1 , panels B and C and Figure S1 , panels A and B). Thus, macrophages expressing Arg1, Ym1 and CD206 were the dominant alveolar CD11c + cells early after A . fumigatus infection.
We also examined the expression of Arg1 and NOS2 by intracellular staining of CD11c + cells purified from infected mice after 48 hours and compared it with expression in cells recovered from uninfected mice. It was clear in these experiments that Aspergillus infection induces robust Arg1 expression but not NOS2. Whereas >50% of the cells expressed Arg1 at 48 hours after infection ( Figure 3C ), only 2.5% of the cells were found to be NOS2 + by intracellular staining ( Figure 3D ). Further, we also assayed arginase activity in CD11c + cells isolated by BAL from infected and uninfected mice. As shown in Figure 3E , significantly higher arginase activity was evident in cells isolated from infected mice, providing further evidence that alveolar macrophages after Aspergillus infection predominantly display an alternatively activated phenotype.
CD11c + Arg1-expressing macrophages isolated after A. fumigatus infection carry fungal load
It was previously shown that the lack of NOS2 expression has no effect on the killing of fungal conidia by alveolar macrophages [38] . Since the majority of the alveolar macrophages expressed Arg1 after fungal infection ( Figure 3 , panels B and C), we were curious whether the CD11c + cells isolated and purified from BAL fluid had the ability to phagocytose fungal conidia. Mice were infected with 10×10 6 RC and CD11c + cells were recovered by BAL 48 hours p.i. Fungal load was measured by quantitative PCR of fungal DNA corresponding to fungal 18S rRNA and expressed as conidia equivalents in CD11c + cells ( Figure 4 ). The data suggested that alveolar macrophages, a large fraction of which expresses Arg1 after fungal infection, can efficiently phagocytose conidia.
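Fungal burden reported as conidia equivalents is derived from a qPCR standard curve relating Ct to the log10 of known conidia inputs. The arithmetic behind that conversion can be sketched as follows; the function names, curve parameters, and Ct values are hypothetical illustrations, since the study's actual standards are not given here.

```python
import math

# Fit a qPCR standard curve, Ct = slope * log10(quantity) + intercept, by
# ordinary least squares, then convert an unknown sample's Ct into conidia
# equivalents. All numeric inputs used with these functions are hypothetical.

def fit_standard_curve(known_quantities, known_cts):
    """Least-squares fit of Ct against log10(known conidia quantity)."""
    xs = [math.log10(q) for q in known_quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(known_cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, known_cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def conidia_equivalents(ct, slope, intercept):
    """Interpolate an unknown Ct back onto the standard curve."""
    return 10 ** ((ct - intercept) / slope)
```

A slope near -3.32 cycles per decade corresponds to roughly 100% amplification efficiency, which is the usual quality check on such a curve.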
A. fumigatus -induced Arg1 expression is partially dependent on IL-4Rα/STAT6 signaling
AAMs or M2 macrophages can be elicited in vitro in the presence of Th2 cytokines such as IL-4 and IL-13 [4] , [39] . The development and maintenance of AAMs in vivo involve IL-4Rα/STAT6 signaling, the common signaling pathway for IL-4 and IL-13 [20] , [24] and the induction of AAM-specific genes was also shown to be dependent on this signaling axis [40] , [41] , [42] . We therefore explored the validity of this pathway in the expression of Arg1, the prototypical marker of AAMs, in our study. Mice deficient in IL-4Rα or STAT6 were infected with 10×10 6 RC along with WT BALB/c controls. CD11c + BAL alveolar macrophages were purified from WT, IL-4Rα -/- and STAT6 -/- mice and Arg1 expression was measured. Arg1 was found to be reduced by ∼50% in the BAL macrophages isolated from IL-4Rα -/- and STAT6 -/- mice as compared to that in cells from WT mice ( Figure 5A ). The fact that Arg1 expression was not completely ablated in cells deficient in signaling downstream of IL-4Rα suggested that factors other than STAT6, triggered by fungal surface molecules contribute to the maximal level of Arg1 expression observed in BAL CD11c + cells isolated from WT mice. The expression of other AAM-associated genes such as Ym1 and CD206 remained unaffected in cells from IL-4Rα -/- and STAT6 -/- mice ( Figure 5B ). These data also suggested that the basal level of Ym1 or CD206 expression in alveolar macrophages is not driven by IL-4Rα/STAT6 signaling. However, similar to the ability of IL-4 and IL-13 to induce the alternatively activated phenotype in macrophages in the context of helminth infections [7] , fungus-induced Arg1 was also found to be at least partially dependent on IL-4Rα/STAT6.
Dectin-1 and MyD88 involvement in Arg1 expression and fungal clearance
Given that Arg1 expression was not completely dependent on the IL-4Rα/STAT6 signaling pathway, we were curious whether pattern recognition receptors on the macrophages contributed to the expression of these molecules in infected mice. In this regard, Dectin-1, the pattern recognition receptor that binds β-glucan expressed on fungal cell walls [29] , [31] , [43] and MyD88, the essential adaptor molecule for signaling downstream of most TLRs [44] , were the key candidates. Dectin-1 has been shown to be important for antifungal defense [32] , [33] , [35] , [45] and MyD88 is also utilized once conidia germinate to hyphae [25] , [29] , [46] . An important role for Dectin-1 in uptake of β-glucan-expressing zymosans in phagosomes of macrophages was demonstrated and Dectin-1 was shown to augment TLR/MyD88-induced pro-inflammatory cytokines in the zymosan-exposed macrophages [47] . We first examined Arg1 expression in BAL CD11c + cells isolated from WT, Dectin-1 -/- and MyD88 -/- mice. As shown in Figure 6A , Aspergillus- induced Arg1 expression in alveolar macrophages was independent of these signaling pathways. The expression of other AAM-associated genes such as Ym1 and CD206 remained unaffected in cells from Dectin-1 -/- and MyD88 -/- mice ( Figure 6A ). We next investigated fungal burden in WT, Dectin-1 -/- and MyD88 -/- mice at 48 hours p.i. As shown in Figure 6B , fungal burden was 2-3-fold more in the lungs of both Dectin-1 -/- and MyD88 -/- mice.
With the observation that Dectin-1 and MyD88 deficiency results in higher fungal burden in the lung when compared to wild type mice, we further investigated phagocytosis of fungal conidia by alveolar macrophages isolated from WT, Dectin-1 -/- and MyD88 -/- mice using FITC-labeled live conidia. We examined the presence of phagocytosed conidia in live cells from WT, Dectin-1 -/- and MyD88 -/- mice by confocal microscopy. While alveolar macrophages isolated from either WT or MyD88 -/- showed presence of FITC-conidia inside the cells ( Figure 6C ), fewer macrophages from Dectin-1-deficient mice showed labeled conidia inside the cells in line with previous observations establishing a role for Dectin-1 in phagocytosis by sensing β-glucan [47] . Although Dectin-1 has been directly associated with phagocytosis [47] , the slightly lower efficiency of the MyD88-deficient cells in phagocytosis as compared to WT cells may have been due to the inability of Dectin-1 to collaborate with the TLR pathway in the MyD88-deficient cells. These data showed that unlike IL-4Rα/STAT6, Dectin-1 and TLR/MyD88 do not regulate Arg1 expression but nonetheless play an important role in fungal clearance given their function in sensing fungus-expressed molecules and phagocytosis and induction of inflammatory responses [47] .
Depletion of macrophages decreases pulmonary clearance of A. fumigatus
Since our investigations showed that Aspergillus infection promotes the development of Arg1-expressing alveolar macrophages and that NOS2 was not detectable at any time point, we asked whether these cells were important for fungal clearance in the infected host. One strategy used to deplete macrophages is clodronate-loaded liposomes, which selectively deplete monocytes and macrophages but not lymphocytes or neutrophils [48] , [49] . Although alveolar dendritic cells (DCs) (but not interstitial DCs) are also at least partially depleted by clodronate-liposomes [50] , this was not of concern to us for two reasons: first, macrophages are significantly more numerous and are the key phagocytic cells in the alveolar space in naïve mice; second, our objective was to determine effects on fungal burden and not adaptive immune responses. Clodronate-loaded liposomes or control PBS-filled liposomes were administered intratracheally 48 hours prior to fungal infection, and mice were subsequently infected with a 5-fold higher dose of RC (50×10 6 ), a high fungal dose that is used in mortality studies [35] . When compared with the PBS-liposome group, mice that received clodronate-liposomes showed reduced numbers of macrophages ( Figure 7A ) at both 48 and 96 hours after fungal infection. Further, we measured the total and differential counts of the BAL cells in these two groups. We observed a marked increase in total cell counts in the clodronate group at both time points due to a compensatory increase in PMNs, which raised the PMN/macrophage ratio ( Figure 7B ). Fungal burden was compared between the PBS-liposome and clodronate-liposome groups, as well as in infected mice without liposome administration (additional control), at 72 and 96 hours p.i.
As shown in Figure 7 , panels C and D, when clodronate–liposomes were administered prior to intratracheal administration of 50×10 6 RC, fungal burden was significantly higher at 72 and 96 hours p.i. showing that alveolar macrophages in fungus-infected mice are important for reducing fungal burden in the lungs. Remarkably, the increased numbers of PMNs were unable to control fungal burden.
Taken together, these results showed that alveolar macrophages, the majority of which assume an alternatively activated phenotype in response to Aspergillus infection, inducing Arg1 with no detectable expression of NOS2, play an important role in pathogen clearance immediately after fungal infection.

Discussion
In immunocompetent healthy individuals, inhaled spores of A. fumigatus are rapidly cleared, and innate immunity is believed to be sufficient for clearing the fungus [3] . However, in immunocompromised patients, inhaled conidia germinate and invade the parenchyma. In patients with cystic fibrosis or severe asthma, impaired fungal clearance induces an allergic disease termed allergic bronchopulmonary aspergillosis (ABPA). Efficient and prompt fungal clearance is therefore of utmost importance to prevent fungus-induced disease. However, the mechanisms underlying fungal clearance are not well understood. Our study for the first time shows an important role of alveolar macrophages in fungal recognition and clearance immediately after fungal infection. Fungal infection rapidly induced Arg1 expression in alveolar macrophages, which was also true for tissue macrophages (not shown). Besides Arg1, the alveolar macrophages in fungus-infected mice were also found to express other AAM-associated molecules such as Ym1 and CD206. Arg1 induction upon fungal infection was partially dependent on the IL-4Rα/STAT6 signaling axis. The β-glucan receptor, Dectin-1, was found to play an important role in the phagocytosis of Aspergillus by alveolar macrophages, and its absence increased fungal burden in the lungs of the infected mice. Lack of MyD88, the adaptor downstream of most TLRs, known to recognize Aspergillus and cooperate with Dectin-1 for the induction of inflammatory responses [47] , also impaired fungal clearance, although neither Dectin-1 nor MyD88 contributed to Arg1 expression in the infected macrophages. Depletion of alveolar macrophages increased fungal burden in the lungs of mice despite an increased influx of neutrophils into the alveolar space. Figure 8 illustrates the key findings of our study.
Macrophages constitute one of the most important cells of innate immunity with versatile functions. Recently, heterogeneity in macrophage phenotype and function has been well recognized similar to that noted for T cells [7] . Macrophages have been subdivided into two broad categories, M1 and M2. M1 macrophages express NOS2 and reactive oxygen and nitrogen intermediates and are IL-12 high and IL-10 low . M2 macrophages express Arg1 along with a host of other molecules such as Ym1 and Fizz1 and are IL-12 low and IL-10 high . Depending on context, variability in expression of these molecules is increasingly being noted and the M2 category has been broadened to include all additional subtypes [8] , [51] . Classical or M1 macrophages are crucial for killing pathogens and tumor cells [7] , [52] . M2 macrophages/AAMs have been associated with both adverse and beneficial effects in interactions of the host with various pathogens. For example, a recent study showed that Arg1, the key enzyme expressed by AAMs, can be detrimental during infections by intracellular pathogens such as T. gondii and M. bovis [11] . In this study, Arg1, which uses the same substrate, L-Arg, as NOS2, was found to help survival of the intracellular pathogens due to a decrease in NO production [11] . However, in the context of worm infections, where AAMs have been studied the most, these cells have been largely associated with beneficial effects in the infected host. For example, recent studies have highlighted an important role for Fizz1 expressed by AAMs in suppressing Th2 responses and downregulating inflammation and fibrosis in mice infected with S. mansoni [22] , [23] and Arg1 was also associated with similar suppressive functions [21] . The protective role of AAMs in schistosomiasis was also shown to involve downregulation of harmful Th1 inflammatory responses and AAM induction was essential for survival [53] . 
Clodronate-mediated removal of macrophages with an AAM phenotype from the intestines of mice infected with N. brasiliensis impaired smooth muscle contractility, the increase in smooth muscle thickness, and worm expulsion [20] , [54] . In the case of infection by the worm Brugia malayi , absence of AAMs resulted in increased neutrophilia and reduced eosinophilia [55] ; in this regard, the AAMs were shown to phagocytose apoptotic neutrophils. Thus, at the present time there exists a significant body of literature on AAM characterization and function in chronic infections, particularly helminth infections, and for the most part this type of host response has been found to be beneficial. However, fewer studies have examined these cells early after infection. In one RSV infection study, AAMs expressing Th2 cytokines were detected around 4 days p.i. [13] . RSV infection of IL-4Rα -/- mice, in which AAM development is impaired, caused worse lung pathology than that observed in WT mice, suggesting a protective role for these cells [13] .
In our study, the experiments performed to address the relevance of alveolar macrophages in Aspergillus infection suggest a protective role for these cells in the context of fungal infection, although we could not address the specific role of the Arg1-expressing AAM-type population due to our inability to selectively deplete them. Arg1 expression in alveolar macrophages was induced rapidly, as early as 6 hours after infection, and at none of the time points tested did we detect appreciable NOS2 expression, unlike in mice infected with K. pneumoniae , which showed brisk NOS2 expression ( Figure 1 ). The decline of Arg1 mRNA levels after 96 hours of infection paralleled fungal clearance. Furthermore, in phagocytosis assays, macrophages isolated from WT mice showed significantly higher phagocytosis as compared to those from Dectin-1-deficient mice ( Figure 6 ). In the in vivo setting, we also detected fungal DNA corresponding to 18S rRNA in CD11c + cells isolated from infected mice ( Figure 4 ). Finally, depletion of alveolar macrophages using clodronate-liposomes increased fungal burden in the lung at multiple time points ( Figure 7 ). Collectively, these results suggest that a population of alveolar macrophages with a predominance of the alternatively activated phenotype is beneficial for rapid clearance of fungi from infected lungs.
The induction of various AAM-associated genes such as Arg1, Ym1 and Fizz1 has, in the majority of studies, been found to require the IL-4Rα/STAT6 signaling axis [12] , [54] , [56] , [57] , [58] . However, exceptions have been noted, such as during infection by T. gondii or M. bovis , where TLR-mediated signaling is required [11] , or during the development of trypanosomiasis, where IL-10-mediated mechanisms were invoked [59] . We show that Arg1 expression is reduced but not eliminated in IL-4Rα -/- or STAT6 -/- mice ( Figure 5 ). It is possible that functional cooperation between IL-4Rα/STAT6 and additional pathways promotes maximal Arg1 expression in alveolar macrophages after infection with A. fumigatus . In studies of infection by Fasciola hepatica and S. mansoni , the secreted antioxidant peroxiredoxin (Prx) was shown to induce Ym1-expressing AAMs, which enhanced the secretion of IL-4, IL-5 and IL-13 from naïve CD4 + T cells [60] . However, any such possibility in our study remains to be determined. Cell surface molecules such as Dectin-1 and TLRs are integral to fungal recognition; however, the role of these molecules in the induction of markers commonly associated with the AAM phenotype, such as Arg1, has not been previously studied. Our data show that, unlike IL-4Rα/STAT6, Dectin-1 and MyD88 do not contribute to Arg1 expression, suggesting a division of labor between different cell surface molecules with respect to pathogen recognition and uptake versus induction of intracellular molecules such as Arg1.
The localization of Dectin-1 to phagosomes and its important role in the phagocytosis of zymosan particles expressing β-glucan, the ligand of Dectin-1, were previously shown [47] . That study also showed the collaboration of Dectin-1, acting via its ITAM (immunoreceptor tyrosine-based activation motif), with TLR2, acting via MyD88 signaling, in macrophages in both phagocytosis and the expression of pro-inflammatory cytokines and reactive oxygen species [47] . As many subsequent studies have shown, macrophages and neutrophils use TLRs and Dectin-1 to sense and respond to fungal pathogens such as Aspergillus, Pneumocystis and Candida [25] , [29] , [31] , [32] , [33] , [35] , [43] , [45] , [46] , [51] . Therefore, we believe that the increased fungal burden observed in the Dectin-1 -/- and MyD88 -/- mice after 48 hours of A. fumigatus infection is due to the lack of this collaboration between the two signaling pathways in the infected macrophages. Given that lack of MyD88 did not significantly impair fungal uptake by the alveolar macrophages but still increased fungal burden, it is likely that in the absence of MyD88, inadequate production of proinflammatory cytokines that are required to kill phagocytosed microbes accounts for the delay in fungal clearance in these mice.
So, how might Arg1 induced by Aspergillus cause increased fungal clearance? In the H. polygyrus infection study, AAMs were associated with impairment of larval parasite health and mobility and worm expulsion that was dependent on Arginase expression [24] . Similarly, Arg1 was found to be crucial for suppression of Th2 responses in mice infected with S. mansoni [21] . How Arg1 might contribute to the suppressive functions of AAMs at early or late time points after infection is currently not understood. Metabolism of L-Arg by Arg1, the major arginase activity in the body [61] , generates L-ornithine and urea. L-ornithine is metabolized by ornithine decarboxylase to the polyamine putrescine, which is further converted to other polyamines. L-ornithine is also metabolized in the mitochondria via successive steps to L-proline, which is essential for the synthesis of many structural proteins, including collagen [62] . Competition of Arg1-expressing macrophages with myofibroblasts for the substrate L-Arg causing less collagen production by the myofibroblasts has been suggested as one possible mechanism for suppression of fibrosis by AAMs induced after infection by S. mansoni [21] . In our study, none of these mechanisms is relevant since we have studied fungal clearance at an early time point before the induction of adaptive immunity. However, one important consideration is competition for L-Arg between the germinating fungal spores and the AAMs. The Aspergillus species, A. nidulans , was shown to utilize L-Arg as a source for nitrogen and carbon employing arginase enzymes [63] , [64] . It is likely that Arg1-expressing macrophages competitively deprive the fungus of L-Arg and compromise spore viability. Interestingly, in human neutrophils, L-Arg depletion by Arg1 localized to phagolysosomes was recognized as a novel mechanism of anti-fungal activity against Candida albicans [34] . 
It is a well-recognized phenomenon that myeloid cells deplete phagosomes of critical nutrients required for survival of phagocytosed microbes [65] . The microbes, in turn, try to compensate by upregulating expression of genes to adapt to the host microenvironment. The fungi C. albicans and Saccharomyces cerevisiae were shown to upregulate expression of genes associated with arginine biosynthesis in human neutrophils [66] . Taken together, the finding of a role for Arg1 constitutively expressed in human neutrophils in defense against C. albicans [34] , the dependence of fungi on L-Arg as an essential nutrient source [63] , [64] , and our collective data on the role of alveolar macrophages with an AAM phenotype in Aspergillus uptake and clearance provide a logical explanation for why the host would attempt to rapidly induce Arg1 in the infected lung macrophages. Since Aspergillus is a ubiquitous pathogen and the host has to fight it continuously, it makes more sense to deplete L-Arg via Arg1 rather than via NOS2, since constant generation of NO through NOS2 activity would be deleterious to lung health. Thus, alveolar macrophages, which are predominantly AAMs following A. fumigatus infection, play an important role in the innate immune response.

Author contributions: Conceived and designed the experiments: PR AR SB. Performed the experiments: SB MF MY ZQ CMSC GAG. Analyzed the data: SB PR AR CMSC GAG. Contributed reagents/materials/analysis tools: SA SS YI. Wrote the paper: SB PR AR. Provided clodronate-filled liposomes: NvR.
The ubiquitous fungus Aspergillus fumigatus is associated with chronic diseases such as invasive pulmonary aspergillosis in immunosuppressed patients and allergic bronchopulmonary aspergillosis (ABPA) in patients with cystic fibrosis or severe asthma. Because of constant exposure to this fungus, it is critical for the host to exercise an immediate and decisive immune response to clear fungal spores to ward off disease. In this study, we observed that rapidly after infection by A. fumigatus , alveolar macrophages predominantly express Arginase 1 (Arg1), a key marker of alternatively activated macrophages (AAMs). The macrophages were also found to express Ym1 and CD206 that are also expressed by AAMs but not NOS2, which is expressed by classically activated macrophages. The expression of Arg1 was reduced in the absence of the known signaling axis, IL-4Rα/STAT6, for AAM development. While both Dectin-1 and TLR expressed on the cell surface have been shown to sense A. fumigatus , fungus-induced Arg1 expression in CD11c + alveolar macrophages was not dependent on either Dectin-1 or the adaptor MyD88 that mediates intracellular signaling by most TLRs. Alveolar macrophages from WT mice efficiently phagocytosed fungal conidia, but those from mice deficient in Dectin-1 showed impaired fungal uptake. Depletion of macrophages with clodronate-filled liposomes increased fungal burden in infected mice. Collectively, our studies suggest that alveolar macrophages, which predominantly acquire an AAM phenotype following A. fumigatus infection, have a protective role in defense against this fungus. | Supporting Information | The authors thank C. Steele for providing the A. fumigatus spores and advice on fungal culture. The authors also thank Adam Henry for his expert assistance with cell imaging. | CC BY | no | 2022-01-13 08:14:23 | PLoS One. 2011 Jan 5; 6(1):e15943 | oa_package/da/1b/PMC3016416.tar.gz |
|
PMC3016417 | 21122150 | Background
Gene expression profiling is accelerating our progress toward a comprehensive understanding of the genetic mechanisms that control responses to environmental stress. Microarray analysis was developed to obtain overall gene expression profiles in various plants. Microarray profiling and the recently introduced tag-based sequencing approaches are proven technologies for estimating gene expression. However, array-based technologies have critical limitations [ 1 , 2 ]. As most microarray probes are designed on the basis of gene annotation, arrays are limited to the analysis of transcripts from previously annotated genes of a sequenced accession of a species. Probes are designed to cover only a very small portion of a gene and so do not represent the whole structure of the gene. Moreover, computationally annotated genes have not fully been validated, because ESTs and full-length cDNAs (FL-cDNAs) cannot cover entire transcribed regions. It is therefore important to identify whole transcripts (including unannotated transcripts) for complete gene expression profiling. There is a need for the development of technologies beyond arrays.
Sequencing-based approaches could overcome the limitations of array-based technologies. Following the rapid progress of massive parallel sequencing technology, whole mRNA sequencing has been used for gene expression profiling [ 3 - 8 ]. This sequencing involves mapping of the reads on known annotated gene models but cannot be used to identify novel genes. Recently, a series of programs have been developed for building gene models directly from the piling up of short reads: Bowtie efficiently maps short reads on genomic sequences [ 9 ]; TopHat concatenates adjacent exons and identifies reads that bridge exon junctions [ 10 ]; and Cufflinks [ 11 ] constructs gene models from the exons and bridging sequences predicted by Bowtie and TopHat and then calculates their abundances of these sequences. The use of this series of programs has the potential to discover new transcripts from mRNA-Seq (an Illumina cDNA sequencing application) but has only just begun [ 7 , 12 ].
In this study, we identified unannotated transcripts in rice on the basis of the piling up of mapped reads. As a model case, we give examples of salinity stress-inducible unannotated transcripts encoding putative functional proteins. For these purposes, we performed whole mRNA sequencing by using massive parallel sequencing technology. We also took advantage of various high-quality genomic resources in rice, including the genomic sequence (International Rice Genome Sequencing Project [IRGSP] build 4.0), FL-cDNA sequences [ 13 ], the Rice Annotation Project database (RAP-DB: http://rapdb.dna.affrc.go.jp/ ) [ 14 , 15 ], and a rice 44K microarray (Agilent Technologies, Palo Alto, CA, USA), in our analysis of rice transcriptomes. First, to estimate the scale of the transcriptomes in rice, we mapped 36-base-pair (bp) reads from the mRNA of salinity stress-treated rice tissues on the rice genome. The coverage of previously annotated regions or of the rice genome was then calculated. Second, we attempted to identify salinity stress-inducible genes as a model system for gene expression profiling by mRNA-Seq. The number of mapped reads was counted and marked on the rice genome. Third, using the mRNA-Seq data, we used Bowtie, TopHat, and Cufflinks to construct gene models based on the piling up of short reads on the rice genome, and compared these with previous annotations and then characterized the unannotated transcripts. We conducted a BLASTX search for the unannotated transcripts, and we discuss the function of the encoded proteins. Fourth, to validate our sequence-based technology, we compared the results of quantification by the array-based and sequence-based approaches, and we discuss the advantages of the latter. This work contributes to the discovery of whole salinity stress-inducible transcripts without the need to rely on previous annotations. It should help to establish further sequence-based gene expression profiling in any organism. | Methods
Plant material and salt stress treatment
Seeds of rice ( Oryza sativa L. 'Nipponbare') were germinated in the dark at 28°C on a sterilized germination tray. Germinated seeds were evenly distributed on 96-well PCR plates supported by a plastic container. Seeds were grown in a growth chamber at 28°C, as previously described [ 44 ]. After the seedlings had been grown for 7 days, they were transferred on their 96-well plates into containers filled with 150 mM NaCl solution, or with control solution, and placed at 28°C in a growth chamber for 1 h. Four kinds of tissue (normal shoot, normal root, shoot with 1-h salinity stress, or root with 1-h salinity stress) were collected and immediately frozen in liquid nitrogen. For RNA extraction from each treatment group, 10 plants were collected and mixed, to minimize the effect of transcriptome unevenness among plants.
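For reference, the 150 mM NaCl treatment reduces to a simple mass-per-volume calculation. The sketch below assumes the standard molar mass of NaCl (58.44 g/mol); the paper states only the final concentration, not how the solution was prepared.

```python
# Grams of NaCl needed per litre of solution for a given millimolar
# concentration. The molar mass of 58.44 g/mol is the standard value for
# NaCl; it is an assumption here, not a figure taken from the study.
def nacl_grams_per_litre(millimolar, molar_mass_g_per_mol=58.44):
    return millimolar / 1000.0 * molar_mass_g_per_mol
```

So the 150 mM stress solution corresponds to about 8.77 g of NaCl per litre.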
mRNA sequencing
Total RNA was extracted by using an RNeasy Plant kit (Qiagen, Hilden, Germany). RNA quality was calculated with a Bioanalyzer 2100 algorithm (Agilent Technologies); high-quality (RNA Integrity Number > 8) RNA was used. Total RNA samples (10 μg) were subjected to cDNA construction for Illumina sequencing, in accordance with the protocol for the mRNA-Seq sample preparation kit (Illumina). Oligo(dT) magnetic beads were used to isolate poly(A) RNA from the total RNA samples. The mRNA was fragmented by heating at 94°C for 5 min. First-strand cDNA was synthesized using random hexamer primers for 10 min at 25°C, 50 min at 42°C, and 15 min at 70°C. After the first strand had been synthesized, dNTPs, RNaseH, and DNA polymerase I were added to synthesize second-strand DNA for 2.5 h at 16°C. The ends of double-stranded cDNA were repaired by using T4 DNA polymerase and Klenow DNA polymerase and phosphorylated by using T4 polynucleotide kinase. A single "A" base was added to the cDNA molecules by using Klenow exo-nuclease, and the fragments were ligated to the PE adapters. cDNAs with 200 ± 25-bp fragments were collected. The purified cDNA was amplified by 15 cycles of PCR for 10 s at 98°C, 30 s at 65°C, and 30 s at 72°C using PE1.0 and PE2.0 primers.
Mapping of short reads, detection of bridging sequences, and prediction of transcripts
For each sample, cDNA was sequenced (single read) by an Illumina Genome Analyzer II. Data on nine technical replicates (nine sequencing lanes of a cDNA sample from root after salinity stress) were accumulated for Figure 1 . Data on four technical replicates (four sequencing lanes of each cDNA sample, corresponding to about 27 to 35 million 36-bp reads) were summed for Table 1 . In our preliminary experiment, two independent sequencing runs using the same cDNA were highly correlated ( r = 0.99). The default Illumina pipeline quality filter, which uses a threshold of CHASTITY ≥ 0.6, was used to identify clusters with low signal-to-noise ratios. CHASTITY is defined as "the ratio of the highest of the four (base-type) intensities to the sum of the highest two." Passed filter reads were mapped onto both the Nipponbare reference genome (IRGSP build 4.0) and the spliced exon junction (SEJ) sequences by SOAP ver. 1.11 [ 45 ], allowing up to 2 bp of mismatch or up to 3 bp of indels. SEJ sequences were constructed by concatenating the 40 bases at the 3' end of the upstream exon to the 40 bases at the 5' end of the downstream exon for all RAP2 transcripts [ 14 , 15 ] at a locus. To calculate the cumulative coverage of the genome or annotated regions, reads were mapped by BWA (Burrows-Wheeler Aligner) [ 46 ] with the default option. To predict transcripts, a series of programs--Bowtie [ 9 ], TopHat [ 10 ], and Cufflinks [ 11 ]--was used. Briefly, mRNA-Seq reads were mapped against the whole reference genome (IRGSP build 4.0) by using Bowtie software. An initial consensus of exon sequences was extracted from the mapped reads. The reads that did not align to the genome but that were mapped to these potential junctions by TopHat were considered to bridge splice junctions. Cufflinks constructs gene models (RPKM ≥ 2, length ≥ 100 bp) on the basis of the exons and bridging sequences predicted by Bowtie and TopHat. 
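The spliced exon junction (SEJ) construction described above, joining the last 40 bases of an upstream exon to the first 40 bases of the downstream exon for each adjacent exon pair, can be sketched as follows. The toy exon sequences are illustrative, not RAP2 annotation data.

```python
# Build spliced-exon-junction (SEJ) reference sequences: for each adjacent
# exon pair of a transcript, concatenate the 40 bases at the 3' end of the
# upstream exon to the 40 bases at the 5' end of the downstream exon.

FLANK = 40  # bases kept on each side of the junction

def junction_sequences(exon_seqs, flank=FLANK):
    """Return one 2*flank-base junction sequence per adjacent exon pair."""
    return [up[-flank:] + down[:flank]
            for up, down in zip(exon_seqs, exon_seqs[1:])]

# A toy three-exon transcript yields two 80-base junction sequences.
toy_exons = ["A" * 100, "C" * 100, "G" * 100]
sejs = junction_sequences(toy_exons)
```

Reads that fail to map to the genome but map to such junction sequences are the ones counted as bridging splice junctions.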
ORFs were predicted by BLASTX search against UniProt (Swiss-Prot) and RefSeq (reviewed and validated) or by longest-ORF search (≥20 amino acids).
Microarray analysis
The same RNA material was used for the Illumina sequencing, the microarray experiments, and the qRT-PCR analysis. The rice 44K oligo microarray (Agilent Technologies) contained approximately 44,000 60-mer oligonucleotides synthesized on the basis of RAP annotation. For each microarray experiment, 400 ng of total RNA was used for Cy3- or Cy5-labeled complementary RNA (cRNA) synthesis. DNA microarrays were hybridized for 16 h with 825 ng of Cy3- and Cy5-labeled probes from salinity-stressed or unstressed plants. The microarray experiment was repeated with color-swapping of Cy3 and Cy5. Agilent Feature Extraction Software (ver. 8.5.1.1) was used to quantify microarray images. GeneSpring (ver. 10) software (Agilent Technologies) was used for background subtraction, LOWESS normalization, and extraction of normalized raw signal intensities for all probe sets from each array. Normalized raw signal intensities were compared with the corresponding RPKM. Signals were excluded from further analysis if they were not positive, not significant, or not above background levels. The hybridization experiments and array scanning were performed at an open laboratory run by the DNA Bank of the National Institute of Agrobiological Sciences ( http://www.dna.affrc.go.jp/ ).
Quantitative RT-PCR (qRT-PCR)
qRT-PCR primers were designed on the basis of the annotation of the RAP-DB (Additional file 5 : Table S4). One microgram of total RNA was reverse-transcribed in a 20-μL reaction with the Transcriptor First Strand cDNA Synthesis Kit (Roche, Basel, Switzerland). qRT-PCR was performed in a 20-μL reaction mixture containing 2× SYBR Master Mix (Roche) and 1 μL of cDNA template (1:10 diluted). qRT-PCR of three technical replicates for each sample was performed using a LightCycler480 System with its relative quantification software (ver. 1.2) based on the delta-delta-Ct method (Roche). Cycling conditions were 10 s at 95°C, 5 s at 55°C, and 10 s at 72°C. The detection threshold cycle for each reaction was normalized against the expression level of the ubiquitin gene.

Results
Mapping of 36-bp reads to the rice genome
We performed rice transcriptome analysis at single-nucleotide resolution by using Illumina mRNA-Seq technology. Briefly, poly(A) RNAs from salinity stress-treated rice tissues were reverse-transcribed and sequenced (Table 1 ). Millions of 36-bp reads were mapped to the rice genomic sequence (IRGSP Build 4.0), with at most two mismatches or 3 bp of indels allowed. To capture as many kinds of transcripts as possible, data on nine technical replicates of the sequencing run of cDNA from the roots after salinity stress were accumulated. As the number of reads increased, the cumulative coverage of both the genome and the annotated transcribed region gradually approached a plateau (Figure 1a ). Saturation of sequencing was also estimated on the basis of the fraction of genes that had reached their final RPKM (reads per kilobase of exon model per million mapped reads) [ 16 ]. As the number of reads increased, the fraction of highly expressed genes (RPKM ≥ 300) close to their final RPKM was almost unchanged, whereas that of genes with relatively low expression (RPKM 3-30) converged more slowly (Figure 1b ). With four technical replicates (corresponding to about 27 to 35 million reads), 81.2% of genes with relatively low expression levels (RPKM 3-30) reached within ±5% of their final RPKM (Figure 1b ). Thus, for further analysis, we summed four technical replicates after filtering reads by base quality.
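The RPKM measure of [ 16 ] normalizes a gene's read count by both its exon-model length and the sequencing depth. A minimal sketch of the calculation (our own illustration):

```python
def rpkm(gene_reads, exon_length_bp, total_mapped_reads):
    """Reads per kilobase of exon model per million mapped reads:
    RPKM = gene_reads * 1e9 / (exon_length_bp * total_mapped_reads)."""
    return gene_reads * 1e9 / (exon_length_bp * total_mapped_reads)
```

For example, 1,000 reads on a 2,000-bp exon model in a library of 30 million mapped reads gives RPKM ≈ 16.7, and doubling the read count doubles the RPKM, reflecting the linearity noted in [ 16 ].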
Our rice transcriptome analysis focused on the response to salinity stress. mRNAs were prepared from the tissues of normal rice shoots and roots and from those subjected to 1 h of salinity stress. Of the 27 to 35 million quality-evaluated reads (Table 1 ; Total filtered reads), 72.0% to 75.2% were mapped uniquely to the rice genome (Table 1 ; Unique-genome); 5.0% to 5.7% of the reads bridged flanking exons (Table 1 ; Unique-bridged); 6.0% to 11.2% of the reads were repetitive sequences (Table 1 ; Multiple); and 10.1% to 16.7% had no match in the genome (Table 1 ; Unmapped). Thus, a total of 76.9% to 80.9% of the reads were mapped uniquely to the rice genome or to exon-exon junctions (Table 1 ; Unique-total).
Of the unmapped reads, 26.1% had high levels of identity to sequences derived from sequencing adaptors (11.0%), contaminating organisms (8.2%), or ribosomal RNA (6.9%) (Additional file 1 . Table S1). A few transcripts might have been transcribed from unsequenced genomic regions of rice [ 17 ]. However, most of the unmapped reads (71.5%) had no similarity to each other (data not shown). Our preliminary experiment showed that the ratio of these unmapped reads was higher with mRNA-Seq (10.1%-16.7%; Table 1 ; Unmapped) than with genomic sequencing (2.0%-3.1%; data not shown). Thus, part of the random sequences might have come from residual random primers used in cDNA synthesis. The common random sequences might have come from sequencing errors in the use of the Illumina sequencing technology.
Identification of differentially expressed genes by mRNA-Seq
mRNA-Seq quantifies transcript abundance on the basis of the number of sequence reads mapped to each gene. We adopted RPKM [ 16 ] for transcript quantification and calculated the RPKM of each gene (Additional file 2 : Table S2). RPKM values ranged from 0 to over 10^4. In shoots under normal conditions, the gene encoding ribulose bisphosphate carboxylase activase (AK104332) was expressed at extremely high levels (rpkm_0 hr_shoot = 10612.237). In roots under normal conditions, the gene for metallothionein (AK105219) was expressed at extremely high levels (rpkm_0 hr_root = 23661.149). The mean and median RPKM were 19.78 and 3.399, respectively, in the shoot, and 18.705 and 4.241 in the root under normal conditions.
We then comprehensively compared the RPKM of each gene in response to salinity stress ( r = 0.95 in shoot and 0.94 in root; Figure 2 ). We used the G-test with a 1% false discovery rate (FDR) and identified 6,469 (in shoot) and 10,321 (in root) differentially expressed RAP2 genes. Of these, 3,050 (up, 1,651; down, 1,399) genes were commonly differentially expressed. The number of highly differentially expressed genes (> 32×), such as those encoding bHLH-containing protein (AB040744) and amino acid transporter (J075191I06), was greater in the root (58 genes) than in the shoot (5 genes). Expression of genes previously identified under salinity stress [ 18 ]--i.e. OsTPP1 (AK103391), LIP9 (AY587109), OsABA2 (AK062655), OsMST3 (AK069202), WSI76 (AK107065), and MYBS3 (AK107134)--was induced in the root (> 2×). For a complete comparison see Additional file 2 : Table S2.
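A G-test of the kind used here compares a gene's read count in the two libraries against the totals of mapped reads. The following is a minimal sketch of the per-gene statistic only (our own code, not the published analysis; the 1% FDR correction across genes is omitted). It uses the fact that for one degree of freedom the chi-square p-value equals erfc(sqrt(G/2)):

```python
import math

def g_test_two_libraries(count1, count2, total1, total2):
    """Likelihood-ratio G statistic for one gene's read counts in two
    libraries, under the null that the gene takes the same fraction of
    reads in both. Returns (G, p) with p from chi-square, 1 df."""
    observed = [count1, count2]
    pooled = (count1 + count2) / (total1 + total2)
    expected = [pooled * total1, pooled * total2]
    # Zero counts contribute nothing (the 0*log(0) limit is 0).
    g = 2.0 * sum(o * math.log(o / e)
                  for o, e in zip(observed, expected) if o > 0)
    p = math.erfc(math.sqrt(g / 2.0))
    return g, p
```

With equal counts in equally sized libraries the statistic is 0 (p = 1); a tenfold difference (100 vs. 10 reads) is highly significant.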
The distribution of mapped reads on the rice genome was displayed in GBrowse [ 19 ] (Figure 3 ). For example, the OsTPP1 (for trehalose-6-phosphate phosphatase: TPP) gene (AK103391), which encodes a protein that synthesizes the abiotic stress-protectant trehalose [ 20 , 21 ], was expressed exclusively in the root after 1 h of salinity stress; RCc3 (AK109149), which was previously identified as a root-specific gene [ 22 ], was expressed only in the root with and without stress; AK058218 (similar to ZmGR1a in Zea mays ) was expressed exclusively in the shoot; most of the neighboring genes were expressed evenly in all tissues used (Figure 3 ).
Constructing gene models by mRNA-Seq
Transcribed regions were identified on the basis of the piling up of mapped short reads through the programs Bowtie [ 9 ], TopHat [ 10 ], and Cufflinks [ 11 ]. In the shoot, 51,301 transcripts were predicted (RPKM ≥ 2, length ≥ 100 bp) (Table 2 ); 94.6% (48,506/51,301) of the predicted transcripts were mapped on previously annotated loci in RAP2 [ 14 , 15 ]; thus, the remaining 2,795 predicted transcripts were unannotated in RAP-DB (Table 2 ). In the root, 3,082 of the 54,491 predicted transcripts were mapped on unannotated regions (Table 2 ). For example, the previously annotated gene AK243146, which is similar to DREB1B in Arabidopsis thaliana (GI: 3738226), was expressed after salinity stress and also predicted by Cufflinks (Root_CUFF. 214677.0); other exons were also predicted and connected by bridging sequences elucidated by TopHat (Root_CUFF. 214638.0) (Figure 4a ). Reads were also mapped on the extended parts of the ends of most 5' and 3' exons in previous gene models (Figure 4b, c ). Of the transcripts mapped on previously annotated loci, 1,738 (shoot) and 2,297 (root) had not been supported by ESTs [ 23 ] or FL-cDNAs [ 13 ].
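Assigning each predicted transcript to an annotated locus or to an unannotated region amounts to an interval-overlap test against the annotation. A minimal sketch (our own code; the actual matching criteria used for RAP2 loci may be stricter than simple overlap):

```python
def classify_predictions(predicted, annotated):
    """Split predicted transcripts into those overlapping an annotated
    locus and those on unannotated regions. Intervals are
    (chrom, start, end) tuples with inclusive coordinates; any overlap
    on the same chromosome counts as a hit."""
    known, novel = [], []
    for p in predicted:
        hit = any(a[0] == p[0] and p[1] <= a[2] and a[1] <= p[2]
                  for a in annotated)
        (known if hit else novel).append(p)
    return known, novel
```

A linear scan like this is adequate for illustration; for tens of thousands of loci an interval tree or sorted sweep would be used instead.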
We attempted to predict the functions of unannotated transcripts by BLASTX search and longest-ORF search. In a BLASTX search against the UniProt and RefSeq sequences, of the predicted transcripts, 995 (shoot) and 1,052 (root) had ORFs similar to those encoding the amino acid sequences of functional proteins (Table 2 ). Of the remaining unannotated transcripts, 1,670 (shoot) and 1,873 (root) had ORFs encoding at least 20 amino acids by longest-ORF search (Table 2 ). Amino acid length was widely distributed: the mean and median were 125 and 77 amino acids in the shoot, and 123 and 74 in the root (Figure 5 ). We used the G-test with a 1% FDR and identified 213 (up, 86; down, 127; in shoot) and 436 (up, 146; down, 290; in root) differentially expressed Cufflinks transcripts. Even though the lengths of Cufflinks transcripts were not completely identical between shoot and root, at least 55 differentially expressed transcripts were common to the two tissues. In response to salinity stress, 5 (shoot) and 13 (root) unannotated transcripts were upregulated (≥2×) (Table 3 ). These unannotated transcripts encoded, for example, proteins similar to indole-3-glycerol phosphate lyase and gibberellin 2-beta-dioxygenase (Table 3 ). Of the other differentially expressed genes (< 2×), Root_CUFF.256193.0 was upregulated (1.9×); it encoded proteins similar to MSL2 (MscS-LIKE2) (Additional file 3 : Table S3). For a complete list of unannotated transcripts see Additional file 3 : Table S3.
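The longest-ORF search can be illustrated as follows. This is a minimal sketch of our own: it scans the three frames of each strand for ATG-initiated, stop-terminated ORFs and reports the longest peptide length; the program actually used may differ in detail (e.g., in its handling of ORFs that run off the end of a transcript, which this sketch ignores).

```python
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    """Reverse complement of an uppercase ACGT sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def longest_orf_aa(seq, min_aa=20):
    """Length in amino acids (including Met, excluding the stop) of the
    longest ATG-initiated ORF on either strand; 0 if none reaches min_aa."""
    best = 0
    for s in (seq, revcomp(seq)):
        for frame in range(3):
            codons = [s[i:i + 3] for i in range(frame, len(s) - 2, 3)]
            start = None
            for j, codon in enumerate(codons):
                if start is None:
                    if codon == "ATG":
                        start = j
                elif codon in STOPS:
                    best = max(best, j - start)
                    start = None  # ORFs without a stop codon are ignored
    return best if best >= min_aa else 0
```

For example, "ATG" followed by 25 sense codons and "TAA" yields a 26-residue ORF, while a 6-residue ORF falls below the 20-amino-acid threshold used above.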
Comparison of sequence-based and array-based technologies for gene expression profiling
Our sequence-based gene expression profiling was validated against array-based technology. First, signal intensity and RPKM from the same RNA materials were compared. These two independent measures of transcript abundance were correlated ( r = 0.75-0.77), especially at moderately high signal intensities (Figure 6 ). However, the correlation was not as strong at extremely high signal intensities (log2 intensity > 15; 2^15 = 32,768), suggesting that the array signal intensity was saturated but the RPKM was not (Figure 6 , root). Next, the expression ratios of differentially expressed genes were compared. The ratios obtained from the array and the corresponding ratios obtained from RPKM were highly correlated over a broad range ( r = 0.72 in shoot and 0.80 in root; Figure 7 ). The histogram peaked at a log2 ratio of 0 (i.e., a ratio of 1), suggesting that most genes were expressed evenly both before and 1 h after salinity stress (Figure 7 ). However, a few discrepancies were found: increased changes in the expression of 17 genes were found by using the array (> 4×), but not by using mRNA-Seq (< 2×); conversely, increased changes in the expression of 7 genes were found by using mRNA-Seq (> 4×), but not by using the array (< 2×) (Additional file 4 : Figure S1). To further examine these discrepancies, we used quantitative real-time polymerase chain reaction (qRT-PCR). The qRT-PCR results suggested that most of the former discrepancies were due to technical inaccuracy in the array experiments. However, qRT-PCR supported only three of the seven mRNA-Seq data points in the latter discrepancy (Additional file 4 : Figure S1). Despite these discrepancies, our sequence-based approach was generally valid as a gene expression profiling technology for use with previously annotated genes.

Discussion
Estimation of variation and abundance of whole transcripts in rice
How many reads are required to cover whole transcripts in the rice cell? As the number of reads increased, the cumulative coverage approached a plateau (Figure 1 ). We summed four technical replicates (Table 1 ). RPKM is widely used to calculate the abundance of each transcript and is linear across a dynamic range [ 16 ]. The distribution of RPKM of rice genes ranged from 0 to over 10^4 (Figure 2 ); genes involved in photosynthesis in the shoot or in regulation of physiological metals in the root were highly expressed, whereas about 30% of genes had RPKM < 1 (Additional file 2 : Table S2). The saturation of sequencing in rice (Figure 1b ) was almost the same as in a previous mammalian analysis [ 16 ]. According to that analysis, "one transcript in a cell corresponds to 1 to 3 RPKM" [ 16 ], so genes having RPKM < 1 might rarely be expressed. However, data on the RNA content of each rice cell are required to calculate the number of existing molecules of RNAs. As rice tissue contains cells of various sizes and types, the relationship between the number of existing molecules and their RPKM has not yet been accurately determined. When we used four technical replicates, about 20% of genes expressed at relatively low levels (RPKM 3-30) did not reach their final RPKM (Figure 1b ), suggesting that this sequencing depth was insufficient for calculating the real RPKM of genes expressed at low levels.
Summing of the four technical replicates covered 70.1% of all annotated regions, corresponding to 15.8% of the 389-Mb [ 24 ] rice genome (Figure 1a ). This result suggests that these regions were transcriptionally active under the experimental conditions. Even though the cumulative coverage was close to a plateau, the coverage rose gradually; the accumulation of about 95 million reads covered 77.0% of annotated regions (Figure 1a ), suggesting that some transcripts expressed at low levels were not sequenced. However, the gradual increase in coverage might have been due to the presence of contaminating genomic DNA or a very small amount of partly processed nuclear RNAs, because intron retention is the most prevalent alternative splicing form in rice [ 25 ], as it is in Arabidopsis thaliana [ 26 ]. Thus, we consider that the summing of four technical replicates of 36-bp reads, corresponding to a total of 1 Gbp of filtered sequences, covered almost all the transcripts in the rice cell under the experimental conditions, although more reads are required to obtain the final RPKM of genes expressed at relatively low levels.
Identification of unannotated transcripts by mRNA sequencing
mRNA-Seq provides information on all transcribed genes without the need to rely on annotation (Figure 3 ), whereas array technology is limited to previously annotated genes and previously identified ESTs with no known homologies that have corresponding probes on the array. On the basis of the piling up of mapped reads, we predicted 2,795 (shoot) and 3,082 (root) transcripts currently unannotated in RAP-DB (Table 2 ; Figure 4a ). Of the RAP2 unannotated transcripts, 54.6% (1,525/2,795) in shoot and 53.8% (1,659/3,082) in root had also not been annotated by Michigan State University (MSU) (data not shown), suggesting that these are novel transcripts.
Unannotated transcripts included extended parts of previously annotated genes (Figure 4b, c ). Extension of 5' exons might contribute to the making of a different start codon or the shifting of the reading frame of previously annotated genes. Extension of 3' UTRs might contribute to microRNA-mediated control of translation or post-transcriptional RNA metabolism [ 27 , 28 ]. For example, mRNA-Seq provided evidence of the existence of extended parts of previously annotated genes and of the differential regulation of their expression. AK240862, previously annotated as a non-protein-coding transcript, had additional predicted exons distal to the 5' end of the previous gene model, and it encoded an indole-3-glycerol phosphate lyase (Additional file 4 : Figure S2). Two neighboring genes (AK072595, AK288107) were also similar to the indole-3-glycerol phosphate lyase gene, suggesting that all three genes were tandemly duplicated. Although all three genes were upregulated in response to salinity stress, their tissue specificities and expression levels differed (Additional file 4 : Figure S2), suggesting that their functions diversified after gene duplication.
mRNA-Seq also provided evidence of the expression of computationally predicted genes. The existence of a number of genes computationally predicted in RAP-DB [ 15 ] has not been supported by ESTs [ 23 ] or FL-cDNAs [ 13 ]. Here, 1,738 (shoot) and 2,297 (root) transcripts identified by mRNA-Seq were mapped on computationally predicted genes whose presence had not been supported by experiments, suggesting the validity of the computationally predicted gene models in RAP-DB. We will use these sequence-based transcriptome analyses to improve RAP-DB.
mRNA-Seq provided details of the bridging sequences between exons, suggesting the presence of splicing junctions, whereas array technology--including whole-genome tiling arrays [ 29 ]--provides no information on connecting exons. Because reads that bridge exon boundaries are not mapped directly to the genomic sequence, a mapping technique was required. As a first step, the enumeration of all theoretical splicing junctions within annotated transcripts allows the mapping of bridging reads [ 12 , 16 , 30 ] by using statistical models [ 31 ]. We found that 5.0% to 5.7% of reads formed primary bridges with previously annotated exons (Table 1 , Unique-bridged); this was not sufficient to discover sequences bridging unannotated transcripts. Programs such as TopHat [ 10 ] and G-Mo. R-Se (Gene Modeling using RNA-Seq) [ 32 ] are designed to align reads to form potential splice junctions without relying on known splice sites. In this study, sequences flanking potential donor/acceptor splice sites were joined to form canonical (GT-AG) introns between neighboring (but not necessarily adjacent) islands by using TopHat [ 10 ]. Even though we used TopHat for our prediction, some of the predicted transcripts remained fragmented--unlike the corresponding FL-cDNA sequences--because of the lack of sufficient bridging sequences between the exons (Additional file 4 : Figure S3), suggesting that more bridging reads should be sequenced to connect predicted exons. Increasing the length of each read may also improve the chance of connecting predicted exons.
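The canonical-intron criterion itself is a simple check on the donor and acceptor dinucleotides of a candidate intron. A sketch of that check (our own illustration, not TopHat code):

```python
def is_canonical_intron(genome, start, end):
    """True if the candidate intron genome[start:end] (0-based start,
    end-exclusive) begins with the donor dinucleotide GT and ends with
    the acceptor dinucleotide AG."""
    intron = genome[start:end]
    return len(intron) >= 4 and intron.startswith("GT") and intron.endswith("AG")
```

In practice a junction finder enumerates candidate (start, end) pairs between read islands and keeps those satisfying this GT-AG rule (plus coverage and anchor-length requirements).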
Sequence-based transcriptome analysis for capturing salinity stress-inducible genes in rice
mRNA-Seq comprehensively identified salinity stress-inducible genes. Unannotated transcripts had ORFs (Table 2 ) with a mean length of 123 amino acids (root) or 125 amino acids (shoot) (Figure 5 ), suggesting that these unannotated transcripts could encode functional proteins. Of the unannotated transcripts, 213 (shoot) and 436 (root) were differentially expressed in response to salinity stress (Table 3 , Additional file 3 : Table S3). These unannotated transcripts encoded proteins associated with functions such as amino acid metabolism (indole-3-glycerol phosphate lyase) in response to abiotic stress [ 33 ], diterpenoid biosynthesis (gibberellin 2-beta-dioxygenase), and mechanosensitive ion channel (MSL2) function [ 34 ]. Mechanosensitive ion channels are gated directly by physical stimuli such as osmotic shock and transduce these stimuli into electrical signals [ 35 ]. mRNA-Seq also captured previously identified genes involved in salinity tolerance, namely those associated with trehalose synthesis ( OsTPP1 ) (Figure 3 ), dehydrin ( LIP9 ), ABA synthesis ( OsABA2 ), sugar transport ( OsMST3 ), glycerol transferase ( WSI76 ), and transcription factors similar to those of the DREB family (Additional file 2 : Table S2). A substantial number of transcripts were exclusively upregulated only in the root (Figure 2 ). As only the root was directly exposed to 1 h of salinity stress, it might take time to induce the expression of more genes in the shoot; OsTPP1 (Figure 3 ) might be expressed in the shoot after 10 h of exposure, as has been found in Yukihikari rice [ 36 ]. With these genes, Nipponbare may have the potential to be tolerant to salinity stress.
Rice cultivars such as Nona Bokra and Pokkali are substantially more salinity tolerant than Nipponbare [ 37 ], suggesting that the genuine salinity stress tolerance gene might be missing in Nipponbare. The 23 Oryza species are geographically, physiologically, and genetically diverse [ 38 ], and many of the genes in cultivated rices have been selected by humans under field conditions, not by environmental stress. These essentially missing genes could serve as potential genetic resources for the improvement of cultivated crops. Sequence-based technology can be used to extract such missing genes by the piling-up of short reads on their own genomes without the need to rely on sequence similarity.
Overcoming the technical inaccuracy
Microarray technology has been used as a sophisticated platform for the expression profiling of previously annotated genes. However, with array-based technology, the evaluation of signal intensities close to background levels tends to produce artifacts because of high levels of background noise and/or cross-hybridization [ 2 ]; moreover, hybridization efficiency might vary with the probes used, so the calculation of real molar concentrations is inaccurate. Whereas the Agilent rice 44K Array is designed to quantify 60-mer sequences at the 3'-end of transcripts, mRNA-Seq quantifies transcript abundance on the basis of the number of sequences mapped over the whole gene model. In our study, the two measures of transcript abundance (Figure 6 ) and the change ratios (Figure 7 ) were highly correlated, as in a previous report [ 6 ]. Moreover, for genes expressed at low or extremely high levels (Figure 6 , root) and for genes differentially expressed in arrays (Additional file 4 : Figure S1a), mRNA-Seq seemed to be accurate. Therefore, mRNA-Seq measures the molar concentrations of transcripts accurately over a broad dynamic range.
Biological replication is required for identifying differentially expressed genes through statistical analysis, as in array-based analysis. Unfortunately, sequence-based transcriptome analysis incurs greater costs than microarrays for cDNA preparation and sequencing; this prevented us from performing further experiments. Illumina has continued to improve its sequencing technology. Read lengths have been continuously increased, and efficient base calling with the latest Illumina data analysis pipeline software has improved the quality and quantity of reads from the same raw image data. Controlled hydrolysis of RNA before cDNA synthesis substantially improved the uniformity of sequence coverage, as in a previous report [ 8 ]. These technical innovations in hardware and software will enable remarkable progress in reducing costs and in increasing the sensitivity of detection of sequences transcribed at low levels, the accuracy of quantification and detection of splice forms, and the prediction of the whole structures of transcripts.
Sequence-based transcriptome analysis has recently been applied to various organisms: Arabidopsis thaliana [ 4 , 39 ], yeasts [ 40 , 41 ], Drosophila melanogaster [ 6 ], and human [ 5 ]. During this study, two types of rice transcriptome analysis were reported, focusing on the transcriptional differences in two rice subspecies and their reciprocal hybrids [ 42 ] and in eight organs from different developmental stages of Oryza sativa L. ssp. indica '93-11' [ 43 ]. We analyzed salinity stress-inducible transcripts and constructed gene models based on the piling up of short reads by using the Cufflinks program. This approach should help to discover novel gene models without reliance on gene annotation.

Conclusions
Microarray-based gene expression profiling is limited to the analysis of annotated genes. In our mRNA-Seq analysis, unannotated salinity stress-inducible transcripts were identified on the basis of the piling up of mapped reads without reliance on gene annotation or FL-cDNA sequences. Some of these novel transcripts had ORFs encoding putative functional proteins and were differentially expressed in response to salinity stress. mRNA-Seq was valid as a gene expression profiling technology for quantifying the abundance of previously annotated genes. Our findings will contribute to improvement of our RAP-DB and to further sequence-based gene expression profiling in various organisms.

Background
Microarray technology is limited to monitoring the expression of previously annotated genes that have corresponding probes on the array. Computationally annotated genes have not been fully validated, because ESTs and full-length cDNAs cannot cover entire transcribed regions. Here, mRNA-Seq (an Illumina cDNA sequencing application) was used to monitor whole mRNAs of salinity stress-treated rice tissues.
Results
Thirty-six-base-pair reads from whole mRNAs were mapped to the rice genomic sequence: 72.0% to 75.2% were mapped uniquely to the genome, and 5.0% to 5.7% bridged exons. From the piling up of short reads mapped on the genome, a series of programs (Bowtie, TopHat, and Cufflinks) comprehensively predicted 51,301 (shoot) and 54,491 (root) transcripts, including 2,795 (shoot) and 3,082 (root) currently unannotated in the Rice Annotation Project database. Of these unannotated transcripts, 995 (shoot) and 1,052 (root) had ORFs similar to those encoding the amino acid sequences of functional proteins in a BLASTX search against UniProt and RefSeq databases. Among the unannotated genes, 213 (shoot) and 436 (root) were differentially expressed in response to salinity stress. Sequence-based and array-based measurements of the expression ratios of previously annotated genes were highly correlated.
Conclusion
Unannotated transcripts were identified on the basis of the piling up of mapped reads derived from mRNAs in rice. Some of these unannotated transcripts encoding putative functional proteins were expressed differentially in response to salinity stress.

Authors' contributions
HM, YO, and JW prepared plant materials and performed mRNA extraction and cDNA synthesis; HK, HY, and HI performed sequencing experiments and primary data analysis; YK, HS, HW, and TI performed data analysis; HM, YK, TI and TM designed the study; and HM wrote the manuscript. All authors read and approved the final manuscript.
Accession Numbers
All primary sequence read data have been submitted to DDBJ (DNA Data Bank of Japan) [ DRA000159 ], and microarray data have been submitted to the GEO (Gene Expression Omnibus) [GSE20746].
Supplementary Material

Acknowledgements
The authors thank F. Aota, K. Ohtsu, and K. Yamada for their technical assistance in sample preparation, and Dr. Y. Nagamura and R. Motoyama for their technical assistance in the microarray experiment. This work was supported by the Ministry of Agriculture, Forestry and Fisheries of Japan (Genomics for Agricultural Innovation, RTR-0001).

Citation: BMC Genomics. 2010 Dec 2; 11:683. License: CC BY.
PMC3016418 (PMID: 21129228)

Background
Integrated genetic and physical maps are extremely valuable resources for map-based gene cloning, comparative genome analysis, and sequencing and assembly of large and complex genomes. Screening bacterial artificial chromosome (BAC) libraries is an indispensable step for integration of genetic and physical maps, by which BAC clones and contigs can be placed and ordered on a genetic map. The use of an appropriate BAC pooling strategy [ 1 , 2 ] maximizes work efficiency. Two different approaches, hybridization-based and PCR-based, are available for BAC library screening. The hybridization-based approach is based on multi-dimensional pools of molecular markers or probes hybridizing with high-density BAC library screening membranes to identify the BAC clones associated with specific nucleotide sequences or genes. Overgo probes have been used for large scale physical mapping of plant and animal genomes, such as those of soybean [ 3 ], maize [ 4 ] and human [ 5 ]. Because a single overgo probe may hybridize with clones in multiple contigs and several probes may hybridize to the same clone, the overgo probes often fail to unequivocally associate a contig with a locus on a genetic map. This may arise from gene duplication, repeat sequences in a BAC clone or probe, or false positives.
A PCR-based six dimensional (6-D) BAC clone pooling strategy has been successfully used by other groups for BAC library screening in sorghum [ 6 ], maize [ 7 ] and soybean [ 8 ]. This pooling strategy includes a conventional three-dimensional (3-D) stack, called plate pool, face pool and side pool, and an additional three types of pools, called row pool, column pool and diagonal pool. A positive clone in a BAC plate is located by only three types of pools; the other three types of pools are used only for verification. Hence, this strategy not only uniquely defines individual clones and efficiently eliminates false positives but also reduces the tedious task of individual clone verification. However, the six dimensions result in a large number of pools for PCR screening, and limit the size of BAC libraries to be screened. For example, a total of 184 pools were generated for 24,576 BAC clones (~4× genome equivalent) in sorghum [ 6 ], 288 pools for 110,592 BAC clones (~6× genome equivalent) in maize [ 7 ], and 208 pools for 49,152 BAC clones (~6.6× genome equivalent) in soybean [ 8 ]. Because the number of pools that must be screened is a function of the BAC library size, the workload for screening larger BAC libraries, such as those of the grasses in the tribe Triticeae that includes wheat, barley and rye, will be unacceptably high. For example, in the physical mapping project ( http://wheatdb.ucdavis.edu ) of Aegilops tauschii , the diploid ancestor of the wheat ( Triticum aestivum ) D genome, five BAC libraries comprise a total of 302,976 clones (in 789 384-well plates), which equal ~8.5× Ae. tauschii genome equivalents. If the same pooling strategy as had been used in maize were used, 454 pools (131 + 48 + 48 = 227 pools for the basic 3-D stacks, plus three additional dimensions with an equal number of pools) would be needed. This would be extremely laborious in DNA pool preparation and PCR screening. In order to reduce pool number and relieve the PCR workload, Luo et al. (2009) [ 9 ] evaluated a high-throughput five-dimensional (5-D) clone pooling strategy based on both Illumina's GoldenGate assay and PCR screening of Ae. tauschii BAC clones. The major points of this strategy include: (1) Conventional 3-D grid design (plate, row, and column pools corresponding to plate, face, and side pools respectively in the 6-D strategy [ 6 - 8 ]) with the plate pools further grouped into two-dimensional (2-D) pools, referred to as super pools in Luo et al. 2009 [ 9 ]. Because the pooling procedure involves five different DNA pool sets, this design was called a 5-D clone pooling strategy [ 9 ]. (2) Super pools screened by Illumina's GoldenGate assay and row and column pools screened by PCR. (3) Positive plate pools at 3-D intersections further verified by PCR to find positive plate pools among the putative positives. This strategy reduces pool number and adopts the highly parallel GoldenGate assay for clone screening, and makes high-throughput clone screening possible for large genomes. With this strategy, 95% of Illumina's GoldenGate EST-based oligonucleotide markers unequivocally assigned BAC clones to loci on the genetic map [ 9 ].
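The 454-pool figure quoted above follows directly from the stack dimensions: a 6-D design needs one pool per plane of the basic 3-D clone stack, doubled for the three additional verification dimensions. A sketch of that arithmetic (our own illustration):

```python
def pools_6d(depth, rows, cols):
    """Number of pools in a 6-D design for a depth x rows x cols clone
    stack: one pool per plate plane, row plane and column plane, doubled
    for the three additional verification dimensions."""
    return 2 * (depth + rows + cols)
```

For the Ae. tauschii example, a 131 × 48 × 48 stack gives 2 × (131 + 48 + 48) = 454 pools.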
The 5-D clone pooling strategy still requires a considerable effort to identify positive plates among the putative positive plates in super pool screening by PCR assays. In addition, clone row pools and column pools also need to be screened by PCR. Here we describe a new implementation of this strategy, in which clone row and column pools are further divided into sub-pools with 1~2× genome coverage to minimize the number of positive pools among the putative positive pools. The rationale for limiting the number of genome equivalents screened at a single time is that the number of false positives increases quadratically with the number of genome equivalents screened. If a 1× genome equivalent is screened, then there is on average one plate row and plate column intersection and hence no false positive plate. If a 2× genome equivalent is screened, then there are on average four plate row and plate column intersections with four putative positive plates, of which two are false positive. If a 3× genome equivalent is screened, then there are on average nine plate row and plate column intersections with nine putative positive plates, of which six are false positive.
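The intersection arithmetic above can be checked with a short sketch. It assumes, as the text does, one true positive clone per genome equivalent and no shared plate rows or plate columns among them; the function name is ours, for illustration only:

```python
def intersection_counts(k):
    """For k genome equivalents screened at once, assuming the k true positive
    clones fall in distinct plate rows and plate columns, every crossing of a
    positive row and a positive column is a putative positive plate:
    k*k intersections in total, of which k are true and k*k - k are false."""
    putative = k * k
    true_pos = k
    false_pos = putative - true_pos
    return putative, true_pos, false_pos

for k in (1, 2, 3):
    print(k, intersection_counts(k))  # matches the 1x/2x/3x cases in the text
```

Running this reproduces the three cases given in the text: 1× yields (1, 1, 0), 2× yields (4, 2, 2), and 3× yields (9, 3, 6).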
In the implementation of the screening strategy pursued here, all pools are screened with the Illumina GoldenGate assays. The assay data is then combined with BAC contig data and overlapping clones in BAC contigs are employed as additional information to discriminate between pool intersections that harbour positive clones and intersections that are false positive. A computational algorithm was designed for this implementation of the 5-D pooling strategy, which was integrated into a newly developed software tool, FPCBrowser, for analyzing pooling data. | Methods
Improved 5-D clone pooling strategy
As described in [ 9 ], a conventional 3-D grid pooling is used as part of our algorithm. This pooling strategy includes three different types of pools: plate pools (PP), clone row pools (RP) and clone column pools (CP). For PP, DNAs of the 384 clones present in a plate are pooled. To make the screening of PP more efficient, the plate pools are further pooled as a 2-D array. In this 2-D array, PPs are pooled into plate row super-pools (RSP) and plate column super-pools (CSP). For RP and CP screening, the entire stack of 384-well plates is subdivided into N smaller pools (sub-pools) with 1~2× genome coverage. For RP screening, a total of 16 × N row pools is generated (one pool per plate row per sub-pool). Similarly for CP screening, a total of 24 × N column pools is generated (one pool per plate column per sub-pool). The four types of pools (RSP, CSP, RP, and CP) are screened with Illumina GoldenGate genotyping techniques [ 9 ] in this new implementation of the pooling strategy. Details of DNA pooling methods, Illumina GoldenGate genotyping and scoring of genotyping results were described in [ 9 ].
Ae. tauschii Phase I contig maps and BAC screening for 55 SNP markers
Of 302,976 Ae. tauschii clones contained in BAC and BiBAC libraries [ 16 ], a total of 270,720 were fingerprinted and automatically edited with GenoProfiler [ 17 ], and 199,190 were ultimately used for assembly [ 18 , 19 ] with FPC [ 12 , 13 ]. Using different stringencies and end merges, eight Phase I contig maps were generated [ 18 , 19 ]. These draft maps are available at http://wheatdb.ucdavis.edu [ 19 ].
All of the 302,976 Ae. tauschii clones (789 384-well plates) (Table 2 ) were pooled into 789 plate pools (each pool containing 384 clones). Plate pools were arranged into a 2-D array, consisting of 27 rows and 30 columns (See Table S1 in Additional file 1 ). This generated 57 super-pools (27 RSPs, each containing 11,520 clones, and 30 CSPs, each containing 10,368 clones), which decreased the number of pools to be screened at this level to 7% of the number of plate pools. A total of 190 clone pools, consisting of 80 RPs (5 sub-pools per row, one sub-pool per library) and 110 CPs (5 sub-pools per column, one column sub-pool per library), were generated across all 789 plates. Each clone row sub-pool contained 3,264-4,800 clones and each clone column sub-pool contained 2,176-3,200 clones (Table 2 ). DNA samples of a total of 217 pools were generated. The pools were screened with Illumina GoldenGate genotyping techniques [ 9 ] for 1,384 SNP markers which have been mapped on the Ae. tauschii D genome genetic map [ 20 ]. Among them, 55 SNP markers mapped on chromosome 2D genetic and physical maps and verified by PCR assay [ 9 ] were utilized as test data.
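The 2-D super-pool arrangement can be sketched as a mapping from a plate index to its (RSP, CSP) coordinates. The row-major layout below is an assumption for illustration; the actual assignment of plates to the 27 × 30 array is given in Table S1:

```python
N_ROWS, N_COLS = 27, 30  # 27 plate row super-pools x 30 plate column super-pools

def super_pool_of(plate):
    """Map a 0-based plate index (0..788) to hypothetical (RSP, CSP)
    coordinates, assuming a simple row-major fill of the 2-D array."""
    if not 0 <= plate < 789:
        raise ValueError("plate index out of range")
    return plate // N_COLS, plate % N_COLS

# The 27 x 30 = 810 array comfortably holds all 789 plate pools,
# and screening 27 + 30 = 57 super-pools replaces 789 plate-pool assays.
assert N_ROWS * N_COLS >= 789
print(super_pool_of(0), super_pool_of(788))
```

Under this layout, a plate is recovered as the unique intersection of one positive RSP and one positive CSP, which is why 57 assays suffice at the plate level.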
Algorithm of clone deconvolution using a contig map
Theoretically the 5-D clone pooling is an unbalanced design because a truly positive clone (a well in a plate) for a molecular marker cannot be uniquely determined from the four types of positive pool hits. Positive plate hits are obtained as intersections of positive plate RSPs and plate CSPs in the 2-D super pool array (Figure 1 and Table S1 in Additional file 1 ). Candidates for positive clones are at intersections of the positive plates and positive clone RPs and clone CPs. Truly positive intersections must be distinguished from false positives (F+), and such F+ must be removed. Suppose, for example, that pools of clones equivalent to 4× genome coverage are screened with a marker. Four true positive (TP) clones are therefore expected in the pools. Assuming that these clones are in different plates, different rows, and different columns within a plate, there would be a maximum of 4 TP intersections among a total of 256 experimental intersections, generating 256 candidate clones. If all clones in a specific row or column of the whole stack of plates are divided into 4 sub-pools with 1× genome coverage each, we get 4 RPs of 1× genome coverage in a row and 4 CPs of 1× genome coverage in a column, with on average 1 row pool hit in each sub-row pool and 1 column pool hit in each sub-column pool. The total number of candidate intersections will then be only (4×4×1×1)×4 = 64. This improved clone pooling strategy for clone row and column pools can dramatically reduce the number of F+ clones. However, 60 of the 64 clones are still F+ clones. A large number of dedicated PCR assays [ 9 ] or extra pools [ 6 - 8 ] would be required to eliminate the F+ clones and detect the TP clones. Rather than using PCR, we use information about clone overlaps in the existing contig maps to find the TP clones among the candidates.
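The worked 4× example can be reproduced with a small counting sketch. As in the text, it assumes the positive clones fall in distinct plates, rows and columns, and that hits distribute evenly over sub-pools; the helper is illustrative, not part of FPCBrowser:

```python
def candidates(coverage, sub_pools=1):
    """Count candidate clone intersections for a marker.
    Plate candidates come from RSP x CSP crossings (coverage^2); within each
    sub-pool there are coverage/sub_pools positive RPs and CPs, and the
    sub-pools contribute independently."""
    plate_hits = coverage * coverage        # positive RSP x CSP intersections
    per_sub = coverage // sub_pools         # positive RPs (and CPs) per sub-pool
    return plate_hits * per_sub * per_sub * sub_pools

print(candidates(4, sub_pools=1))  # 16 plates x 4 RPs x 4 CPs = 256
print(candidates(4, sub_pools=4))  # (4x4 x 1 x 1) x 4 sub-pools = 64
```

With 4 true positives in both cases, sub-pooling cuts the false positives from 252 to 60, matching the figures in the text.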
Clone deconvolution for a marker identifies TP clones in the population of candidate clones suggested by CSP, RSP, RP, and CP hits. The basic idea is that the TP clones are a subpopulation of candidate clones with a unique characteristic: they share part of their fingerprint and therefore must be neighbours within a single contig of a well-assembled or "perfect" contig map. A truly perfect contig map is almost impossible because of imperfect fingerprints and assembly errors. Here we treat a contig map as "perfect" for a marker if all TP clones associated with that marker are neighbours within a single contig, irrespective of the veracity of the rest of the contig. Clones should have a spanning relation or inclusion relation (Figure 4A ) or at least a simple overlapping relation between each other (Figure 4B ). Herein, clones a and b are said to span clone c if and only if f(c) is contained in the union of f(a) and f(b) , where f(a) , f(b) and f(c) are the fingerprints of clones a , b and c , respectively. Clone a is contained in clone b if and only if f(a) belongs to f(b) . Using these relations it is possible to search exhaustively for neighbouring clones among the candidate clones. The search results are often not unique and a discrimination score must be applied to infer an optimal solution. If a search generates no result, the deconvolution has no solution.
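Treating a fingerprint as a set of restriction-fragment bands, the three relations can be written directly with set operations. This is a minimal sketch of the definitions above; the toy fingerprints are purely illustrative:

```python
def spans(fa, fb, fc):
    """Clones a and b span clone c iff f(c) is contained in f(a) union f(b)."""
    return fc <= (fa | fb)

def contains(fa, fb):
    """Clone a is contained in clone b iff f(a) belongs to f(b)."""
    return fa <= fb

def overlaps(fa, fb):
    """Simple overlap: the two fingerprints share at least one band."""
    return bool(fa & fb)

# Toy fingerprints (sets of band identifiers), not real FPC data
a, b, c = {1, 2, 3}, {3, 4, 5}, {2, 3, 4}
print(spans(a, b, c), overlaps(a, c), contains(a, b))
```

Here clones a and b span c (every band of c appears in a or b), a and c overlap, and a is not contained in b.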
In practice, contigs are assembled in several steps, including initial assembly at a high stringency, DQing questionable clones at increasing stringencies, and end-to-end merging and singleton-to-contig end merging at progressively lower stringencies. In a contig map assembled at an initial stringency, TP clones for a marker may be scattered among multiple contigs and/or singletons (Figures 4C , 4D and 4E ). Through several merges, a relatively "perfect" contig map can be expected. Using this "perfect" contig map, a computational algorithm was designed as follows:
(1) For each marker, find all positive candidate clones C at the intersections of the four types of positive pools (Figure 1 ).
(2) Search all contigs in the contig map and in each contig find a clone subset C 1 which belongs to C if any.
(3) If the size of C 1 (the number of clones) for a contig ≥ 2, find a clone subset C 2 in which clones have spanning or inclusion relation, or overlap each other.
(4) If the size of C 2 for a contig ≥ 2, calculate the number of F- clones (FN). F- clones are clones located between the left-most and right-most positions of C 2 in a contig but not included in C .
(5) Calculate a discrimination score for each clone set C 2 : Score = 0.8 × NC/MC + 0.2 × (1 - FN/NC), where NC is the number of clones in C 2 and MC is the maximum number of clones over all sets of C 2 , i.e., the number of clones in the top set of C 2 after sorting by the number of clones.
(6) Sort all clone sets of C 2 by the score of C 2 in descending order. Choose the clone set with the highest score. If the score ≤ 0, no solution is found. If there are two or more clone sets with the same score, non-unique solutions are obtained for further manual verification.
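Steps (5) and (6) can be sketched as follows. Each candidate set is reduced here to its (NC, FN) pair; this is an illustration of the scoring rule, not FPCBrowser code:

```python
def pick_clone_set(candidate_sets, max_clones, weights=(0.8, 0.2)):
    """Rank candidate clone sets C2 by the discrimination score of step (5):
    Score = 0.8 * NC/MC + 0.2 * (1 - FN/NC),
    where NC = clones in the set, MC = clones in the largest set, and FN =
    false-negative clones spanned by the set. `candidate_sets` is a list of
    (NC, FN) pairs. Returns the sets in descending score order plus the top score."""
    def score(nc, fn):
        return weights[0] * nc / max_clones + weights[1] * (1 - fn / nc)
    ranked = sorted(candidate_sets, key=lambda s: score(*s), reverse=True)
    return ranked, score(*ranked[0])

# Three hypothetical candidate sets; the largest has 4 clones (MC = 4)
ranked, best_score = pick_clone_set([(4, 1), (2, 0), (3, 3)], max_clones=4)
print(ranked[0], round(best_score, 3))  # (4, 1) wins with score 0.95
```

The largest, least-gappy set wins: (4, 1) scores 0.8 × 1 + 0.2 × 0.75 = 0.95, while the two smaller sets tie at 0.6 and would, per step (6), trigger manual verification if they were the top candidates.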
Clone deconvolution algorithm
Clone deconvolution identifies clone(s) that have a desired marker by analyzing information about pool intersections in a multi-dimensional BAC pooling design. The 5-D pooling strategy employing row-pools (RP), column-pools (CP), plate pools (PP), column super-pools (CSP), and row super-pools (RSP) is graphically detailed in Luo et al. 2009 [ 9 ]. The 5-D deconvolution algorithm is based on information generated by a combination of pool screening with the Illumina GoldenGate assay and clone overlaps in the contig map (See details in Methods ). The Illumina GoldenGate assay detects a positive plate RSP, plate CSP, clone RP and clone CP for each molecular marker (Figure 1 ). A positive plate pool is at the intersection of plate RSP and plate CSP in the super pool 2-D design matrix (see Table S1 in Additional file 1 ). Candidates for positive clones are at intersections of positive clone RPs, positive clone CPs, and positive plate pools. In each set of positive clone candidates only a few clones are truly positive (TP) clones; the rest are false positive clones at multiple intersections generated when more than a single positive RP and CP are obtained by Illumina GoldenGate screening. Since multiple genome equivalents are always screened, more than one TP clone is expected. Because TP clones share the same marker, they should also share a portion of their fingerprint profiles and hence be neighbours in a contig (unless specified, it is assumed throughout that contig assembly generates only "perfect" contigs). An exhaustive search across all contigs for a group of clones that are among the candidate clones and are neighbouring clones in a contig can pinpoint the TP clones among the candidate clones. If the marker is present only once in a genome (single copy), only one set of clones should be TP clones. 
A discrimination score can be assigned to each set of clones and the set of clones with the maximum score is inferred as the most likely TP clones associated with that marker. Because the search is exhaustive and no sole deterministic solution is available for an unbalanced multiple dimensional pooling design, this is an exhaustive search or brute force algorithm for an optimization problem [ 10 , 11 ].
Of 1,384 SNP markers mapped to the Ae. tauschii D genome physical map, the contig location of 55 markers had been verified by PCR screening in the previous study [ 9 ]. This data set was used to test the new clone deconvolution algorithm. Three different versions of Phase I Ae. tauschii contig assemblies were generated with different assembly stringencies and end merges, Assembly 1, Assembly 1.1, and Assembly 2, and these assemblies were used in clone deconvolution for comparison of the "perfectness" of the assemblies (Table 1 ). Assembly 1 was initiated at 1×10^-15 followed by DQing and contig end-to-end merging, and generated 11,852 contigs. The DQing is a process of gradually eliminating questionable clones (Q-clones) by the DQer module in the FPC software [ 12 , 13 ]. The number of contigs was further reduced to 7,447 in Assembly 1.1 by relaxing the conditions of contig end-to-end merges. Assembly 2 is an initial assembly at a higher stringency of 1×10^-60 with 17,832 contigs.
To evaluate the accuracy of the clone deconvolution algorithm, two performance metrics, recall and precision, were used. The recall is defined as the number of TP markers deconvoluted by the algorithm divided by the total number of markers analyzed, and the precision is defined as the number of TP markers divided by the total number of markers with solutions deconvoluted by the algorithm. The TP markers are those for which TP clones were assigned by the algorithm. Different versions of contig maps resulted in significantly different deconvolution results. In Assembly 1.1, 48 (87%) out of 55 markers were successfully associated with TP clones (0.87 recall) with a precision of 91% (Table 1 ). In Assembly 2, only 15% of markers were correctly assigned to TP clones. Therefore, a relatively "perfect" contig map is a prerequisite for clone deconvolution. In our example, Assembly 1.1 approximated the "perfect" assembly the best.
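The two metrics can be written as a small helper. The figure of 52 markers with solutions used below is an assumption for illustration (the paper reports the metrics only as percentages); 48 of 55 markers were assigned true-positive clones with Assembly 1.1:

```python
def recall_precision(n_true_pos, n_total, n_with_solution):
    """Recall = TP markers / all markers analyzed;
    precision = TP markers / markers for which the algorithm returned a solution."""
    return n_true_pos / n_total, n_true_pos / n_with_solution

# 48 TP markers out of 55 analyzed; 52 markers with solutions is illustrative
recall, precision = recall_precision(48, 55, 52)
print(round(recall, 2), round(precision, 2))
```

Recall is insensitive to no-solution markers, whereas precision penalizes only the markers that were deconvoluted but assigned to false-positive clones, which is why the two can diverge.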
In the analysis using the Assembly 1.1 contig map, 3 markers were found without solutions, and 4 markers were assigned to false positive (F+) clones. The seven failed deconvolutions had two primary causes: (a) low genome coverage in CP or RP (4 markers with < 4× genome equivalent coverage), and (b) F+ and false negative (F-) screening results in CP pools and RP pools (Figure 2 ). F- pool hits are mostly caused by low genome coverage of pools. F+ pool hits are likely related to either data clustering by the Illumina GoldenGate assay or failures of the Illumina GoldenGate assay. Only one out of the 7 failed markers was due to plate super pool deficiency (Figure 2 ). Obviously, a reasonable increase in pool coverage will considerably reduce F- hits and increase the success rate and precision of clone deconvolution.
Software implementation
The clone deconvolution algorithm developed for the 5-D clone pooling strategy has been integrated into a newly developed tool, FPCBrowser, a Java-written, platform-independent and GUI-based software tool (Figure 3 ). FPCBrowser was initially designed as a portable physical map viewer to comprehensively view FPC contig maps and related information such as clones, fingerprints, and markers on multiple platforms. A Java-based relational database, HSQLDB [ 14 ], was adopted in FPCBrowser to store all source data of an FPC contig map and the fingerprints of BAC clones in a physical mapping project, which are also required in clone deconvolution, as well as the analysis results of clone deconvolution. A program module for clone deconvolution was appended to FPCBrowser for pooling data analysis (Figure 3 ).
For clone deconvolution, FPCBrowser needs a relatively "perfect" contig map as input (*.fpc, an output file from the FPC software), a 2-D super pool matrix design file and a formatted pool hit file based on the GoldenGate genotyping assay (see details in the FPCBrowser user's guide at http://avena.pw.usda.gov/wheatD/fpcbrowser.shtml ) [ 15 ]. The clone deconvolution module generates as output a summary file, an anchored positive marker file used for convenient conversion to an ACE file to merge markers into a FPC contig map, and a deconvolution result file for each marker for further manipulation. A Java tool for Ace file conversion directly from the result file is also available in the FPCBrowser package.
Performance of this module depends on several major factors, such as the genome coverage of pools, the number of contigs and contig lengths. When the pooling data of 55 SNP markers and Assembly 1.1 of the contig map were used (Table 1 ), only 2.1 minutes were needed by a desktop computer (Asus P6T, Intel core i7 920, 12 GB of RAM, and an Ubuntu Linux 9.04 64 bit operating system) to execute the operation. The executable binary version of FPCBrowser is freely available at http://avena.pw.usda.gov/wheatD/fpcbrowser.shtml [ 15 ]. | Discussion
The new implementation of the 5-D clone pooling strategy employs the GoldenGate genotyping assay to screen BAC clones in 4 types of pools, RSP, CSP, RP and CP (four dimensions), which replaces PCR screening and verification for positive plates in RSP and CSP and for positive clones in RP and CP in the previous implementation [ 9 ]. The RP and CP pools are further divided into sub-pools with 1~2× genome coverage to reduce the number of false positive clones in clone deconvolution. Although the total number of screened pools increases, this does not significantly raise the cost for the high-throughput GoldenGate genotyping assay. The fifth dimension is the information about overlapping BAC clones in a contig map, which is used to detect and verify truly positive clones among a pool of candidate clones at the intersections of the preceding four dimensions. A clone deconvolution algorithm and the corresponding software FPCBrowser have been developed for this purpose. This new implementation provides a high-throughput and low cost approach to BAC library screening and deconvolution of clone pools, and tremendously reduces the workload otherwise required for PCR screening and verification of a large number of pools [ 6 - 9 ]. In a test with 55 SNP markers previously associated with Ae. tauschii contigs via the GoldenGate assay and manual PCR [ 9 ], this implementation yielded an 87% success rate with 91% precision. Some markers resulted in F+ or had no solutions compared to the previous implementation [ 9 ]. Improving contig assembly and increasing genome coverage of pools can reduce no-solution and F+ markers.
A "perfect" contig map and adequate pool coverage are two critical components for the clone deconvolution algorithm. Although contig assembly never results in an absolutely perfect contig map because of assembly errors, substandard fingerprinting, chimeric clones, and other reasons, relatively "perfect" contigs can be obtained by an appropriate contig assembly strategy. As long as truly positive clones associated with the marker are overlapping each other in a contig, the contig map for that marker is "perfect", irrespective of the status of the rest of the contig. In addition, if the markers used have been mapped on a genetic map, mapping data can be used for the verification of pool deconvolution. Collocation of markers in a single contig and on a genetic map can validate deconvolution. If a contig is anchored with only a single marker, additional PCR verification of deconvolution may be prudent.
The test data showed that low genome coverage of pools is another reason for F+ anchoring of clones or the inability to place a BAC clone on the physical map (no solution), resulting in low-precision contig anchoring. Although the average coverage of RP and CP was 8.5× in our test data set of 55 SNP markers (Table 2 ), the actual genome coverage for some of the markers was still low (less than 2×). If a better contig map and higher actual genome coverage were used, a deconvolution success rate higher than the 87% achieved here could be expected.
A new implementation of 5-D BAC clone pooling strategy employing both the GoldenGate assay to screen BAC pools and the use of previously assembled BAC contigs is suggested. The implementation is shown to be a high-throughput, low cost, rapid, and feasible approach to screening BAC libraries and anchoring of BAC clones and contigs on genetic maps. The software FPCBrowser with the integrated clone deconvolution algorithm has been developed and is downloadable at http://avena.pw.usda.gov/wheatD/fpcbrowser.shtml [ 15 ]. | Background
A five-dimensional (5-D) clone pooling strategy for screening of bacterial artificial chromosome (BAC) clones with molecular markers utilizing highly-parallel Illumina GoldenGate assays and PCR facilitates high-throughput BAC clone and BAC contig anchoring on a genetic map. However, this strategy occasionally needs manual PCR to deconvolute pools and identify truly positive clones.
Results
A new implementation is reported here for our previously reported clone pooling strategy. Row and column pools of BAC clones are divided into sub-pools with 1~2× genome coverage. All BAC pools are screened with Illumina's GoldenGate assay and the BAC pools are deconvoluted to identify individual positive clones. Putative positive BAC clones are then further analyzed to find positive clones on the basis of them being neighbours in a contig. An exhaustive search or brute force algorithm was designed for this deconvolution and integrated into a newly developed software tool, FPCBrowser, for analyzing clone pooling data. This algorithm was used with empirical data for 55 Illumina GoldenGate SNP assays detecting SNP markers mapped on Aegilops tauschii chromosome 2D and Ae. tauschii contig maps. Clones in single contigs were successfully assigned to 48 (87%) specific SNP markers on the map with 91% precision.
Conclusion
A new implementation of 5-D BAC clone pooling strategy employing both GoldenGate assay screening and assembled BAC contigs is shown here to be a high-throughput, low cost, rapid, and feasible approach to screening BAC libraries and anchoring BAC clones and contigs on genetic maps. The software FPCBrowser with the integrated clone deconvolution algorithm has been developed and is downloadable at http://avena.pw.usda.gov/wheatD/fpcbrowser.shtml . | Abbreviations
BAC: bacterial artificial chromosome; FPC: fingerprint contig; PCR: polymerase chain reaction; SNP: single nucleotide polymorphism; PP: plate pool; SP: super pool; CSP: column super pool; RSP: row super pool; CP: clone column pool; RP: clone row pool; 5-D: five dimensional; 6-D: six dimensional; TP: true positive; F+: false positive; F-: false negative.
Authors' contributions
FMY, MCL, ODA and JD planned the work. FMY, MCL and JD developed the clone deconvolution algorithm. FMY designed and implemented the FPCBrowser software. MCL prepared DNAs for Illumina genotyping. FMY and MCL performed data analysis. KX and KRD constructed BAC pools. FMY and JD drafted the manuscript. All authors read and approved the final draft of the manuscript.
Supplementary Material | Acknowledgements
Authors are grateful to Alina R. Akhunova for preparing BAC pools and Vanessa Rashbrook for performing the Illumina GoldenGate assays. This work is supported by the US National Science Foundation grant IOS 0701916. | BMC Genomics. 2010 Dec 6; 11:692
PMC3016419 | 21143916 | Background
Male infertility is becoming increasingly prevalent, partly due to environmental factors, but many defects in sperm development arise from a genetic cause. Problems in the production and maturation of sperm are the most common causes of male infertility, resulting in low sperm numbers, morphologically abnormal sperm or low sperm motility [ 1 - 3 ]. Despite efforts to reveal the genes and their functions in spermatogenesis, little is known about the underlying causes of male infertility. Therefore, the localization and identification of mutations specifically affecting spermatogenesis provide invaluable information for investigating the causes of male infertility.
Mammalian spermatogenesis is a complex process, where diploid spermatogonia develop into haploid, highly specialized spermatozoa. Spermatogenesis includes many testis-specific processes that are controlled by complex regulatory mechanisms [ 4 , 5 ]. During spermiogenesis, haploid round spermatids undergo dramatic biochemical and morphological changes that are governed by specialized gene expression and interactions between various genes and their protein products [ 6 ]. Identification of genes involved in sperm development is a prerequisite to understanding the molecular mechanisms of spermatogenesis.
Sperm development is known to be disrupted during spermiogenesis in several acrosomal defects; e.g. globozoospermia in humans, where spermatozoa lack an acrosome [ 7 - 9 ], and the knobbed acrosome defect (KAD) in bulls, boars, stallions, rams, and dogs [ 10 - 15 ]. The acrosome is an organelle that develops over the anterior half of the head in the spermatozoa. It is a cap-like structure derived from the Golgi apparatus. The acrosome contains digestive enzymes, which break down the zona pellucida of the ovum, allowing the sperm to deliver its haploid nucleus into the ovum. Disturbances of acrosomal development and function significantly impair the fertilizing capacity of spermatozoa [ 16 ].
Knobbed acrosome defect has been recently described in the Finnish Yorkshire pig population [ 15 ]. Testicular weights of boars with KAD did not differ from control boars. However, affected boars had a smaller seminiferous tubule diameter and a lower number of Sertoli cells relative to control boars [ 15 ]. Investigation of the pedigrees of KA-affected boars suggested an autosomal recessive inheritance of the defect. Generally two common boars were identified in the pedigree of the boars with the KAD. Fertility of KA-affected boars is severely compromised. Depending on the amount of knobbed spermatozoa (25-81%), affected boars had a poor non-return rate, ranging from no pregnancies to 47%; thus KA-affected boars produced no offspring or on average 2.5 fewer piglets per litter than control boars. Here we have characterized further the severity of the sperm head abnormalities in KA-affected boars.
A whole genome scan with microsatellite markers showed increased homozygosity in KA-affected boars in chromosomes 3, 8, 14 and 15 [ 15 ]. However, no statistically significant association was detected with available microsatellite markers. In this study we have used the PorcineSNP60 Genotyping BeadChip (Illumina) in order to increase marker density and accurately map the KAD associated region in pigs. All affected boars were homozygous for SNPs covering 432 kb on porcine chromosome 15. The coding region of two genes was located within this homozygous region and sequenced from both a KA-affected and control boar. | Methods
Animal material
Experimental material included 14 Finnish Yorkshire boars affected with KAD and 21 control boars. All affected boars were clinically examined and shown to display symptoms typical of the syndrome, but no other abnormalities. Sperm from affected and control boars was collected and the DNA obtained following phenol/chloroform extraction. Samples were diluted to 100 ng/μl in TE-buffer and used as templates for PorcineSNP60 Genotyping BeadChip (Illumina). Genomic DNA was also used for sequencing of the HECW2 exon 1 and SNP1.
For microscopical examination representative semen samples from KA-affected and control boars were fixed in formaldehyde for confocal laser scanning and transmission electron microscopy analyses.
Confocal laser scanning microscopy
Spermatozoa were labeled with the LIVE/DEAD Reduced Biohazard Viability kit (red, L23102, Invitrogen). The labelling protocol was in accordance with the recommendations of the manufacturer. In brief, 50 μl DMSO was added to one vial of fluorescent dye and thoroughly mixed to make a stock solution. Spermatozoa were suspended in PBS at approximately 1 × 10^6/ml. One μl of fluorescent dye was added to the suspension. After 30 min incubation at room temperature, spermatozoa were washed and resuspended in 1 ml PBS twice. One drop of suspension was put on a Superfrost slide, coverslipped, and subsequently analyzed on a BioRad MRC 1024 confocal laser scanning microscope. Three-dimensional reconstructions were performed using Volocity LE free software ( http://www.improvision.com ).
Transmission electron microscopy (TEM)
Cells were fixed in 2.5% glutaraldehyde in PBS (pH 7.2) for 2 hours at 4°C. After washing, samples were postfixed in 1% OsO 4 and 0.5% K-ferrocyanide in PBS for 2 hours, dehydrated with a graded series of acetone, and embedded in Spurr's mixture. Semithin sections were stained with 0.5% toluidine blue (pH 8.5). Areas of interest were trimmed out by comparing the cut surface of the blocks with the semithin sections. Ultrathin sections were cut on an RMC MT-7 ultramicrotome, stained with 2% uranyl acetate and lead citrate, and analyzed on a Philips CM10 electron microscope.
Genotyping
For high throughput genotyping, DNA samples were analyzed with the PorcineSNP60 Genotyping BeadChip (Illumina Ltd, San Diego, USA) at the Institute for Molecular Medicine Finland (FIMM, Helsinki, Finland). The PorcineSNP60 BeadChip has recently been developed as an outcome of the porcine whole genome sequencing project [ 39 ].
Expression profiling and sequencing of STK17b and HECW2
For sequencing the full-length mRNA of the candidate genes STK17b and HECW2 , samples of testicular tissue from a KA-affected and a control boar were collected and stored in RNAlater buffer (Qiagen). Total RNA purification was performed with the RNeasy Protect Mini kit (Qiagen). Extracted RNA was reverse transcribed (RT-PCR) using oligo T primers and an ImProm-II Reverse Transcription System (Promega) according to the manufacturer's instructions and amplified using gene specific primers (Table 2 ). Expression of gene fragments was assessed by gel electrophoresis. For sequencing, the PCR amplicons were purified using ExoSAP-IT (Amersham Biosciences) and sequenced in both directions with the same primers used in the amplification procedures. Sequencing was performed on a MegaBace 500 capillary DNA sequencer (Amersham Biosciences) using DYEnamic ET Terminator Kits with Thermo Sequenase II DNA Polymerase (Amersham Biosciences).
Statistical analysis
A recessive mode of inheritance was tested for each SNP separately. The recessive model was selected because the pedigree of KA-affected boars suggested a recessive mode of inheritance and because of the low frequency of the defect in the Finnish Yorkshire pig population. In the recessive model, for each SNP the frequency of animals homozygous for the minor allele (or for the major allele) was compared between cases and controls against the frequency of heterozygous and other homozygous animals. In order to correct for multiple testing, a permutation procedure was adopted to create empirical genome-wide P-values. Association tests and permutation were carried out using the software package Plink [ 40 ]. Haplotypes, the linkage disequilibrium plot and the Manhattan plot were produced with Haploview [ 41 ].
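The single-SNP test can be sketched as a label-permutation test under the recessive model. This is an illustrative sketch of the idea, not the Plink implementation (which additionally handles genome-wide max-statistic correction); the genotype data below are toy values:

```python
import random

def recessive_perm_test(case_geno, control_geno, risk_hom, n_perm=10000, seed=1):
    """Permutation test for one SNP under a recessive model.
    Genotypes are strings like 'AA', 'AB', 'BB'; `risk_hom` is the homozygote
    genotype being tested. The statistic is the difference in homozygote
    frequency between cases and controls; the empirical P-value is the
    fraction of case/control label permutations at least as extreme."""
    rng = random.Random(seed)

    def stat(cases, controls):
        freq = lambda g: sum(x == risk_hom for x in g) / len(g)
        return freq(cases) - freq(controls)

    observed = stat(case_geno, control_geno)
    pooled = list(case_geno) + list(control_geno)
    n_case = len(case_geno)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:n_case], pooled[n_case:]) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Toy data echoing the study design: 14 cases, 21 controls
cases = ['BB'] * 14
controls = ['BB'] * 2 + ['AB'] * 12 + ['AA'] * 7
obs, p = recessive_perm_test(cases, controls, 'BB')
print(round(obs, 3), p)
```

With all 14 cases homozygous and only 2 of 21 controls homozygous, the observed frequency difference is large and the empirical P-value is correspondingly small.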
Microscopical analysis of the KAD
In confocal laser scanning microscopy, three-dimensional reconstructions of the spermatozoa with acrosomal granules indicated that the granules protruded on both sides and contained a vacuole (Figure 1A ). TEM analyses confirmed the three-dimensional protrusion of the granules and the occurrence of vacuoles within the granules. The nucleus was also shown to be affected, as evident from the Y-shaped form at the apical end (Figure 1B ), suggesting that the defect affects both the chromatin and the acrosome. These findings highlight that this particular and previously unknown KA-defect appears to be a serious form of the acrosomal granule defect.
SNP quality measures
Based on all available SNPs and the method used to estimate IBD in the Plink software package (pi_hat), the average relatedness among cases and controls was 0.24 and 0.26, respectively. These levels of relatedness are typical of the studied Finnish Yorkshire pig population. The average sample call rate was 95%. There were 2815 SNPs that did not work for any of the samples analysed. Excluding these SNPs, the average SNP call rate was 0.9982 (s.d. = 0.007) and the average minor allele frequency was 0.25 (s.d. = 0.14). Overall, the dataset contained 9216 monomorphic SNPs. The observed distribution of P-values in the Hardy-Weinberg equilibrium test statistics did not differ from expectations. In total, 183 SNPs (excluding SNPs on the X-chromosome) had a P-value < 1.0E-06, which is lower than expected.
Genome-wide association analysis
The association test was performed for 47055 SNPs. The Manhattan plot of the -log10-transformed P-values is presented in Figure 2 . The recessive model identified a KAD-associated region covering approximately 3 Mbp between 93 and 96 Mbp (pig genome build 9) on chromosome 15. After permutation, five SNPs were statistically significant (P-value = 0.0002, Table 1 ). Four of these SNPs (ALGA0086494, DRGA0015302, MARC0011300, and CASI0005693) are located within a 1.4 Mbp region and were in complete linkage disequilibrium (D' = 1.0, r2 = 1.0). Twelve out of 14 KAD cases had inherited two identical copies of the haplotype covering these and the intervening SNPs, indicating extended homozygosity in this region and thus a common ancestral origin (Figure 3 ). All KA-affected boars shared a 0.7 Mbp homozygous region between SNPs DIAS0000367 and ALGA0086503 (Figure 3 , additional file 1 ). The CASI0005693 SNP was in stronger linkage disequilibrium with ALGA0086494 and the other significant SNPs than with its neighbouring SNPs, suggesting that the position of CASI0005693 may change in a more refined genome build of this region (see additional file 1 ). The fifth significant SNP (MARC0020403) was located 4 Mbp from the other four SNPs and was in weaker linkage disequilibrium with them (D' = 1.0, r2 = 0.13).
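The recessive case/control comparison with permutation-based empirical P-values described in Methods can be sketched for a single SNP as follows. Genotypes are coded as the number of minor alleles (0, 1, 2); all data below are simulated toy values, not from the study. In the genome-wide setting the permutation would additionally account for all SNPs (e.g. via the best statistic per permutation), which is omitted here for brevity.

```python
import random

# Recessive 2x2 test: minor-allele homozygotes (genotype 2) versus all other
# genotypes, compared between cases and controls, with a label-permutation
# empirical P-value. Toy data only.

def recessive_chi2(genotypes, is_case):
    """2x2 chi-square statistic for the recessive genotype split."""
    a = b = c = d = 0
    for g, case in zip(genotypes, is_case):
        if case:
            if g == 2:
                a += 1
            else:
                b += 1
        elif g == 2:
            c += 1
        else:
            d += 1
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def empirical_p(genotypes, is_case, n_perm=1000, seed=1):
    """Fraction of case/control label permutations whose statistic
    reaches the observed one (with the usual +1 correction)."""
    rng = random.Random(seed)
    observed = recessive_chi2(genotypes, is_case)
    labels = list(is_case)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if recessive_chi2(genotypes, labels) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy example: all five cases are homozygous for the minor allele,
# none of the five controls is.
geno = [2, 2, 2, 2, 2, 0, 1, 0, 1, 0]
case = [True] * 5 + [False] * 5
p = empirical_p(geno, case)
```

With this perfect case/control separation the observed chi-square is maximal for the sample, and only the rare permutations that reproduce the separation reach it, yielding a small empirical P-value.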
Candidate genes HECW2 and STK17b
The most promising candidate gene, Ubiquitin-protein ligase E3 ( HECW2 ), was located within the haplotype of the two SNPs with the highest P-values, ALGA0086494 and DRGA001532 (Table 1 , additional file 1 ). All KA-affected animals were homozygous for these SNPs, and only two control animals carried the same homozygous genotype as the KA-affected boars (Figure 3 ). Furthermore, one of these two animals appeared to have a SME-defect, and the other was removed from breeding at a young age due to weak leg conformation, so that no fresh sperm samples were available for analysis. The SME-defect is a cyst malformation in the sperm head, with indications that it is of acrosomal origin [ 17 , 18 ].
HECW2 is expressed in the testis [ 19 ] and functions in ubiquitin-mediated proteolysis [ 20 ]. Ubiquitin signals have been detected during acrosome development [ 21 ], and the deubiquitinating enzyme mUBPy is upregulated in the testis of the wobbler mouse, which is infertile due to the lack of a functional acrosome [ 22 ].
Another gene within the KAD homozygous region was serine/threonine kinase 17b ( STK17b, DRAK2 , additional file 1 ). STK17b is a serine/threonine kinase with a role in the regulation of apoptosis [ 23 - 25 ]. STK17b is highly expressed in the testes, where apoptosis plays an important role during spermatogenesis. Even though the phenotype of KA-affected boars does not implicate a defect in apoptosis, the expression and sequence of STK17b mRNA were determined.
Analysis of the porcine HECW2 gene
The expression pattern of different HECW2 fragments (Table 2 ) appeared to be comparable in the KA-affected and control boar. The full-length mRNA of porcine HECW2 [GenBank HM562353 ] was sequenced from the testis of one KA-affected and one control boar. The total length of the sequenced HECW2 transcript was 4802 bp, with high homology to other mammalian species. When compared to the human HECW2 gene, the porcine sequence started at position 177 bp in exon 2. In the pig, exon 1 did not appear to be expressed in the testis. However, based on the genome sequence, exon 1 was highly conserved compared with the human, suggesting that it may have an important role in HECW2 expression, at least in some tissues. In man, the protein coding region starts at mRNA position 184 bp (exon 2). The human HECW2 protein consists of 1572 aa. Current data suggest that the corresponding pig protein sequence is 1574 aa with 96% homology to the human sequence. Similarly, the full-length HECW2 protein in the mouse comprises 1578 aa and has 95% homology to HECW2 in the pig.
Sequencing of the porcine HECW2 mRNA and of exon 1 from genomic DNA of a KA-affected and a control boar revealed two SNPs at mRNA positions 1563 (SNP1) and 2233 bp (SNP2). SNP1 causes a change in the protein sequence at position 519 aa from isoleucine to threonine (Figure 4 ). This SNP was further genotyped in all 14 KA-affected and 10 control boars. All KA-affected boars were homozygous for this SNP, but four control boars also carried the same homozygous genotype, discounting this as the causal mutation of the KAD. In addition to these two SNPs, a deletion of nine bp (three aa) was detected after the nucleotide at position 3348 bp (1113 aa) when compared to the porcine reference sequence [Ensembl: ENSSSCG00000016068]. This deletion appears to be very common in mammalian species (Figure 4 ). The porcine reference sequence for HECW2 included exons 3-28 (based on human [GenBank AB037722 ]); however, our sequencing results indicate that exons 2, 29 and 30 are also expressed in the pig testis.
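Mapping an mRNA SNP position to the affected codon and residue, as done above for SNP1 (mRNA position 1563, Ile519Thr), can be sketched as follows. The CDS start offset (mRNA position 8) and the specific codons shown are hypothetical assumptions chosen so that the arithmetic lands on residue 519; the text does not state the porcine CDS start or the codons involved.

```python
# Sketch: 1-based CDS nucleotide position -> codon number and amino-acid
# change. CDS start offset and codons are hypothetical (see lead-in).

CODON_TABLE = {"ATT": "I", "ATC": "I", "ATA": "I",
               "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T"}

def aa_position(cds_pos):
    """1-based amino-acid index for a 1-based nucleotide position in the CDS."""
    return (cds_pos - 1) // 3 + 1

def apply_snp(codon, offset, alt_base):
    """Substitute one base (offset 0-2) in a codon and translate both alleles."""
    mutated = codon[:offset] + alt_base + codon[offset + 1:]
    return CODON_TABLE[codon], CODON_TABLE[mutated]

cds_pos = 1563 - 8 + 1               # = 1556, assuming the CDS starts at mRNA position 8
codon_number = aa_position(cds_pos)  # codon 519
base_in_codon = (cds_pos - 1) % 3    # second position of the codon
change = apply_snp("ATT", base_in_codon, "C")   # Ile -> Thr
```

A T-to-C substitution at the second codon position converts an isoleucine codon (ATT) into a threonine codon (ACT), consistent with the reported nonsynonymous change.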
Analysis of STK17b mRNA
The sequenced testicular mRNA of porcine STK17b [GenBank HM594868 ] contained exons 1-9 (based on human exon numbering, [GenBank NM_004226 ]) and was 1102 bp long. The translation start codon was identified at position 282 within exon 2. No change in the expression profile was identified, and the protein coding sequence was identical in the KA-affected and control boar.

Discussion
While the results of homozygosity mapping of the KAD in a previous study [ 15 ] were not statistically significant, they did indicate the most probable positions of the KAD-associated chromosomal segments. In this study we confirmed the association between the KAD and porcine chromosome 15. The initial genome screen with microsatellite markers S0004 and SW2608 showed increased homozygosity in KA-affected boars [ 15 ]. The genome scan with the PorcineSNP60 Genotyping BeadChip (Illumina) localized the KAD-associated region between these two markers on porcine chromosome 15. The PorcineSNP60 BeadChip showed a high call rate (95%) in the Finnish Yorkshire pig population, and only 15% of the SNPs were monomorphic. In this study we detected a KAD-associated region covering 2 Mbp, indicating that the marker map in the initial screen was not dense enough to detect the significant increase in homozygosity.
Within the associated region we identified and sequenced two candidate genes, Ubiquitin-protein ligase E3 ( HECW2 ) and serine/threonine kinase 17b ( STK17b ). The sequencing of these two genes revealed two SNPs within the HECW2 gene, but no polymorphisms were detected in the protein coding sequence of STK17b . Although the identified mutations appeared not to be causal for the KAD, HECW2 remains a good candidate gene for this defect considering its role in acrosome development and chromatin remodelling.
Protein ubiquitination is one of the fundamental regulatory post-translational modifications controlling intracellular signalling events. Ubiquitin-proteasome-dependent proteolysis plays an important role in selectively degrading and recycling proteins in many basic cellular processes, including spermatogenesis [ 26 ]. For degradation by the proteasome, binding of ubiquitin to substrate proteins requires the activity of the ubiquitin-activating enzyme E1, a ubiquitin-conjugating enzyme E2, and a substrate-specific ubiquitin ligase E3 [ 27 ]. Ubiquitin ligase E3, in combination with an E2 ubiquitin-conjugating enzyme, mediates the attachment of ubiquitin to a lysine residue on the target protein.
In spermatogenesis, ubiquitination is required for various processes, for example the replacement of the spermatid's nuclear histones with protamines during spermatid elongation [ 26 ]. In spermatozoa, proteasomes are located on the plasma membrane overlying the acrosome, in the acrosomal and postacrosomal regions, in the head-tail connecting piece, the middle piece of the tail, and in residual bodies [ 28 - 32 ]. The proteasome subunit Psmc3 and the ubiquitin protein ligase Rnf19a have been localized at the cytosolic side of the outer and inner membranes of the acrosome [ 33 ]. The co-immunoprecipitation and localization of Psmc3 and Rnf19a in spermiogenesis point to the participation of the ubiquitin-proteasome system in acrosome formation, spermatid head shaping, and development of the head-tail coupling apparatus and tail [ 33 ].
Malfunction of components of the ubiquitination system has been shown to be a cause of male infertility [ 27 , 34 - 36 ]. There appears to be a special requirement for certain components of the ubiquitin system during spermiogenesis in particular [ 37 ], and it is probable that different spermatogenic phases require different specialized activities of the ubiquitin system. Mutations in ubiquitination-related proteins may also specifically affect spermatogenesis through their testis-specific interacting partners [ 36 ]. A malfunction of ubiquitination may cause diverse phenotypes, as exemplified in the human and mouse by mutations of Hr6b and Usp14 [ 35 , 38 ].

Conclusions
In this study we demonstrate the exact KAD phenotype in mature sperm. In addition to the acrosome, the spermatid chromatin is also affected. We have located the homozygous region for the KAD within 0.5 Mbp on porcine chromosome 15, containing the two genes STK17b and HECW2 . The role of ubiquitination in chromatin remodelling and acrosome formation is consistent with HECW2 being involved in this defect. While the causal mutation for the KAD could not be identified, our results indicate that the observed phenotype may be caused by a malfunction in the ubiquitination system. Identification of the causal variation for the KAD requires further analysis of the genomic region containing the HECW2 gene.

Background
Male infertility is an increasing problem in all domestic species including man. Localization and identification of genes involved in defects causing male infertility provide valuable information of specific events in sperm development. Correct condensation of the sperm head and development of the acrosome are required for fertile sperm. In the Finnish Yorkshire pig population a knobbed acrosome defect (KAD) has been reported which appears to be of genetic origin. In previous studies we have shown that a large number of affected spermatozoa have a cystic swelling anterior to the apical part of the acrosome.
Results
Characterization of the knobbed acrosome affected sperm revealed that both the acrosomal granules and chromatin are affected. This type of KAD appears to be a previously unknown and serious form of the defect. A genome wide scan with PorcineSNP60 Genotyping BeadChip defined the KAD associated region within 0.7 Mbp on porcine chromosome 15. Two genes, STK17b and HECW2 , located within this region were sequenced. The expression of these genes appeared comparable in KA-affected and control boars. The known function of HECW2 in acrosome development highlighted this gene as a good candidate responsible for the KAD. One nonsynonymous SNP was identified within the HECW2 gene. However, as this mutation was found in homozygous state in individuals with normal sperm, this is not likely to be the causal mutation.
Conclusions
In this study we identified two candidate genes for a severe defect affecting both the sperm acrosome and chromatin that causes infertility. One of these genes, HECW2 , plays an important role in ubiquitination, a prerequisite for chromatin remodelling and acrosome formation, highlighting the involvement of this gene in the knobbed acrosome defect and male infertility.

Authors' contributions
AS carried out the molecular genetics studies, sequence alignments and drafted the manuscript. PU performed the statistical analysis and participated in drafting the manuscript. SN participated in the microscopical studies and contributed to drafting of the manuscript (microscopical studies). SP carried out the microscopical studies. MA participated in the design and coordination of the study. JV participated in the design and helped to draft the manuscript. All authors read and approved the final manuscript.
Supplementary Material | Acknowledgements
Funding for this study was provided by the Finnish Ministry of Agriculture and Forestry (Makera). The assistance of Tiina Jaakkola and Tarja Hovivuori in DNA extraction and Päivi Lahermo (Institute for Molecular Medicine Finland, FIMM) in genotyping with PorcineSNP60 Genotyping BeadChip (Illumina) is greatly appreciated. | CC BY | no | 2022-01-12 15:21:46 | BMC Genomics. 2010 Dec 9; 11:699 | oa_package/06/da/PMC3016419.tar.gz |
PMC3016420 | 21159186

Background
Rice ( Oryza sativa L.) is a staple food crop for about half of the world's population. In 2008, the total rice-harvested area and rough rice yield in the world were 155.7 million hectares and 661.8 million tons, respectively [International Rice Research Institute (IRRI) 2009]. However, the productivity of rice is severely affected by soil nitrogen deficiency worldwide. Commercially available urea fertilizer is the most widely used resource to meet a rice crop's nitrogen requirement, but about one third of it is lost through greenhouse gas emission and leaching, causing adverse environmental impacts [ 1 - 3 ]. To meet these challenges and develop environmentally sustainable rice production systems, much attention has been given to natural methods of biological nitrogen fixation (BNF) [ 4 , 5 ] or to increasing nitrogen use efficiency (NUE) [ 6 - 8 ].
The genus Oryza comprises 24 species, including 2 cultivated ( O. sativa and O. glaberrima ) and 22 wild species with diverse ecological adaptation. These species are categorized into 10 recognizable genome types (AA, BB, CC, EE, FF, GG, BBCC, CCDD, HHJJ and HHKK) [ 9 , 10 ]. Wild rice has diversified over 40 million years. Wild species are tremendous gene reservoirs for domesticated rice improvement, as they possess many desirable traits, such as resistance to diseases and insect pests or tolerance to different kinds of stresses [ 11 - 14 ]. Oryza longistaminata A. Chev. (2n = 24, AA), broadly distributed throughout tropical Africa, is a perennial species characterized by long anthers, self-incompatibility, allogamy, strong rhizomes and high biomass production on poor soils. In spite of its overall inferior appearance, O. longistaminata has furnished genes for developing perennial rice [ 15 , 16 ] and for breeding blight disease-resistant varieties [ 17 ]. To make better use of this potential, more genomic information is required, but there are only a few batches of mRNAs or full-length cDNAs (FLcDNAs) of O. longistaminata in public databases, and no genome sequence is available.
Sequencing and analysis of expressed sequence tags (ESTs) has become a primary strategy for functional genomic studies in plants, including novel gene discovery, gene expression profiling, microarray and molecular marker development, and accurate genome annotation. After completion of the full genome sequence of O. sativa ssp. japonica cv. Nipponbare and the draft genome sequence of O. sativa ssp. indica cv. 93-11, obtained through a map-based sequencing strategy and a whole-genome shotgun sequencing approach, respectively [ 18 , 19 ], much effort was invested in rice EST projects. Approximately 1,249,110 ESTs and >50,000 full-length cDNA sequences of cultivated rice are currently available in public databases. However, genomic studies of rice wild relatives are still in their infancy, with the exception of 5,211 leaf ESTs from O. minuta (BBCC genome) and 1,888 leaf FLcDNAs from O. rufipogon (AA genome) [ 20 , 21 ]. Roots, in particular, are organs underrepresented in EST studies.
Therefore, a comprehensive survey of ESTs in roots of O. longistaminata was undertaken to provide an overview of the O. longistaminata root transcriptome and thus a molecular basis for the identification of useful genes. As the newly developed massively parallel 454 pyrosequencing allows rapid generation of sequence data and deep sequencing coverage at reduced labour and cost [ 22 - 24 ], we here characterized the first global root transcriptome of the wild rice species O. longistaminata using 454 GS-FLX pyrosequencing technology. This led to the discovery of a large number of novel ESTs, which will facilitate gene mining and provide a basis for comparative studies within the genus Oryza .
Plant materials
The O. longistaminata accession IRGC 110404 (short name Xa21) was grown under nitrogen-limiting conditions in soil without nitrogen fertilizer in the phytotron in Bremen. The soil (from the Camargue) had a low percentage of total nitrogen (0.229%) and a high C/N ratio (25.5). The roots and leaves were harvested, snap-frozen in liquid nitrogen, and used for RNA and DNA isolation, respectively. The seeds of O. longistaminata collected from Namibia were surface-sterilized [ 28 ] and cultured gnotobiotically in plant medium [ 29 ] supplemented with agar (4 g per L).
RNA and DNA extraction, cDNA synthesis
The RNA was extracted by the CTAB method described by Chang et al. [ 30 ] from soil-grown roots and then purified using plant RNeasy columns (Qiagen, Hilden). The RNA from cultured seeds was isolated using TRIzol (Invitrogen) according to manufacturer's instructions. The quality of RNA was evaluated by a Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA). Genomic DNA was isolated by the CTAB method described by Allen et al. [ 31 ] from leaves. The concentration of DNA was determined spectrophotometrically and the quality of DNA was checked by agarose gel electrophoresis. cDNA was synthesized using the SMART PCR cDNA synthesis Kit (Clontech, Mountain View, CA). cDNA was purified by QIAquick spin columns (Qiagen, Hilden).
454 pyrosequencing, assembly and annotation
Synthesis of cDNA and normalization for pyrosequencing was carried out by MWG (Ebersberg, Germany) using RNA from roots of soil-grown plants without N-fertilizer. High quality polyA+ RNA was isolated from total RNA as template for first- and second-strand synthesis. By using a semirandom priming approach for both strands, an even shotgun-like distribution of cDNA fragments was achieved. The fragments were size-fractionated and normalised by denaturing and re-association. Approximately 10 μg of cDNAs were sheared by nebulisation and sequenced on a 454 GS-FLX pyrosequencing platform. A total of 337,830 raw reads were obtained. SeqClean software http://compbio.dfci.harvard.edu/tgi/software/ was applied to eliminate low quality sequences, poly A/T sequences, adaptor sequences. The cleaned sequences were subjected to the CAP3 program [ 32 ] for clustering and assembly with default parameters. All the consensus sequences were compared with NR database (GenBank). GO accessions were obtained via assignment of Arabidopsis gene identifiers with the strongest BLASTx alignments to the corresponding O. longistaminata ESTs. Comparison of the distribution of cellular component, biological processes or molecular function obtained using GO annotation was done using the GOSlim program http://www.geneontology.org .
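The read-cleaning step performed here by SeqClean (removal of adaptor and poly-A/T sequence, length filtering) can be illustrated with a much-simplified stand-in. The poly-A/T run length and the minimum-length cut-off below are arbitrary illustrative choices; only the 100 bp threshold is from the study.

```python
import re

# Greatly simplified read cleaning: strip terminal poly-A / poly-T stretches
# and discard reads that end up below a minimum length. A run of >= 8
# identical bases is treated as a tail (hypothetical threshold).

POLY_A = re.compile(r"A{8,}$")
POLY_T = re.compile(r"^T{8,}")

def clean_read(seq, min_len=100):
    """Return the trimmed read, or None if it is too short to keep."""
    seq = POLY_A.sub("", POLY_T.sub("", seq.upper()))
    return seq if len(seq) >= min_len else None

reads = ["TTTTTTTTTT" + "ACGT" * 30,   # leading poly-T, 120 bp core: kept
         "ACGT" * 10 + "A" * 12,       # 40 bp core after trimming: dropped
         "ACGT" * 50]                  # kept unchanged
kept = [r for r in (clean_read(r) for r in reads) if r is not None]
```

The real SeqClean tool additionally screens against vector and contaminant databases and trims low-quality ends, which this sketch omits.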
The sequences are available at http://www.gabipd.org/ under the accession Xa21_454, and at GenBank (dbEST acc. No. HS317469 - HS388835).
RT-PCR and Southern blot analyses
To validate the presence of novel ESTs detected by pyrosequencing in O. longistaminata , randomly selected sequences were used for expression analysis by RT-PCR (root RNA) and Southern blot (leaf DNA) analyses. About 100 ng of total RNA was used to synthesize first-strand cDNA with SuperScript II Reverse Transcriptase (Invitrogen, Carlsbad, CA) and Oligo(dT)12-18 primers. Specific primer pairs for cDNA amplification were designed with the Primer3 software [ 33 ] according to the EST sequences. PCR was performed in a 50 μL reaction volume containing 1 μL cDNA, 1× PCR buffer [10 mM Tris-HCl (pH 8.0), 1.5 mM MgCl2], 0.2 mM dNTPs, 0.2 μM of each primer, and 1.5 U Taq polymerase (MolTaq). The annealing temperature was 60°C for all primer pairs. After 5 min at 94°C, 35 cycles were carried out with 45 s at 94°C, 45 s at 60°C and 1 min at 72°C for extension, followed by a final step of 10 min at 72°C. The PCR products were purified and sequenced by the Sanger method (LGC Genomics, Germany). For Southern blot analysis, 5 μg of genomic DNA was digested with the restriction endonuclease HindIII and subjected to Southern blot analysis with digoxygenin-labeled probes according to the protocol described by Neuhaus-Url et al. [ 34 ].

Results and Discussion
Sequencing and assembly of 454 pyrosequencing ESTs
In order to obtain transcripts of genes that might be required for growth under nutrient stress, O. longistaminata plants were clonally propagated and were adapted to low-nitrogen conditions in unfertilized soil for several months. Mature plants with high biomass production (see Additional file 1 ) were subjected to RNA extraction from roots. As soil-grown roots often yield low quality RNA with inhibitory effects on enzyme activity (reverse transcription or PCR) [ 25 ], several RNA extraction methods were compared. A standard extraction protocol with Trizol yielded degraded RNA (not shown), while RNA extracted by a CTAB-based method was of high quality (Additional file 1 ).
Pooled RNA extracts from two extractions were used for normalization and sequencing of cDNAs. One GS-FLX 454 pyrosequencing run produced a total of 337,830 reads (87.3 Mb) with average sequence length of 258 bp (SD = 24, range = 60-925) from root cDNAs of O. longistaminata . After removal of adaptor sequences, polyA tail and low quality sequences, 337,471 reads remained with a total length of 66.7 Mb and an average length of 197 ± 61 bases, ranging from 20 bp to 393 bp (Additional file 2 ). Only sequences above 100 bp of length were further considered. Clustering and assembling of these sequences produced 43,423 contigs and 32,708 singletons. These data were trimmed again by removing those showing homology (E-value cutoff, e -5 ) to sequences of bacteria, fungi or metazoa, resulting in a total of 71,367 processed unique sequences. The length of contigs varied from 101 bp to 2082 bp with an average of 299 bp, and that of singlets ranged from 101 bp to 393 bp with an average of 215 bp (Additional file 2 ).
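The length summaries reported above (counts, ranges, means) can be computed from a set of assembled sequence lengths as sketched below. The contig lengths are invented toy values; N50 is included as a common companion statistic that the text itself does not report.

```python
# Summary statistics over a list of contig lengths (toy values).

def length_stats(lengths):
    """Count, minimum, maximum and mean length, mean rounded to one decimal."""
    n = len(lengths)
    return {"n": n, "min": min(lengths), "max": max(lengths),
            "mean": round(sum(lengths) / n, 1)}

def n50(lengths):
    """Length L such that contigs of length >= L cover half the assembly."""
    total = sum(lengths)
    running = 0
    for L in sorted(lengths, reverse=True):
        running += L
        if running * 2 >= total:
            return L

contig_lengths = [101, 150, 210, 299, 480, 2082]   # hypothetical contigs
stats = length_stats(contig_lengths)
```

In practice the lengths would be read from the CAP3 output FASTA rather than typed in.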
The majority of reads (95% of raw reads) were 201-300 bp in length, consistent with the 454 GS-FLX sequencing capacity. The size distribution of the O. longistaminata consensuses after assembly is shown in Table 1 , revealing that 92% of them fell between 100 and 500 bp in length. Although none of the singlets was longer than 500 bp in the whole EST dataset, there were 3,277 contigs longer than 500 bp (Table 1 ). In addition, 21,762 contigs (53%) were still shorter than 250 bp. This might be due to the short length of the sequencing reads and/or the low coverage of the transcriptome represented in this dataset. Most of the contigs were derived from few reads: 11,949 (29% of contigs) and 7,226 (17.5% of contigs) consensuses were derived from 2 and 3 reads, respectively (Additional file 3 ).
The sequence data obtained were in a similar range as for other plant EST sequencing projects using this technology [ 26 , 27 ], however with a slightly higher read length, demonstrating the power of this approach to deliver large EST datasets.
Mapping ESTs to the O. sativa genome and transcriptome revealed novel ESTs
Comparison of the unique EST sequences of O. longistaminata to chromosomal and expressed sequences of O. sativa revealed a large set of two types of novel ESTs: those previously not found to be expressed in rice, and those not even detected in the rice genome. The ESTs were aligned to genomic sequences of two O. sativa varieties, the japonica -type Nipponbare http://rgp.dna.affrc.go.jp/IRGSP/ and the indica -type 93-11 http://rice.genomics.org.cn/rice/index2.jsp , using the BLASTN program with an E-value cut-off of e -5 . In total, 60,155 (84.3%) of the 71,367 sequence tags were anchored in the rice genome. These sequences mapped to all 12 rice chromosomes (Table 2 ) with an almost equal distribution, with chromosomes 1, 2 and 3 harbouring the largest numbers of EST sites, together accounting for approximately 40% of the total of 60,155 EST sites. The distribution corresponds well to the size of the chromosomes, which highlights the close relationship between these two species. The anchored ESTs also had a relatively high sequence identity with O. sativa sequences, ranging from 78% to 100% with an average of 97%. The sequence identity distribution of the 60,155 O. longistaminata ESTs is shown in Figure 1 , revealing that 61% of them had a sequence similarity higher than 98%.
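Summaries like the per-chromosome anchoring counts and the identity distribution above are typically derived from BLAST tabular output. The sketch below parses fabricated `-outfmt 6`-style lines (query, subject, percent identity, ... as the first three columns) and tallies both statistics; the query IDs and values are invented.

```python
from collections import Counter

# Tally BLASTN tabular hits: hits per subject (chromosome) and a percent-
# identity histogram in 2% bins. Report lines are fabricated examples.

blast_lines = [
    "Xa21_0001\tchr01\t99.2\t250\t2\t0\t1\t250\t1200\t1449\t1e-120\t440",
    "Xa21_0002\tchr01\t97.5\t198\t5\t0\t1\t198\t5000\t5197\t3e-90\t330",
    "Xa21_0003\tchr15\t88.0\t150\t18\t0\t1\t150\t700\t849\t2e-40\t160",
]

per_chrom = Counter()
identity_bins = Counter()   # bin label 98 means identity in [98, 100)
for line in blast_lines:
    fields = line.split("\t")
    subject, pct_identity = fields[1], float(fields[2])
    per_chrom[subject] += 1
    identity_bins[int(pct_identity // 2) * 2] += 1
```

A real run would keep only the best hit per query before tallying, a filtering step omitted here.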
Among the 60,155 ESTs, 508 could only be aligned to the indica genome sequence and 439 only to the japonica genome sequence (see Additional file 4 ). Among these ESTs, 164 of the 508 and 247 of the 439 had high similarity (cut-off score above 100) to indica or japonica cDNA/EST sequences, respectively. They might be indica -specific or japonica -specific genes, or they may map to gaps in the rice genome sequences. The latter assumption is likely for part of these ESTs, as 74 of the 164 indica -specific genes mapped with the highest score to ESTs detected in japonica , and 22 of the 247 japonica -specific ones to indica ESTs. One of the 508 ESTs, Xa21_1574, was selected for further analysis. The Southern blot analysis was consistent with the BLAST results (Figure 2 ). Our findings indicate that O. longistaminata has comparable similarity to japonica and to indica rice at the DNA level.
In order to assess how many O. longistaminata ESTs had already been detected as expressed genes in O. sativa , the ESTs mapping onto the O. sativa genomes were also compared with the Knowledge-based Oryza Molecular Biological Encyclopedia (KOME, http://cdna01.dna.affrc.go.jp/cDNA/ ) cDNA collection, the indica cDNA database http://www.ncgr.ac.cn/ricd/ , and the NCBI rice EST database. Of these, 83.3% matched O. sativa genes previously found to be expressed.
A large number of ESTs (9,993 or 16.7%) had previously not been detected as expressed. For most of them, we did not find homologies to predicted gene models: inspection of the 30 longest ESTs showed that 67% shared sequence similarity with O. sativa sequences but not with predicted genes, 23% with genes of predicted functions, and 10% with genes encoding hypothetical proteins. This was also reflected in the lack of functional assignments (see below), as after in silico translation Gene Ontology (GO) terms could be assigned to only a small fraction (777) of these ESTs. This emphasizes the power of the next-generation sequencing approach to detect novel transcripts or even novel genes. As the O. sativa genome may still contain regions that are not fully annotated, our ESTs might indicate as yet unpredicted genes or UTRs that might be functional in O. sativa as well. On the other hand, O. longistaminata might express a special set of genes in comparison to O. sativa , due to the particular conditions (being adapted to low availability of external nitrogen sources) or due to interspecies differences in expression.
As another category of novel ESTs, in total 11,212 (15.7%) of the 71,367 unique EST sequences could not be mapped to the O. sativa chromosomes by homology search against genomic sequences. Among them, 250 matched publicly available O. sativa mRNAs or ESTs. The remaining 10,962 sequence tags showed no significant sequence identity (cut-off e -5 ) with any rice genomic or expressed sequences in public databases. Among these, only a very small number (740) had a significant hit in the NCBI non-redundant (NR) nucleotide database or EST database. The remaining 10,222 ESTs may therefore represent novel genetic material present in O. longistaminata and other root-residing eukaryotes.
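The bookkeeping behind these categories, whether an EST maps to the O. sativa genome, matches a public transcript, does both, or does neither, is plain set arithmetic, sketched below with invented EST identifiers.

```python
# Set-based categorisation of unique ESTs (IDs are invented):
# genome-mapped, transcript-matched, both, or fully novel.

all_ests = {"e1", "e2", "e3", "e4", "e5"}
maps_to_genome = {"e1", "e2", "e3"}
matches_transcript = {"e1", "e2", "e4"}

expressed_and_mapped = maps_to_genome & matches_transcript
genome_only = maps_to_genome - matches_transcript        # not previously seen expressed
transcript_only = matches_transcript - maps_to_genome    # expressed but unplaced on chromosomes
novel = all_ests - maps_to_genome - matches_transcript   # no hit at all
```

Applied to the real data, `genome_only` corresponds to the 9,993 previously unexpressed ESTs, `transcript_only` to the 250 unmapped-but-matching tags, and `novel` to the 10,962 tags with no rice hit.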
Functional classification of O. longistaminata ESTs
The consensus sequences were annotated for sequence similarities using BLASTX translated sequence comparison against the NCBI non-redundant (NR) protein database. Among the 71,367 contigs and singlets, 34,510 (48.4%) had at least one significant alignment to existing gene models in the NR database at an E-value cut-off of e -5 . A majority (51.6%) of the O. longistaminata sequences did not match any known protein sequences. Most of the 10,962 novel sequence tags (15.4%) fell into this category. This can partly be attributed to the short length of most of these uni-ESTs, or a large fraction of the ESTs might represent untranslated regions. Mapping those uni-ESTs to rice gene models ( https://www.gabipd.org/database/cgi-bin/GreenCards.pl.cgi ) supported this assumption.
The unique ESTs were further classified into Molecular Function, Biological Process and Cellular Component categories according to the standard Gene Ontology terms (GO; http://www.geneontology.org ). GO numbers could be assigned to only 25,448 O. longistaminata sequence tags; however, a broad range of GO categories was covered. The percentage distribution of GO terms is shown in Figure 3 . A total of 20,935 sequences could be assigned to Molecular Function. Among them, nucleotide binding (31.6%), binding (general, including small molecules; 31.3%) and catalytic activity (21%) were the most dominant categories. Within the category of Biological Process, containing 16,036 ESTs, cellular processes (24%) were the most highly represented, followed by protein modification processes (18.9%), metabolic processes (17%), transport (15.9%) and biosynthetic processes (12.5%). Under the category of Cellular Component, 31.9% of 13,492 ESTs were predicted as membrane proteins (a general category including different organelles), followed by plastid (16.2%), nucleus (15.8%) and plasma membrane (15.3%) proteins.
Biological validation of novel ESTs
Approximately 15.4% of the unique EST sequences detected in the O. longistaminata root transcriptome are currently not similar to rice sequences in databases. These may represent novel genes of O. longistaminata not present in O. sativa ; alternatively, there may be gaps in the cultivated rice genome sequences, or a small portion of the unmapped sequences might result from contamination by non-rice sources. A total of 14 novel ESTs were randomly selected for RT-PCR to determine the proportion of potential novel genes originating from O. longistaminata rather than from other organisms in our transcript collection. RT-PCR experiments were conducted on RNAs derived from root tissue of clonally propagated O. longistaminata plants grown in soil in the phytotron. Of the 13 primer pairs used for PCR, 10 generated RT-PCR products that were of the expected size and whose sequences were confirmed by Sanger sequencing. The results demonstrated that these 10 novel transcripts detected among the 454-ESTs are indeed expressed in O. longistaminata roots grown in soil (Figure 2A ). Among another set of primer pairs for 19 additional ESTs, six yielded a positive result (Additional file 5 ). However, as conditions for PCR amplification cannot be optimized due to the lack of intron-free template, these results may be an underestimation. To test for distribution of the putatively expressed genes among different accessions of the same species, O. longistaminata grains collected in the Okavango region of Namibia were used for gnotobiotic cultivation of seedlings in the phytotron and pooled for analysis. From root RNA extracts, 5 of the 10 primer pairs yielded RT-PCR products of the correct size whose sequences were again validated by Sanger sequencing. This confirmed that these fragments indeed originated from this species and not e.g. from root endophytes, and that their expression is conserved within the species.
To control occurrence in the genome, Southern blot analysis was carried out from genomic DNA extracted from leaves of O. longistaminata accession IRGC 110404, and from O. sativa . Probes generated from two out of the 10 ESTs detected hybridizing fragments for wild but not for cultivated rice (Figure 2B ). The results indicated that these 2 ESTs are indeed O. longistaminata -specific sequences. Based on these results, we estimated that a large subset of novel sequences was derived from O. longistaminata . The remaining novel EST sequences might be due to the contamination from other sources or due to the 454 sequencing artefacts. | Results and Discussion
Sequencing and assembly of 454 pyrosequencing ESTs
In order to obtain transcripts of genes that might be required for growth under nutrient stress, O. longistaminata plants were clonally propagated and were adapted to low-nitrogen conditions in unfertilized soil for several months. Mature plants with high biomass production (see Additional file 1 ) were subjected to RNA extraction from roots. As soil-grown roots often yield low quality RNA with inhibitory effects on enzyme activity (reverse transcription or PCR) [ 25 ], several RNA extraction methods were compared. A standard extraction protocol with Trizol yielded degraded RNA (not shown), while RNA extracted by a CTAB-based method was of high quality (Additional file 1 ).
Pooled RNA extracts from two extractions were used for normalization and sequencing of cDNAs. One GS-FLX 454 pyrosequencing run produced a total of 337,830 reads (87.3 Mb) with an average sequence length of 258 bp (SD = 24, range = 60-925) from root cDNAs of O. longistaminata . After removal of adaptor sequences, polyA tails and low-quality sequences, 337,471 reads remained with a total length of 66.7 Mb and an average length of 197 ± 61 bases, ranging from 20 bp to 393 bp (Additional file 2 ). Only sequences longer than 100 bp were considered further. Clustering and assembly of these sequences produced 43,423 contigs and 32,708 singletons. These data were trimmed again by removing those showing homology (E-value cutoff, e -5 ) to sequences of bacteria, fungi or metazoa, resulting in a total of 71,367 processed unique sequences. The length of contigs varied from 101 bp to 2082 bp with an average of 299 bp, and that of singletons ranged from 101 bp to 393 bp with an average of 215 bp (Additional file 2 ).
The majority of reads (95% of raw reads) were 201-300 bp in length, consistent with the 454 GS-FLX sequencing capacity. The size distribution of the O. longistaminata consensus sequences after assembly is shown in Table 1 , revealing that 92% of them fell between 100 and 500 bp in length. Although no singleton was longer than 500 bp in the whole EST dataset, there were 3,277 contigs with sequence lengths above 500 bp (Table 1 ). In addition, 21,762 contigs (53%) were still shorter than 250 bp. This might be due to the short sequencing read length and/or the low coverage of the transcriptome represented in this dataset. Most of the contigs were derived from only a few reads: 11,949 (29% of contigs) and 7,226 (17.5% of contigs) consensus sequences were derived from 2 and 3 reads, respectively (Additional file 3 ).
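Assembly summaries of this kind (contig count, total and mean length, and an N50-style statistic) can be derived directly from the assembled sequence lengths. A minimal illustrative sketch follows; the helper name and the toy length list are hypothetical and stand in for the actual contig set:

```python
def length_stats(lengths):
    """Summarize an assembly from its contig lengths: count, total,
    mean length, and N50 (the length at which contigs of that size or
    longer contain at least half of the total assembled bases)."""
    total = sum(lengths)
    n50 = None
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            n50 = length
            break
    return {"count": len(lengths), "total": total,
            "mean": total / len(lengths), "n50": n50}

# Hypothetical toy lengths, not the actual O. longistaminata contigs.
stats = length_stats([100, 150, 200, 250, 500, 800])
```

Sorting descending and accumulating until half the total is reached is the standard way to obtain N50 without building the full length histogram first.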
The sequence data obtained were in a similar range to those of other plant EST sequencing projects using this technology [ 26 , 27 ], albeit with a slightly higher read length, demonstrating the power of this approach to deliver large EST datasets.
Mapping ESTs to the O. sativa genome and transcriptome revealed novel ESTs
Comparison of the unique EST sequences of O. longistaminata to chromosomal and expressed sequences of O. sativa revealed a large set of two types of novel ESTs: those previously not found to be expressed in rice, and those not even detected in the rice genome. The ESTs were aligned to genomic sequences of two O. sativa varieties, the japonica -type Nipponbare http://rgp.dna.affrc.go.jp/IRGSP/ and the indica -type 93-11 http://rice.genomics.org.cn/rice/index2.jsp , using the BLASTN program with an E-value cut-off of e -5 . In total, 60,155 (84.3%) of the 71,367 sequence tags were anchored in the rice genome. These sequences mapped to all 12 rice chromosomes (Table 2 ) with an almost equal distribution, with chromosomes 1, 2 and 3 harbouring the largest numbers of EST sites, together accounting for approximately 40% of the total of 60,155 EST sites. The distribution corresponds well to the sizes of the chromosomes, which highlights the close relationship between these two species. The anchored ESTs also had relatively high sequence identity with O. sativa sequences, ranging from 78% to 100% with an average of 97%. The sequence identity distribution of the 60,155 O. longistaminata ESTs is shown in Figure 1 , revealing that 61% of them had a sequence identity higher than 98%.
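An identity distribution such as the one in Figure 1 can be derived from tabular BLASTN output. The sketch below assumes tab-separated hits in the style of BLAST's `-outfmt 6` (third column = percent identity, best hit listed first per query); the toy hit lines are invented for illustration:

```python
def identity_histogram(blast_lines, bins=(78, 90, 95, 98, 100)):
    """Bin best-hit percent-identity values from BLAST tabular output.
    Only the first (best) hit per query is kept; each bin counts hits
    with identity up to and including that threshold, covering the
    78-100% range reported in the text."""
    best = {}
    for line in blast_lines:
        fields = line.rstrip("\n").split("\t")
        query, pct = fields[0], float(fields[2])
        best.setdefault(query, pct)  # first occurrence = best hit
    counts = {b: 0 for b in bins}
    for pct in best.values():
        for b in bins:
            if pct <= b:
                counts[b] += 1
                break
    return counts

# Hypothetical toy hits: query, subject, percent identity
toy_hits = ["q1\tchr1\t99.0", "q1\tchr2\t80.0", "q2\tchr3\t85.0"]
hist = identity_histogram(toy_hits)
```

`setdefault` keeps only the first hit per query, which matches the convention that BLAST lists the highest-scoring alignment first.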
Among the 60,155 ESTs, 508 could only be aligned to indica genome sequences and 439 could only be matched to japonica genome sequences (see Additional file 4 ). Among these ESTs, 164 out of 508 and 247 out of 439 had high similarity (cut-off score above 100) to indica or japonica cDNA/EST sequences, respectively. They might be indica -specific or japonica -specific genes, or they may map to gaps in the rice genome sequences. The latter assumption is likely for part of these ESTs, as 74 of the 164 indica -specific genes mapped with highest score to ESTs detected in japonica , and 22 of the 247 japonica -specific ones to indica ESTs. One of the 508 ESTs, Xa21_1574, was selected for further analysis. The Southern blot analysis was consistent with the BLAST results (Figure 2 ). Our findings indicated that O. longistaminata had parallel similarity to japonica and to indica rice at the DNA level.
In order to assess how many O. longistaminata ESTs had already been detected as expressed genes in O. sativa , the ESTs mapping onto the O. sativa genomes were also compared with the Knowledge-based Oryza Molecular Biological Encyclopedia (KOME, http://cdna01.dna.affrc.go.jp/cDNA/ ) cDNA collection, the indica cDNA database http://www.ncgr.ac.cn/ricd/ , and the NCBI rice EST database. Of these, 83.3% matched O. sativa genes previously found to be expressed.
A large number of ESTs (9,993 or 16.7%) had previously not been detected as expressed. For most of them, we did not find homologies to predicted gene models: inspection of the 30 longest ESTs showed that 67% shared sequence similarity with O. sativa but not with predicted genes, 23% with genes of predicted functions, and 10% with genes encoding hypothetical proteins. This was also reflected in the lack of functional assignments (see below), as after in silico translation only a small fraction (777) of these ESTs could be assigned Gene Ontology (GO) terms. This emphasizes the power of the next generation sequencing approach to detect novel transcripts or even novel genes. As the O. sativa genome may still contain regions that are not fully annotated, our ESTs might indicate as yet unpredicted genes or UTRs that might be functional in O. sativa as well. On the other hand, O. longistaminata might express a special set of genes in comparison to O. sativa , due to the particular conditions - being adapted to low availability of external nitrogen sources - or due to interspecies differences in expression.
As another category of novel ESTs, in total, 11,212 (15.7%) of the 71,367 unique EST sequences could not be mapped to the O. sativa chromosomes by homology search against genomic sequences. Among them, 250 matched publicly available O. sativa mRNAs or ESTs. The remaining 10,962 sequence tags showed no significant sequence identity (cut-off e -5 ) with any rice genomic or expressed sequences in public databases. Among these, only a very small number (740) had a significant hit in the NCBI non-redundant (NR) nucleotide database or EST database. The remaining 10,222 ESTs may therefore represent novel genetic material present in O. longistaminata or other root-residing eukaryotes.
Functional classification of O. longistaminata ESTs
The consensus sequences were annotated for sequence similarities using BLASTX translated sequence comparison against the NCBI non-redundant (NR) protein database. Among the 71,367 contigs and singlets, 34,510 (48.4%) had at least one significant alignment to existing gene models in the NR database at an E-value cut-off of e -5 . A majority (51.6%) of the O. longistaminata sequences did not match any known protein sequences. Most of the 10,962 novel sequence tags (15.4%) fell into this category. This can partly be attributed to the short length of most of these uni-ESTs, or a large fraction of the ESTs might represent untranslated regions. Mapping those uni-ESTs to rice gene models ( https://www.gabipd.org/database/cgi-bin/GreenCards.pl.cgi ) supported this assumption.
The unique ESTs were further classified into Molecular Function, Biological Process and Cellular Component categories, according to the standard Gene Ontology terms (GO; http://www.geneontology.org ). GO numbers could be assigned to only 25,448 O. longistaminata sequence tags; however, a broad range of GO categories was covered: the percentage distribution of GO terms is shown in Figure 3 . A total of 20,935 sequences could be assigned to Molecular Function. Among them, nucleotide binding (31.6%), binding (general, including small molecules) (31.3%) and catalytic activity (21%) were the most dominant categories. With regard to the category of Biological Process, containing 16,036 ESTs, cellular processes (24%) were the most highly represented category, followed by protein modification processes (18.9%), metabolic processes (17%), transport (15.9%) and biosynthetic processes (12.5%). Under the category of Cellular Component, 31.9% of 13,492 ESTs were predicted as membrane proteins (a general category including different organelles), followed by plastid (16.2%), nucleus (15.8%) and plasma membrane (15.3%) proteins.
Biological Validation of Novel ESTs
Approximately 15.4% of the unique EST sequences detected in the O. longistaminata root transcriptome are currently not similar to rice sequences in databases. These may represent novel genes of O. longistaminata not present in O. sativa ; alternatively, gaps may exist in the cultivated rice genome sequences, or a small portion of the unmapped sequences might have resulted from contamination by non-rice sources. A total of 14 novel ESTs were randomly selected for RT-PCR to determine the portion of potential novel genes in our transcript collection originating from O. longistaminata rather than from other organisms. RT-PCR experiments were conducted on RNAs derived from root tissue of clonally propagated O. longistaminata plants grown in soil in the phytotron. Of the 13 primer pairs used for PCR, 10 generated RT-PCR products that were of the expected size and whose sequences were confirmed by Sanger sequencing. The results demonstrated that these 10 novel transcripts detected among the 454-ESTs are indeed expressed in O. longistaminata roots grown in soil (Figure 2A ). Among another set of primer pairs for 19 additional ESTs, six yielded a positive result (Additional file 5 ). However, as conditions for PCR amplification cannot be optimized due to the lack of intron-free template, these results may be an underestimation. To test for the distribution of the putatively expressed genes among different accessions of the same species, O. longistaminata grains collected in the Okavango region of Namibia were used for gnotobiotic cultivation of seedlings in the phytotron, which were pooled for analysis. From root RNA extracts, 5 of the 10 primer pairs yielded RT-PCR products of the correct size whose sequences were again validated by Sanger sequencing. This confirmed that these fragments indeed originated from this species and not e.g . from root endophytes, and that their expression is conserved within the species.
To verify occurrence in the genome, Southern blot analysis was carried out on genomic DNA extracted from leaves of O. longistaminata accession IRGC 110404 and from O. sativa . Probes generated from two of the 10 ESTs detected hybridizing fragments for wild but not for cultivated rice (Figure 2B ). These results indicated that the 2 ESTs are indeed O. longistaminata -specific sequences. Based on these results, we estimated that a large subset of the novel sequences was derived from O. longistaminata . The remaining novel EST sequences might be due to contamination from other sources or to 454 sequencing artefacts. | Conclusions
In this study, we present a large-scale EST dataset comprising 71,367 unique EST sequences derived from the wild rice O. longistaminata by massively parallel pyrosequencing. Among them, 34,510 ESTs matched known gene models, and 25,448 ESTs were annotated with GO terms. The comparative analysis between wild rice and the two domesticated rice subspecies indicated that O. longistaminata has parallel similarity to japonica and to indica rice. Notably, a large number of ESTs derived from O. longistaminata roots have not yet been detected as expressed in O. sativa , or did not show similarity to publicly available rice sequences or any other genes. Our data contribute to future annotation of the O. longistaminata genome, to the identification of O. longistaminata -specific genes and to comparative studies of evolution within the genus Oryza . These novel ESTs will in particular provide a basis for further identification of genes of O. longistaminata underlying adaptation to nutrient-limiting conditions. All ESTs obtained in this study are provided in the supplemental data (Additional file 6 ). | Background
Oryza longistaminata , an AA genome type (2 n = 24), originates from Africa and is closely related to Asian cultivated rice ( O. sativa L .). It harbours various valuable traits, including tolerance to biotic and abiotic stress, QTLs for agronomically important traits and high nitrogen use efficiency (NUE). However, only limited genomic or transcriptomic data for O. longistaminata are currently available.
Results
In this study we present the first comprehensive characterization of the O. longistaminata root transcriptome using 454 pyrosequencing. One sequencing run using a normalized cDNA library from O. longistaminata roots adapted to low-nitrogen conditions generated 337,830 reads, which assembled into 41,189 contigs and 30,178 singletons. By similarity search against protein databases, putative functions were assigned to over 34,510 uni-ESTs. Comparison with EST collections derived from cultivated rice revealed genes expressed in common across the species; however, 16.7% of the O. longistaminata ESTs had not been detected as expressed in O. sativa . Additionally, 15.7% had no significant similarity to known sequences. RT-PCR and Southern blot analyses confirmed the expression of selected novel transcripts in O. longistaminata .
Conclusion
Our results show that one run using a Genome Sequencer FLX from 454 Life Sciences/Roche generates sufficient genomic information for adequate de novo assembly of a large number of transcripts in a wild rice species, O. longistaminata . The generated sequence data are publicly available and will facilitate gene discovery in O. longistaminata and rice functional genomic studies. The large number of novel ESTs suggests different metabolic activity in O. longistaminata roots in comparison to O. sativa roots.
HY carried out O. longistaminata root RNA isolation, RT-PCR and participated in sequence analyses, and drafted the manuscript; LH carried out Southern analysis and participated in sequence analyses; TH prepared the plant materials used in this study and co-designed the experiment; BR designed the experiment and assisted in the manuscript preparation. All authors read and approved the final manuscript.
Supplementary Material | Acknowledgements
This work was funded by a grant awarded by the BMBF (Bundesministerium für Bildung und Forschung) in the framework of GABI-FUTURE (no. 315068) to B. R.-H. and T. H. Grains from O. longistaminata were collected under the Research/Collection permit 1358/2009 and Export Permit 74439 by the Ministry of Environment and Tourism, Namibia. | CC BY | no | 2022-01-12 15:21:46 | BMC Genomics. 2010 Dec 15; 11:705 | oa_package/ba/31/PMC3016420.tar.gz |
PMC3016421 | 21182800 | Background
Sweetpotato ( Ipomoea batatas ) is a hexaploid (2n = 6x = 90) dicot and belongs to the family Convolvulaceae . It is one of the world's important food crops, especially in developing countries. The tuberous roots of sweetpotato are usually used as staple food, animal feed, industrial material or raw material for alcohol production. According to Food and Agriculture Organization (FAO) statistics, the world production of sweetpotato in 2008 was more than 110 million tons, and the majority came from China, with a production of around 85 million tons from about 3.7 million hectares [ 1 ]. Due to the high sink potential of the tuberous root, sweetpotato has one of the highest dry matter productivity rates among crops [ 2 , 3 ]. In addition to its agricultural importance, the sweetpotato tuberous root, involved in carbohydrate storage and vegetative propagation, is a unique organ that is also of value for biological research on organogenesis and evolution. Therefore, understanding the processes regulating root formation and development is of particular importance. During the last decade, a large number of transcriptomic and genomic sequences became available for model organisms, such as Arabidopsis , Antirrhinum and rice, which have greatly improved the understanding of the complexity of growth and development in higher plants. For sweetpotato, a total of 22,731 EST sequences have been deposited in the GenBank database (as of June 2010). After trimming and assembly, only 3,407 contigs and 4,856 singletons were obtained (unpublished data). However, tuberous root formation and development in sweetpotato are complex biological processes involving morphogenesis as well as dry matter accumulation.
The publicly available data are not sufficient for elucidating the molecular mechanisms controlling the traits of interest; moreover, with traditional methods, sequencing of randomly selected cDNA clones from various tissues often gives insufficient coverage of less-abundant transcripts, which usually play irreplaceable functions. In addition, to date, only about 300 SSR markers have been developed for sweetpotato [ 4 - 6 ]. EST collections will also facilitate the development of molecular markers for further genetic research in this and related species. Therefore, extensive genomic and transcriptomic sequence data are needed for sweetpotato, which can be used to discover new genes related to tuberous root formation and development, and can also make it possible to construct high-density microarrays for further characterization of gene expression profiles during these processes.
However, given that cultivated sweetpotato is a hexaploid outbreeding species with a large genome (2, 205 Mb) [ 7 ] and a high degree of heterozygosity, the prohibitive costs associated with sequencing and assembling such a large and complex genome make it infeasible to consider whole genome sequencing in the near future. Fortunately, transcriptome sequencing is an attractive alternative to the whole genome sequencing. It is well known that the majority of most eukaryotic genomes are composed of non-coding DNA, and transcribed sequences excluding introns contain a high content of functional information [ 8 ]. Furthermore, large collections of ESTs have proven invaluable for functional genomics and molecular marker development [ 9 - 13 ]. Currently, however, traditional sequencing methods for the generation of ESTs require costly and time-consuming approaches involving cDNA library construction, cloning, and labor intensive Sanger sequencing. The newly developed high throughput sequencing technology, i.e. Next Generation Sequencing (NGS), including the Roche/454 Genome Sequencer FLX Instrument, the ABI SOLiD System, and the Illumina Genome Analyser, is a powerful and cost-efficient tool for advanced research in many areas, including re-sequencing, microRNA expression profiling, DNA methylation, especially de novo transcriptome sequencing for non-model organisms [ 10 , 14 - 24 ]. Over the past several years, NGS has greatly accelerated our understanding of the complexity of gene expression, regulation and networks in model and non-model organisms. Though the transcriptome sequencing for non-model organisms using NGS was almost confined to 454 pyrosequencing due to its longer read length compared with the other two platforms [ 10 , 20 , 25 ], it is noteworthy that a draft genome sequence for the giant panda has been generated and assembled successfully using only Illumina Genome Analyser sequencing technology [ 26 ]. 
Recently, the whitefly transcriptome was also characterized using this short-read sequencing platform [ 27 ].
In the present study, we utilized Illumina paired-end sequencing technology to characterize the root transcriptome of sweetpotato and to develop EST-derived SSR markers. Non-normalized cDNA collections from different types of roots were used to generate a broad survey of genes associated with tuberous root formation and development. To the best of our knowledge, this study is the first exploration to characterize the root transcriptome of sweetpotato through the analysis of large-scale transcript sequences resulting from Illumina paired-end sequencing. In addition to offering valuable sequence resource to sweet potato community, our objective was also to provide an efficient, inexpensive and reliable approach for transcriptome sequencing that can be readily adopted by researchers studying non-model organisms. | Methods
Plant material and RNA extraction
Sweetpotato cultivar "Guangshu 87" was grown in the experimental station of the Crops Research Institute, Guangdong Academy of Agricultural Sciences, Guangzhou, China. Samples were collected from fibrous roots (diameter <0.5 cm), pencil roots (diameter 0.5-1.2 cm) and tuberous roots at three developmental stages of growth: initial tuberous root (diameter 0.5-1.0 cm), swelling tuberous root (diameter 3.0-3.5 cm) and mature tuberous root (diameter >5.0 cm). The sampled tissues were immediately frozen in liquid nitrogen and stored at -80°C until use.
For Illumina sequencing, the total RNA of each sample was isolated using a CTAB-based protocol and further purified with the RNeasy Plant Mini Kit (Qiagen, Valencia, CA). RNA quality was verified using a 2100 Bioanalyzer RNA Nanochip (Agilent, Santa Clara, CA), and all five samples had RNA Integrity Number (RIN) values greater than 8.5. RNA was then quantified using a NanoDrop ND-1000 Spectrophotometer (NanoDrop, Wilmington, DE). A total of 20 μg of RNA, pooled equally from the five tissues, was used for cDNA library preparation.
cDNA library construction and sequencing
Illumina sequencing using the GAII platform was performed at Beijing Genomics Institute (BGI)-Shenzhen, Shenzhen, China http://www.genomics.cn/index.php according to the manufacturer's instructions (Illumina, San Diego, CA). Briefly, poly (A) RNA was isolated from 20 μg of total RNA using Sera-mag Magnetic Oligo (dT) Beads (Illumina). To avoid priming bias when synthesizing cDNA, the purified mRNA was first fragmented into small pieces (100-400 bp) using divalent cations at 94°C for exactly 5 minutes. Then the double-stranded cDNA was synthesized using the SuperScript Double-Stranded cDNA Synthesis kit (Invitrogen, Camarillo, CA) with random hexamer (N6) primers (Illumina). The synthesized cDNA was subjected to end-repair and phosphorylation using T4 DNA polymerase, Klenow DNA polymerase and T4 PNK. These repaired cDNA fragments were 3' adenylated using Klenow Exo- (3' to 5' exo minus, Illumina). Illumina paired-end adapters were ligated to the ends of these 3'-adenylated cDNA fragments. To select a size range of templates for downstream enrichment, the products of the ligation reaction were purified on a 2% TAE-agarose gel (Certified Low-Range Ultra Agarose, Biorad). A range of cDNA fragments (200 ± 25 bp) was excised from the gel. Fifteen rounds of PCR amplification were performed to enrich the purified cDNA template using PCR Primer PE 1.0 and PCR Primer PE 2.0 (Illumina) with Phusion DNA Polymerase. The cDNA library was constructed with a fragment length range of 200 bp (±25 bp). Finally, after validation on an Agilent Technologies 2100 Bioanalyzer using the Agilent DNA 1000 chip kit, the cDNA library was sequenced on a PE flow cell using an Illumina Genome Analyzer IIx, and the workflow was as follows: template hybridization, isothermal amplification, linearization, blocking, sequencing primer hybridization, and sequencing on the sequencer for Read 1.
After completion of the first read, the templates were regenerated in situ to enable a second 75 bp read from the opposite end of the fragments, i.e., the newly sequenced strands were stripped off and the complementary strands were bridge amplified to form clusters. Once the original templates were cleaved and removed, the reverse strands underwent sequencing-by-synthesis, producing 59,233,468 sequencing reads of 75-mer length. The sequencing data were deposited in the NCBI Sequence Read Archive (SRA, http://www.ncbi.nlm.nih.gov/Traces/sra ) [ 68 ] under accession number SRA022988.
Data filtering and de novo assembly
The quality requirement for de novo transcriptome sequencing is far higher than that for re-sequencing, because sequencing errors can create difficulties for the short-read assembly algorithm. We therefore carried out a stringent filtering process. Firstly, we removed reads that did not pass the built-in Illumina Failed-Chastity filter, according to the relation "failed-chastity < = 1" with a chastity threshold of 0.6 on the first 25 cycles. Secondly, we discarded all reads with adaptor contamination. Thirdly, we ruled out low-quality reads containing ambiguous "N" bases. Finally, reads with more than 10% Q < 20 bases were also removed.
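The sequence-level filters above (adaptor contamination, ambiguous bases, and the Q20 fraction) can be expressed as a single predicate per read. The sketch below is a minimal illustration of those three criteria only; the chastity filter runs upstream in the Illumina pipeline and is not modelled, and the adaptor string is a placeholder, not the exact sequence used in the study:

```python
def passes_filters(seq, quals, adaptor="AGATCGGAAGAGC",
                   q_cutoff=20, max_lowq_frac=0.10):
    """Return True if a read survives the three filters described in
    the text: no adaptor contamination, no ambiguous 'N' bases, and
    at most 10% of bases with quality below Q20."""
    if adaptor in seq:          # filter 2: adaptor contamination
        return False
    if "N" in seq:              # filter 3: ambiguous bases
        return False
    low_q = sum(1 for q in quals if q < q_cutoff)
    return low_q / len(quals) <= max_lowq_frac  # filter 4: Q20 fraction

# Illustrative reads with per-base quality scores
ok = passes_filters("ACGT" * 10, [30] * 40)
bad_n = passes_filters("ACGTNACGTA", [30] * 10)
bad_q = passes_filters("ACGTACGTAC", [30] * 8 + [10, 10])  # 20% below Q20
```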
De novo assembly was carried out using SOAPdenovo http://soap.genomics.org.cn/soapdenovo.html with the default settings except the K-mer value [ 16 ]. After assessing different K-mer sizes, 29-mer yielded the best assembly for the desired application and was chosen to construct the de Bruijn graph. Although this higher value reduced the number of assembled contigs, it increased their reliability and length. The contigs without Ns were obtained by conjoining the K-mers in an unambiguous path. Then the reads were mapped back to contigs for constructing scaffolds with the paired-end information. SOAPdenovo connected the contigs using Ns to represent unknown sequences between each pair of contigs, and thus scaffolds were made. Paired-end reads were used again for gap filling of scaffolds to obtain sequences with the fewest Ns that could not be extended on either end. Such sequences were defined as unigenes. To evaluate the depth of coverage, all usable reads were realigned to the unigenes using SOAPaligner (Release 2.20, 08-13-2009) [ 28 ] with the default settings except the following changes: -m 0 -x 1000 -s 40 -l 35 -v 2. Detailed options can be found at http://soap.genomics.org.cn/soapaligner.html .
Finally, BLASTx alignment ( E value <10 -5 ) between unigenes and protein databases such as the NCBI non-redundant protein (Nr) database http://www.ncbi.nlm.nih.gov , the Swiss-Prot protein database http://www.expasy.ch/sprot , the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database http://www.genome.jp/kegg , and the Cluster of Orthologous Groups (COG) database http://www.ncbi.nlm.nih.gov/COG was performed, and the best alignment results were used to decide the sequence direction of unigenes. If the results of different databases conflicted with each other, a priority order of Nr, Swiss-Prot, KEGG and COG was followed when deciding the sequence direction of unigenes. When a unigene could not be aligned to any of the above databases, the software ESTScan [ 69 ] was used to predict its coding regions as well as to decide its sequence direction.
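The conflict-resolution rule for unigene orientation reduces to a fixed priority lookup. A minimal sketch follows, where the per-database orientation labels ('+'/'-') and the dictionary input format are illustrative assumptions, not part of the published pipeline:

```python
# Priority order stated in the text: Nr > Swiss-Prot > KEGG > COG
PRIORITY = ["Nr", "SwissProt", "KEGG", "COG"]

def decide_orientation(hits):
    """Pick a unigene's sequence direction from per-database BLASTx
    orientations, resolving conflicts by the stated priority order.
    Returns None when no database produced a hit, i.e. the case the
    text hands over to ESTScan."""
    for db in PRIORITY:
        if db in hits:
            return hits[db]
    return None

# KEGG and Swiss-Prot disagree: Swiss-Prot outranks KEGG
o1 = decide_orientation({"KEGG": "+", "SwissProt": "-"})
o2 = decide_orientation({"COG": "+"})
o3 = decide_orientation({})
```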
Gene annotation and analysis
To assess the quality of the de novo assembly through comparative genome analysis, a similarity search against the A. thaliana gene index was conducted using the BLASTN algorithm with an E value of less than 10 -5 . The A. thaliana gene index (version 15) was downloaded from the TIGR gene indices (currently curated at Harvard University, http://compbio.dfci.harvard.edu/tgi/ ). The BLAST results were parsed by a Perl script written using the BioPerl module SearchIO.pm.
For further annotation of unigenes using various bioinformatics approaches, the unigenes were first searched against the Nr database and the Swiss-Prot protein database using local BLASTx with an E value cutoff of 10 -5 . To estimate the number of annotated unigenes that matched unique genes in the two databases, these files were then filtered for duplicate protein accessions. With the Nr annotation, the Blast2GO program [ 42 ] was used to obtain GO annotations according to the molecular function, biological process and cellular component ontologies http://www.geneontology.org . The unigene sequences were also aligned to the COG database to predict and classify possible functions. Pathway assignments were carried out according to the Kyoto Encyclopedia of Genes and Genomes pathway database [ 70 ], also using BLASTx with an E value threshold of 10 -5 .
Development of cDNA-derived SSR markers
A Perl script known as MIcroSAtellite (MISA, http://pgrc.ipk-gatersleben.de/misa/ ) was used to identify microsatellites in the unigenes. In this study, cDNA-based SSRs were considered to contain motifs of two to six nucleotides in size and a minimum of 4 contiguous repeat units. The frequency of cSSRs refers to the number of kilobase pairs of cDNA sequence containing one SSR. Primer Premier 6.0 (PREMIER Biosoft International, Palo Alto, CA) was used to design PCR primers in the flanking regions of the SSRs. Primers were designed based on the following criteria: (1) primer length of at least 18 bp; (2) melting temperature between 46°C and 55°C, with a maximum discrepancy of 4°C among primers; (3) PCR product size ranging from 100 to 350 bp. In total, we designed 100 pairs of primers (Additional file 2 , Table S2) and tested these primer sets for successful PCR amplification in an initial screening test.
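The SSR search criteria (2-6 bp motifs, at least 4 tandem repeats) map naturally onto a backreference regular expression. The sketch below illustrates the idea only; it is not the MISA implementation, and unlike MISA it does not collapse redundant motif representations (e.g. a long AT run would also be matched under a 4 bp ATAT motif if the run were long enough):

```python
import re

def find_ssrs(seq, min_repeats=4, motif_range=(2, 6)):
    """Locate perfect microsatellites: motifs of 2-6 bp tandemly
    repeated at least 4 times (the criteria stated in the text).
    Returns (start_position, motif, repeat_count) tuples."""
    ssrs = []
    for k in range(motif_range[0], motif_range[1] + 1):
        # capture a k-bp motif, then require >= min_repeats - 1 more copies
        pattern = r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1)
        for m in re.finditer(pattern, seq):
            run_len = m.end() - m.start()
            ssrs.append((m.start(), m.group(1), run_len // k))
    return ssrs

# Toy sequence with an (AT)4 repeat starting at position 2
hits = find_ssrs("GGATATATATGG")
```

`re.finditer` yields non-overlapping, greedy matches, so each repeat run is reported once per motif length.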
Illumina paired-end sequencing and de novo assembly
With the purpose of generating a broad survey of genes associated with tuberous root formation and development, RNA was extracted from fibrous roots, pencil roots and tuberous roots at three developmental stages. Using Illumina paired-end sequencing technology, each sequencing feature can yield 2 × 75 bp independent reads from either end of a DNA fragment. In this study, a total of 59,233,468 raw sequencing reads with a length of 75 bp were generated from a 200 bp insert library. An assembler, SOAPdenovo http://soap.genomics.org.cn , developed specifically for use with next-generation short-read sequences, was employed for de novo assembly. After stringent quality checking and data cleaning, approximately 51 million high-quality reads were obtained with 99.30% Q20 bases (base quality more than 20). Based on the high-quality reads, a total of 208,127 contigs were assembled with an average length of 202 bp. The length of contigs ranged from 75 to 6,891 bp; contigs longer than 100 bp accounted for 72.4% (Table 1 ).
With paired-end reads, it is possible to identify contigs derived from the same transcript as well as the distances between these contigs. We therefore mapped the reads back to contigs, and then with paired-end information joined contigs into scaffolds, using "N" to represent unknown nucleotides between each pair of contigs. As a result, 103,775 scaffolds were obtained with an average length of 377 bp (Table 1 ). Scaffolds with lengths ranging from 100 to 400 bp accounted for 75.28%. Although 80.07% of the scaffolds contained no gap at all (Figure 1 ), roughly 1.28 Mb of gaps (3.27% of total unigene sequences) remained unclosed.
To further shorten the remaining gaps, we gathered the paired-end reads with one end mapped on a unique contig and the other end located in the gap region, and performed local assembly with the unmapped ends to fill in the small gaps within the scaffolds. Such sequences, containing the fewest Ns and not extendable on either end, were defined as unigenes. In this step, more than half of the gaps were filled, and only 0.52 Mb of gaps (1.60% of total unigene sequences) remained unclosed. The gap distribution for unigenes is shown in Figure 1 . Finally, the de novo assembly yielded 56,516 unigenes with an average length of 581 bp and a total length of 32.85 Mb (Table 1 ). The length of assembled unigenes ranged from 200 to 10,679 bp. There were 30,861 unigenes (54.61%) with lengths varying from 200 to 400 bp, 18,050 unigenes (31.94%) in the length range of 401 to 1000 bp, and 7,605 unigenes (13.46%) with lengths of more than 1000 bp (Table 1 ).
To evaluate the quality and coverage of the assembled unigenes, all usable sequencing reads were realigned to the unigenes using SOAPaligner [28], allowing up to 2 base mismatches. The sequencing depth ranged from 0.1- to 4,079-fold, with an average of 48.36-fold. About 92.5% of the unigenes were realigned by more than 10 reads, 56.2% by more than 100 reads, and almost 10% by more than 1,000 reads (Figure 2). To further assess the extent of transcript coverage provided by the unigenes and to evaluate how coverage depth affected their assembly, we plotted the ratio of assembled unigene length to A. thaliana ortholog length against coverage depth (Figure 3A). Most A. thaliana ortholog coding regions could be covered by individual unigenes, although a large number of deeply covered unigenes failed to cover the complete coding regions of their A. thaliana orthologs. It is worth noting that, to a certain extent, increased coverage depth resulted in higher coverage of the coding regions. In many cases, multiple unigenes covered different regions of A. thaliana orthologs. Plotting the summed proportion of A. thaliana orthologs covered by all Illumina unigenes showed that 502 orthologs were covered at more than 80%, and around 5,000 orthologs were covered at 50-80%. Additionally, 27% of orthologs were covered at only 20% or less (Figure 3B). These results indicate that additional sequencing would be needed for more comprehensive transcriptome coverage.
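The threshold summaries above (fractions of unigenes remapped by more than 10, 100 or 1,000 reads) reduce to counting per-unigene read totals. An illustrative sketch with invented counts, not the study's data:

```python
def depth_summary(read_counts, thresholds=(10, 100, 1000)):
    """For each threshold, return the percentage of unigenes that were
    realigned by more than that many reads."""
    total = len(read_counts)
    return {t: round(100.0 * sum(1 for c in read_counts if c > t) / total, 1)
            for t in thresholds}

# Hypothetical per-unigene mapped-read counts
print(depth_summary([5, 50, 500, 5000]))  # {10: 75.0, 100: 50.0, 1000: 25.0}
```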
Functional annotation by searching against public databases
For validation and annotation of the assembled unigenes, sequence similarity searches were conducted against the NCBI non-redundant protein (Nr) database and the Swiss-Prot protein database using the BLASTx algorithm [29,30] with an E-value threshold of 10^-5. The results indicated that of the 56,516 unigenes, 27,435 (48.54%) showed significant similarity to known proteins in the Nr database and matched 18,496 unique protein accessions. As expected, a similar percentage was found for the search against the Swiss-Prot database: 26,287 unigenes (46.21%) had BLAST hits and matched 11,914 unique protein accessions. Compared with [27], in which only 16.2% of sequences had BLAST hits in the Nr database, the higher percentage in this study was partially due to the higher frequency of long sequences among our unigenes (581 bp average length versus 266 bp for whitefly) [27]. As reported by [10], longer contigs are more likely to have BLAST matches in the protein databases. Our results also showed that 79% of unigenes over 500 bp in length had BLAST matches, whereas only 30% of unigenes shorter than 300 bp did (Figure 4). The E-value distribution of the top hits in the Nr database revealed that 41.42% of the mapped sequences showed significant homology (E-value less than 1.0E-50), and nearly 20% of the sequences showed greater than 80% similarity (Figure 5A and 5C). The E-value and similarity distributions of the top hits in the Swiss-Prot database showed a comparable pattern, with 30% and 15% of the sequences possessing significant homology and similarity, respectively (Figure 5B and 5D). Altogether, the BLAST searches identified a total of 20,755 unique protein accessions, indicating that the Illumina paired-end sequencing in this study captured a substantial fraction of sweetpotato genes.
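Filtering BLASTx hits at an E-value cutoff of 10^-5 and counting unique protein accessions, as done above, can be sketched on tabular BLAST output (`-outfmt 6`, in which the E-value is the 11th column). This is an illustrative parser, not the authors' actual script, and the input rows are invented:

```python
def filter_blast_hits(lines, evalue_cutoff=1e-5):
    """Keep the best (lowest E-value) hit per query that passes the
    cutoff; return {query: subject} and the set of unique subjects."""
    best = {}  # query -> (subject, evalue)
    for line in lines:
        f = line.rstrip("\n").split("\t")
        query, subject, evalue = f[0], f[1], float(f[10])
        if evalue > evalue_cutoff:
            continue
        if query not in best or evalue < best[query][1]:
            best[query] = (subject, evalue)
    hits = {q: s for q, (s, _) in best.items()}
    return hits, set(hits.values())

rows = [
    "u1\tP1\t90\t100\t5\t0\t1\t100\t1\t100\t1e-30\t200",
    "u1\tP2\t80\t100\t10\t0\t1\t100\t1\t100\t1e-10\t150",
    "u2\tP3\t70\t100\t20\t0\t1\t100\t1\t100\t1e-3\t50",  # fails cutoff
]
hits, accessions = filter_blast_hits(rows)
print(hits, sorted(accessions))  # {'u1': 'P1'} ['P1']
```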
Of the 20,755 annotated unigenes, 26 were uniquely mapped by more than 8,000 reads each, representing the most abundant transcripts in the sweetpotato root cDNA library (Additional file 1, Table S1). Since sweetpotato tuberous root contains approximately 70% starch by dry weight, it is not surprising that transcripts encoding enzymes involved in starch metabolism were highly expressed, such as plant glycogenin-like starch initiation protein 1, ADP-glucose pyrophosphorylase beta subunit, granule-bound starch synthase I, alpha-1,4 glucan phosphorylase L isozyme and beta-amylase [31-34]. Besides its high starch content, sweetpotato tuberous root also contains plenty of other components, such as alkaloids and vitamin C (ascorbic acid). Accordingly, two further transcripts were highly expressed: one encoding putrescine methyltransferase, which participates in alkaloid biosynthesis [35], and the other encoding GDP-D-mannose 3,5-epimerase (GME), generally considered a key enzyme of the major ascorbate biosynthesis pathway in higher plants, converting GDP-D-mannose to GDP-L-galactose [36]. Notably, we found an abundant transcript encoding a Rac-like GTP-binding protein, which is preferentially expressed at the tip of root hairs and believed to be involved in cell polarity control during the actin-dependent tip growth of root hairs [37,38]. Transcripts encoding superoxide dismutase and a metallothionein-like protein were also highly expressed; these proteins may play roles in defense, metal homeostasis or detoxification [39-41]. In addition, abundant transcripts encoding ribosomal proteins and Like-Sm ribonucleoprotein (LSM)-related proteins were identified. However, it is noteworthy that the two most abundant transcripts, with 49,469 and 30,626 mapped reads, respectively, showed no significant similarity to any known gene.
Functional classification by GO and COG
Gene Ontology (GO) is an international standardized gene functional classification system that offers a dynamically updated controlled vocabulary and strictly defined concepts to comprehensively describe the properties of genes and their products in any organism. GO has three ontologies: molecular function, cellular component and biological process. On the basis of the Nr annotation, the Blast2GO program [42] was used to obtain GO annotations for the Nr-annotated unigenes, and the WEGO software [43] was then used to perform GO functional classification of these unigenes. In total, 5,046 unigenes with BLAST matches to known proteins were assigned to gene ontology classes with 16,595 functional terms. Assignments to biological process made up the majority (6,547, 39.45%), followed by molecular function (5,882, 35.44%) and cellular component (4,166, 25.10%; Figure 6).
The assigned functions of the unigenes covered a broad range of GO categories. Under the biological process category, metabolic process (1,994 unigenes, 30.46%) and cellular process (1,837 unigenes, 28.06%) were prominently represented, indicating that important metabolic activities occur in sweetpotato root. Interestingly, 459 unigenes were assigned to the pigmentation category. It was also noteworthy that a large number of genes (352 unigenes) were involved in responses to different stimuli. Under the molecular function category, binding (2,904 unigenes, 49.37%) and catalytic activity (2,340 unigenes, 39.78%) represented the majority of assignments. Among the 2,904 unigenes assigned to binding, protein binding (679 unigenes) was the most abundant classification, followed by ion binding (563), ATP binding (521), DNA binding (350) and RNA binding (284) (data not shown). For the cellular component category, 2,850 unigenes were assigned to the intracellular component, whereas only a few genes were assigned to extracellular region, macromolecular complex and virion.
The Clusters of Orthologous Groups (COG) database classifies orthologous gene products. Every protein in COG is assumed to have evolved from a common ancestral protein, and the database is built on coding proteins from complete genomes as well as the phylogenetic relationships of bacteria, algae and eukaryotes. All unigenes were aligned to the COG database to predict and classify their possible functions. Of the 27,435 Nr hits, 11,983 sequences were assigned to COG classifications (Figure 7). Among the 25 COG categories, the cluster for General function prediction only (3,432, 17.01%) represented the largest group, followed by Transcription (1,789, 8.87%); Replication, recombination and repair (1,665, 8.25%); Posttranslational modification, protein turnover and chaperones (1,577, 7.82%); Signal transduction mechanisms (1,487, 7.37%); Carbohydrate transport and metabolism (1,200, 5.95%); and Translation, ribosomal structure and biogenesis (1,161, 5.75%), whereas only a few unigenes were assigned to Nuclear structure and Extracellular structure. In addition, 619 unigenes were assigned to Secondary metabolites biosynthesis, transport and catabolism (Figure 7).
Functional classification by KEGG
The Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway database records networks of molecular interactions in cells and their organism-specific variants, and pathway-based analysis helps to further elucidate the biological functions and interactions of genes. Based on a comparison against the KEGG database using BLASTx with an E-value cutoff of 10^-5, 17,598 of the 56,516 unigenes (31.14%) had significant matches and were assigned to 124 KEGG pathways. Among them, 11,056 unigenes with enzyme commission (EC) numbers were assigned to metabolic pathways. As shown in Figure 8A, the KEGG metabolic pathways included carbohydrate metabolism, biosynthesis of secondary metabolites, amino acid metabolism, lipid metabolism and energy metabolism. Within secondary metabolism, 2,493 unigenes were classified into 19 subcategories, and most of them mapped to phenylpropanoid biosynthesis; stilbenoid, diarylheptanoid and gingerol biosynthesis; limonene and pinene degradation; and flavonoid biosynthesis (Figure 8B). Strikingly, most enzymes in the KEGG maps were matched by unigenes. These results not only indicate active metabolic processes in sweetpotato root but also imply that a variety of metabolites are synthesized there, suggesting that the root, besides storing starch, is a valuable food source with various nutrients, such as complex carbohydrates, dietary fiber, beta-carotene, vitamins, flavonoids, isoflavonoids, anthocyanins and alkaloids.
In addition to the genes assigned to metabolic pathways, 3,205 unigenes were assigned to genetic information processing, covering transcription, translation, folding, sorting and degradation, and replication and repair, and about 700 unigenes were classified into membrane transport, signal transduction, immune system and environmental adaptation. These results demonstrate the power of high-throughput sequencing for identifying novel genes in non-model organisms, and the annotations provide a valuable resource for investigating specific processes, functions and pathways involved in root formation and development.
Development and characterization of cDNA-derived SSR markers
For further assessment of the assembly quality and for development of new molecular markers, all 56,516 unigenes generated in this study were mined for potential microsatellites, defined as di- to hexanucleotide SSRs with a minimum of four repetitions for all motifs. Using the MISA Perl script (http://pgrc.ipk-gatersleben.de/misa/), a total of 4,114 potential cSSRs were identified in 3,594 unigenes; 423 of these sequences contained more than one cSSR, and 275 cSSRs were present in compound form (Table 2). To identify the putative functions of genes containing the cSSR loci, the 3,594 unigenes were searched against the UniProt database (http://www.uniprot.org) with an E-value cutoff of 10^-5; 2,266 of them had BLAST hits to known proteins. Based on the cSSR-containing sequences, 100 pairs of cSSR primers were designed using Primer Premier 6.0 (PREMIER Biosoft International, Palo Alto, CA). Detailed information on the designed primers is given in Additional file 2, Table S2. Of the 100 designed cSSRs, 44 were located in coding regions, 21 in 5' untranslated regions (5' UTRs), 13 in 3' UTRs and 22 in genes without any hit to known proteins. Among the 100 primer pairs, 92 amplified successfully in cultivated sweetpotato; the remaining 8 failed to generate PCR products at various annealing temperatures and Mg2+ concentrations and were excluded from further analysis. Of the 92 working primer pairs, 47 amplified PCR products of the expected sizes; 12 yielded PCR products larger than expected, suggesting the presence of an intron within the amplicon; and 33 yielded products smaller than expected, suggesting a deletion within the genomic sequence, a lack of primer specificity, or assembly errors.
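The microsatellite criterion quoted above (perfect di- to hexanucleotide motifs repeated at least four times) can be approximated with a regular-expression scan. The sketch below is a simplification of what MISA does: it ignores compound SSRs, applies only a basic motif-redundancy check, and the example sequences are invented:

```python
import re

def find_ssrs(seq, min_repeats=4):
    """Return (motif, start, n_repeats) for perfect 2-6 bp tandem
    repeats occurring at least min_repeats times."""
    seq = seq.upper()
    hits = []
    for unit in range(2, 7):
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            # discard motifs that are repeats of a shorter unit, e.g. "ATAT"
            if any(motif == motif[:k] * (unit // k)
                   for k in range(1, unit) if unit % k == 0):
                continue
            hits.append((motif, m.start(), len(m.group(0)) // unit))
    return hits

print(find_ssrs("TTAGAGAGAGAGTT"))    # [('AG', 2, 5)]
print(find_ssrs("CCAATAATAATAATGG"))  # [('AAT', 2, 4)]
```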
In addition, the frequency, type and distribution of the 4,114 potential cSSRs were analyzed. The compilation of all cSSRs revealed that, on average, one cSSR was found every 7.99 kb of unigene sequence, and the frequency of cSSRs was 7.78%. Among the 4,114 cSSRs, di- and tri-nucleotide repeat motifs were the most abundant types (1,782, 43.32% and 1,747, 42.46%, respectively), followed by tetra- (330, 8.02%), penta- (142, 3.45%) and hexa-nucleotide (113, 2.75%) repeat motifs. Di- to hexa-nucleotide motifs were further analyzed for cSSR length (number of repeat units; Table 3). Most cSSRs were 12 to 20 bp long, accounting for 83.76% of the total, followed by the 21-30 bp length range (638 cSSRs, 15.51%). Thirty cSSRs were longer than 30 bp.
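The quoted density (one cSSR every 7.99 kb) follows from the totals reported above; because the total length is rounded to 32.85 Mb in the text, recomputing it here gives an approximate figure:

```python
total_unigene_bp = 32.85e6  # total unigene length (Table 1, rounded)
n_cssr = 4114               # potential cSSRs identified
kb_per_cssr = total_unigene_bp / n_cssr / 1000.0
print(round(kb_per_cssr, 1))  # ~8.0, i.e. roughly one cSSR every 7.99 kb
```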
Among the identified cSSRs, 160 motif sequence types were found; di-, tri-, tetra-, penta- and hexa-nucleotide repeats comprised 4, 10, 30, 57 and 59 types, respectively. The AG/CT dinucleotide repeat was the most abundant motif (1,216, 29.6%), followed by AAG/CTT (593, 14.4%), AT/TA (429, 10.4%), AAT/ATT (180, 4.38%), ACT/ATG (159, 3.9%), AGT/ATC (159, 3.9%), CCG/CGG (157, 3.8%), AGG/CCT (154, 3.7%) and AC/GT (135, 3.3%). The remaining 151 motif types together accounted for 22.6% (Figure 9).
Illumina paired end sequencing and assembly
Transcriptome sequencing is one of the most important tools for gene discovery. However, large-scale EST sequencing using the traditional Sanger method is time-consuming and expensive. During the past several years, NGS technology has become a powerful approach for high-throughput gene discovery on a genome-wide scale in non-model organisms. In addition to greatly improved efficiency and speed, NGS platforms eliminate the bacterial cloning step that can bias the composition of cDNA libraries. Owing to its long read length and earlier availability than the other two platforms, Roche GS FLX has been the most widely used platform for de novo transcriptome sequencing in many organisms, such as chestnut [19], pine [10], olive [44], ginseng [20], A. thaliana [45,46], maize [47], Artemisia annua [48], fish [18], insects [49,50], and worms [17]. In contrast, Illumina transcriptome or genome sequencing was mainly limited to organisms with reference genomes available [14,51-53]. Over the last two years, with further confirmation that relatively short reads can be effectively assembled [15], especially given the great advantage of paired-end sequencing [54], Illumina transcriptome and whole-genome de novo sequencing and assembly have been successfully applied to model [12,16,55-58] and non-model organisms [25-27,59]. Consistent with these publications, our results indicate that relatively short reads from Illumina paired-end sequencing can be effectively assembled and used for novel gene discovery and SSR marker development in a non-model organism. Here, approximately 59 million 75-bp paired-end reads were generated on the Illumina Genome Analyzer IIx. This large number of reads, together with the paired-end information, resulted in a relatively high depth of coverage (average 48.36×).
These sequences also produced longer unigenes (mean 581 bp) than those assembled in previous studies, for example butterfly (197 bp) [60], Eucalyptus grandis (247 bp) [11], coral larvae (440 bp) [17], lodgepole pine (500 bp) [10] and whitefly (clusters: 372 bp; singletons: 265 bp) [27].
In this study, we also provide evidence that a paired-end strategy has a strong impact on assembly compared with a single-end approach. During de novo assembly, contigs were assembled from the 75 bp read data excluding mate-pair information, whereas unigenes were assembled from the same read data combined with the mate-pair information. A comparison between contigs and unigenes revealed that both the average and maximum lengths of unigenes were much greater than those of contigs, although contigs outnumbered unigenes (Table 1). This illustrates the critical importance of read pairs for obtaining high-quality assemblies. The high quality of the assembled unigenes was validated by the high proportion of unigenes matching known proteins by BLASTx and by the efficient PCR amplification of cSSR markers developed from our unigenes.
Nonetheless, only 40% of reads were assembled into unigenes, which is less than reported for 454 transcriptome assemblies (e.g., 88% [11], 90% [17], 48% [10]). The large number of unassembled reads could result from several causes, including the relatively short reads generated by the Illumina Genome Analyzer, the assembly options (e.g., the K-mer size), genes expressed at low levels, repeat regions, and the difficulties that alternative splicing poses for de novo transcriptome assembly. These high-quality unassembled reads nevertheless remain a very important sequence resource for sweetpotato. Combining longer reads, for example from the 454 FLX sequencing platform, could further improve the de novo assembly.
When we realigned all the usable sequencing reads to the unigenes, a 48× average coverage depth was obtained. However, of the 56,516 unigenes, 406 (0.7%) had a coverage depth of less than 1. This is in part due to a drawback of the de Bruijn graph approach [61], the algorithm used by SOAPdenovo. In the de Bruijn approach, reads are decomposed into k-mers, which may cause a loss of information. In a few cases, only some of the k-mers from the reads can be used for assembly, leading to assembled sequences that are not supported by the underlying reads. This also implies that paralogs sharing a high level of sequence similarity may have been assembled into one contig, because they cannot be distinguished given the short read length and the lack of a reference genome.
Functional annotation of unigenes
Estimating the number of genes and the level of transcript coverage is an important issue for transcriptome sequencing projects, but is difficult in this study owing to the lack of a reference genome. Using BLAST, we indirectly evaluated the breadth of transcriptome coverage by determining the number of unique genes in our collection. A great number of unigenes matched unique known proteins in public databases, implying that our Illumina paired-end sequencing project yielded a substantial fraction of the unique genes of sweetpotato. As in [10,19], if we assume that the number of genes in sweetpotato is commensurate with that in Arabidopsis (25,000 genes, [62]), our annotated unigenes (20,755 genes with unique protein accession numbers) would represent more than 80% of sweetpotato genes. A large number of unigenes were assigned to a wide range of gene ontology categories and COG classifications (Figures 6, 7), also indicating that our paired-end sequencing data represent a wide diversity of transcripts. Based on the KEGG pathways, the well-represented pathways were carbohydrate metabolism, biosynthesis of secondary metabolites, amino acid metabolism, lipid metabolism and energy metabolism (Figure 8A). Within secondary metabolism, 2,493 unigenes were classified into 19 different subcategories (Figure 8B). These results indicate active metabolic processes during sweetpotato root development. Notably, we also found all of the genes involved in brassinosteroid biosynthesis (pathway not shown), although the expression of the brassinosteroid biosynthetic genes was estimated to be lower than that of genes involved in starch and sugar biosynthesis. Taken together, these results strongly suggest that most of the genes involved in the different metabolic processes were captured by high-throughput Illumina transcriptome sequencing.
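The "more than 80%" estimate above is a simple ratio against the assumed Arabidopsis gene count:

```python
annotated_unigenes = 20755     # unigenes with unique protein accessions
arabidopsis_genes = 25000      # assumed sweetpotato gene count, per [62]
coverage_pct = 100.0 * annotated_unigenes / arabidopsis_genes
print(round(coverage_pct, 1))  # 83.0, i.e. "more than 80%"
```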
Furthermore, the unigenes without BLAST hits likely correspond to 3' or 5' untranslated regions, non-coding RNAs, or short sequences not containing a known protein domain, and most of them might represent potential sweetpotato-specific genes. Taken together, such a large number of sequences and deep coverage can provide sufficient transcriptomic sequence information for discovering novel genes, and also confirm that high-throughput Illumina paired-end sequencing is an efficient, inexpensive and reliable tool for transcriptome characterization and gene discovery in non-model species. Generally speaking, cDNA normalization is often used when gene discovery is the primary purpose of sequencing; however, according to a previous publication, there is no real advantage to normalization when thousands of sequences are generated [18]. In this study, the deep and broad coverage provided by the powerful Illumina paired-end sequencing platform suggested that it was feasible to obviate the need for normalization.
cSSR marker identification and characterization
In this study, a total of 100 pairs of high-quality PCR primers were designed and used for further assessment of the assembly quality. Of these, 92 (92%) successfully yielded amplicons. Among the 92 working primer pairs, 47 amplified PCR products of the expected sizes, and 45 yielded larger or smaller PCR products than expected, suggesting the presence of an intron or a deletion within the amplicon, or a lack of primer specificity; the possibility of assembly errors due to the short read length also cannot be ruled out. This result is similar to previous studies in which amplification success rates of 60-90% have been reported [63-67], and it provides further evidence for the quality of our assembled unigenes and the potential utility of the cSSRs produced in the present study.
As is commonly known, polymorphic SSR markers are important for research involving genetic diversity, relatedness, evolution, linkage mapping, comparative genomics, and gene-based association studies. Next-generation transcriptome sequencing produces an abundance of sequences for molecular marker development. Currently, only several hundred genetic markers exist for sweetpotato. The 4,114 cSSRs identified from our data will provide a wealth of markers for further genetic studies. Based on these identified cSSR-containing sequences, we will design more PCR primers, assess their polymorphism among cultivated and wild Ipomoea species, and provide a more valuable resource of genetic markers for future research in sweetpotato.
In this study, in addition to characterizing the root transcriptome of sweetpotato, we generated valuable resources for new gene discovery and cSSR marker development. Many of the genes identified here will accelerate understanding of the processes regulating sweetpotato root formation and development. To the best of our knowledge, this is the first use of Illumina paired-end sequencing technology for de novo sequencing and assembly of the sweetpotato root transcriptome without a reference genome. Additionally, 4,114 cSSRs were identified and characterized as potential molecular markers in the generated sequences. The enormous size and complexity of the sweetpotato genome make it essential to develop thousands of molecular markers for fine-scale mapping of traits of interest. The thousands of cSSR markers produced in this study will enable genetic linkage map construction and gene-based association studies. These results demonstrate that Illumina paired-end sequencing can serve as a fast and cost-effective approach to gene discovery and molecular marker development for non-model organisms, especially those with large genomes.
The tuberous root of sweetpotato is an important agricultural and biological organ. Public databases currently lack sufficient transcriptomic and genomic data for understanding the molecular mechanisms underlying tuberous root formation and development. Thus, high-throughput transcriptome sequencing is needed to generate a large body of transcript sequences from sweetpotato root for gene discovery and molecular marker development.
Results
In this study, more than 59 million sequencing reads were generated using Illumina paired-end sequencing technology. De novo assembly yielded 56,516 unigenes with an average length of 581 bp. Based on sequence similarity searches against known proteins, a total of 35,051 (62.02%) genes were identified. Of these annotated unigenes, 5,046 and 11,983 were assigned to Gene Ontology and Clusters of Orthologous Groups categories, respectively. Searching against the Kyoto Encyclopedia of Genes and Genomes Pathway database (KEGG) indicated that 17,598 (31.14%) unigenes mapped to 124 KEGG pathways, and 11,056 were assigned to metabolic pathways, which were well represented by carbohydrate metabolism and biosynthesis of secondary metabolites. In addition, 4,114 cDNA SSRs (cSSRs) were identified as potential molecular markers among our unigenes. One hundred PCR primer pairs were designed and used to validate amplification and assess polymorphism in genomic DNA pools; 92 primer pairs amplified successfully in initial screening tests.
Conclusion
This study generated a substantial fraction of sweetpotato transcript sequences, which can be used to discover novel genes associated with tuberous root formation and development and will also make it possible to construct high-density microarrays for further characterization of gene expression profiles during these processes. The thousands of cSSR markers identified here enrich the pool of molecular markers and will facilitate marker-assisted selection in sweetpotato breeding. Overall, these sequences and markers provide valuable resources for the sweetpotato community. These results also suggest that transcriptome analysis based on Illumina paired-end sequencing is a powerful tool for gene discovery and molecular marker development in non-model species, especially those with large and complex genomes.
ZYW conceived, organized and planned the research, contributed to RNA extraction and data analysis, and drafted the manuscript. BPF conceived this study and participated in design, coordination and manuscript preparation. JYC provided the plant material for transcriptome sequencing and participated in tissue collections. XJZ provided the plant materials for SSR analysis. ZXL helped to design PCR primers and SSR results analysis. LFH participated in RNA and DNA extraction. XLC participated in RNA and DNA extraction and SSR experiment. YJL participated in manuscript preparation and revision. All authors read and approved the final manuscript.
Supplementary Material | Acknowledgements
We appreciate great advice and assistance on data analysis and comments from Dr. Xiaoping Chen. We would also like to thank the Beijing Genomics Institute now located in Shenzhen for assistance in raw data processing and related bioinformatics analysis. This work was supported by the earmarked fund for the National Modern Agro-industry Technology Research System (nycytx-16-B-5), the National Natural Science Foundation of China (No. 31000737), the Natural Science Foundation of Guangdong Province, China (No. 10151064001000018) and the President Foundation of Guangdong Academy of Agricultural Sciences, China (No. 201009). | CC BY | no | 2022-01-12 15:21:46 | BMC Genomics. 2010 Dec 24; 11:726 | oa_package/b5/c8/PMC3016421.tar.gz |
PMC3016423 | 21246056 | Introduction
Traditionally, stem cells were thought to be located only in tissues where differentiated cells were most susceptible to loss and the need for replacement was great, such as the skin [1], intestinal epithelia [2] and the blood [3]. Since the adult central nervous system (CNS) was considered to lack a significant amount of neuronal death, and to have no regenerative capacity, the existence of neural stem cells (NSCs) seemed both unlikely and unnecessary. However, in 1992 the existence of NSCs within the adult mammalian CNS with the ability to give rise to new neurons was demonstrated [4]. Like stem cells found in other tissues, NSCs (which line the entire ventricular neuroaxis of the adult mammalian CNS [4], [5]) exhibit the defining in vitro stem cell characteristics [2], [6] of proliferation, extensive self-renewal, generation of a large number of progeny, and multi-lineage differentiation potential, as well as the in vivo characteristic of regenerating tissue after injury [4], [7], [8]. Adult stem cells represent a relatively quiescent reservoir of uncommitted cells. These cells have the ability to divide throughout the lifespan of the organism to give rise to more committed progenitor cells, generating a large number of undifferentiated cells; these progenitors ultimately differentiate into lineage-restricted functional cells. Because of this ability to give rise to new cells, the factors regulating the division of stem and progenitor cells, and the differentiation of their progeny, are of great interest for treating CNS disorders resulting from the loss or inappropriate functioning of cells. The development of tools enabling stem cell-specific study therefore represents a formidable challenge: stem cells are difficult to define visually, as no well-accepted positive marker exists, and as a result these cells are defined functionally.
While employing a functional read-out has made it possible to identify the presence (or absence) of stem cells in a population, it unfortunately prohibits the direct isolation or discrimination of stem cells from non-stem cells, thereby precluding any meaningful quantitative data pertaining to their frequency and/or expansion rate.
A growing body of evidence supports the hypothesis that malignant tumors are maintained by a population of tumor-initiating cells (TICs), which exhibit biological properties similar to those of normal somatic stem cells. TICs are postulated to reside in acute myeloid leukaemia [9], as well as in breast [10], [11], prostate [12], lung and mesenchymal tumors [13]. Importantly, neural TICs have also been isolated and found to exhibit very similar functional properties to neural stem cells [14], [15], [16], [17]. The so-called cancer stem cell model suggests that it is these stem cell characteristics that make TICs resistant to treatment and drive tumor recurrence. As a result, these cells represent an essential target for effective anticancer therapy. Therefore, the development of methods for investigating their biology and kinetic behavior is relevant to the design of innovative treatments targeting this specific cell population.
In the CNS, one of the methods used to isolate and expand somatic and cancer stem cells is the neurosphere assay (NSA) [4], [14], [15], [18]. Of interest, and attesting to its robustness, the free-floating sphere culture system is also used to study, amongst others, breast cancer stem cells [11], [19]. However, while the NSA is an appropriate method to identify stem cell activity, we contend that the enumeration of spheres is not appropriate to measure stem cell frequency or expansion rate, as doing so results in an overestimation [20], [21].
The current study presents the development, validation and application of a method enabling specific quantification of the somatic and cancer stem cell symmetric division rate using the free-floating sphere assay. | Materials and Methods
Ethics Statement
Animal studies were approved by the University of Florida Institutional Animal Care and Use Committee (IACUC, Permit Number: 2008-01502) and the University of Queensland Animal Ethics Committee (UQAEC, Permit Number: QIMR-P1159).
The use of human samples in this study was approved by the University of Florida Institutional Review Board (IRB Project # 127-2009). Written informed consent was provided by all participants.
Mouse neural stem cell culture
Neural stem cells harvested from ganglionic eminences of E14 wild type mice or from the periventricular region of adult mice (wild type or growth hormone receptor knock out) were cultured with EGF and bFGF or with EGF alone for 6 to 7 passages in the neurosphere assay as described in [21] .
The neural colony forming cell assay method
After 4 passages in the NSA, the different groups of cells [fetal E14 NSCs, aged adult (20 months) NSC and adult NSCs cultured with different combination of mitogens, EGF and bFGF or EGF alone] were cultured in the Neural Colony Forming Cell Assay as described in [21] .
Primary cell culturing and propagation of human glioblastoma
Cells were isolated from human glioblastoma (hGBMs) as described [16] and cultured in the neurosphere assay supplemented with 20 ng/ml hEGF, 10 ng/ml basic fibroblast growth factor, and 2 μg/ml heparin. To measure the effect of TGFβ2 on the symmetric division rate of the LTP/STP cancer cells, the treatment was performed by adding 100 ng/ml of TGFβ2 to the growth medium at every passage for 4 passages.
Xenotransplantation of brain tumor cells
The tumorigenicity of hGBM cells was measured in vivo after intrastriatal or subcutaneous injection (s.c.) using non-obese diabetic/severe combined immunodeficient mice (NOD/SCID). Details of the procedure are described in the supplemental data ( Materials and Methods S1 ).
Propagation and Culture of Mammospheres from Established Breast Cancer Cell Lines
First, the breast cancer cell lines (A = KPL-1, B = MCF-7 and C = BT-474) were cultured in adherent/monolayer conditions [48]. For culturing cells as spheres, we used the mammosphere assay, which is based upon the method for culturing neural stem cells in serum-free medium containing recombinant human epidermal growth factor (rhEGF) and basic fibroblast growth factor (rhbFGF) [11], [19]. The mammosphere culture protocol is detailed in the supplemental data ( Materials and Methods S1 ).
Breast cancer cell transplantation
6–8 week old NOD.Cg- Rag1 tm1Mom Il2rg tm1Wjl /SzJ mice (The Jackson Laboratory, Bar Harbor, Maine, USA) were used for the subcutaneous (s.c.) injections of breast cancer cells. Transplantation paradigm is described in supplemental data ( Materials and Methods S1 ). | Results
Thought experiment
Unlike the culturing and passaging of most lines, where the majority of cells survive disaggregation and go on to proliferate until the culture becomes confluent, during passaging in the NSA the majority (>90%) of the cells die or do not proliferate further. This is supported by the fact that during one passage the majority of the dividing cells give rise to at least 256 progeny by undergoing a minimum of eight cell divisions (data not shown). This would result in a 256-fold expansion at each passage if every single plated cell were growth factor-responsive and divided 8 times. However, our experiments described below show a cellular fold expansion between 1 and 20 with the different cell types we used. These data, together with published clonality data [20], [21], [22], demonstrate that less than 10% of the cells plated in the NSA contribute to the overall population expansion. Therefore only the surviving fraction of growth factor-responsive sphere-forming cells divide, form spheres, and renew the founding population. Cell death notwithstanding, during each passage there is a geometric increase in the number of cells that are generated. Typically, if 100,000 cells are plated, greater than 90,000 of the cells die within the first 24–48 hours, leaving 10,000 cells to proliferate, form spheres and ultimately generate about 500,000 cells. This 5-fold expansion tends to be fairly consistent when passaging cells over time and never shows a time-dependent escalation in fold expansion (i.e. 5-fold, 7-fold, 8-fold etc.) [23]. Moreover, each individual neurosphere line is essentially unique with regard to the fold-increase in the number of cells generated from passage to passage (some lines may show a 5-fold expansion while others a 6- or 4-fold expansion), but consistent within the particular line.
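The less-than-10% figure follows from simple arithmetic: if every dividing cell yields at least 256 progeny, the fraction of plated cells that divide can be at most F/256. A minimal sketch of this bound (the function name is ours; the fold expansions are the 1- to 20-fold range reported above):

```python
def max_dividing_fraction(fold_expansion, min_progeny=256):
    """Upper bound on the fraction of plated cells that divide:
    dividing_fraction * min_progeny <= fold_expansion."""
    return fold_expansion / min_progeny

# With the 1- to 20-fold expansions observed in the NSA:
low = max_dividing_fraction(1)    # ~0.4% of plated cells divide
high = max_dividing_fraction(20)  # ~7.8%, i.e. still below 10%
```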
The ability to serially passage NSCs indefinitely [23], together with the fact that most of the cells die or stop proliferating at each passage, indicates that the population must be maintained by long-term proliferating cells (by definition, cells with stem cell features). We contend that the frequency of long-term proliferating (LTP) cells (aka NSCs) will be reflected in the rate at which the population expands (i.e. the fold increase from passage to passage), and that this is reflected in the slope of the growth curve. In order to understand how self-renewing symmetric divisions of LTP cells affect the growth curve, consider the following thought experiment ( Figure 1 ).
( Figure 1a ) In the case of an LTP cell that generates a sphere of 1000 cells without undergoing any symmetric cell divisions, the resulting sphere will contain only a single LTP cell. Upon a subsequent passage, all the cells will again die or not participate in the culture expansion except for the single LTP cell, which survives, divides, and forms a new sphere. As we continue to passage this sphere (or population of cells) in this manner we would observe a 1-fold expansion, which is represented by a flat growth curve as seen in Figure 1a(i) .
Now consider that a single LTP cell undergoes one self-renewing symmetric cell division as it generates a sphere of 1000 cells. In this case the sphere would have 2 LTP cells. Upon a subsequent passage, 998 of the cells would die and each of the two LTP cells would give rise to a sphere of 1000 cells. This doubling of the total number of spheres (and thus cells) would continue, giving a growth curve that looks like Figure 1a(ii) .
Finally, if we consider that the LTP cell undergoes 3 symmetric divisions giving rise to 4 LTP cells and 996 non-LTP cells, this would produce a 4-fold expansion at each passage and a growth curve that would be expressed as in Figure 1a(iii) .
( Figure 1b ) If we now keep the number of LTP cells in a sphere constant (say, 4) and with each iteration change the total cells generated per sphere from 1000 (i) to 2000 (ii) to 4000 (iii), we find that the elevation of the growth curve is affected but not its slope (as seen by an unchanged fold expansion). This indicates that the total number of cells generated influences the height of the growth curve but not its slope.
Therefore, the cell fold expansion, represented by the slope of the growth curve, reflects the rate at which the LTP cells expand. Given that the LTP cell expansion rate is a direct indication of the numbers of self-renewing symmetric cell divisions, it follows that the fold expansion or slope can be used to predict LTP (or stem) cell self-renewing symmetric division.
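The thought experiment is easy to reproduce numerically. The sketch below (all names are ours; sphere sizes and LTP counts are the illustrative values from Figure 1) shows that the fold expansion per passage equals the number of LTP cells per sphere, regardless of sphere size:

```python
def simulate_passages(n_ltp_start, ltp_per_sphere, sphere_size, n_passages):
    """Serial passaging in which only LTP cells survive: each LTP cell
    founds a sphere of `sphere_size` cells containing `ltp_per_sphere`
    LTP cells; all other cells die at the next passage."""
    ltp = n_ltp_start
    totals = []
    for _ in range(n_passages):
        totals.append(ltp * sphere_size)  # total cells at the end of this passage
        ltp *= ltp_per_sphere             # only LTP cells seed the next passage
    return totals

flat = simulate_passages(100, 1, 1000, 5)      # Fig. 1a(i): flat growth curve
quad = simulate_passages(100, 4, 1000, 5)      # Fig. 1a(iii): 4-fold expansion
quad_big = simulate_passages(100, 4, 2000, 5)  # Fig. 1b: bigger spheres, same slope

folds = [b / a for a, b in zip(quad, quad[1:])]  # slope: 4-fold each passage
```

Doubling the sphere size shifts the curve up by a constant factor but leaves the passage-to-passage fold expansion, and hence the slope, unchanged.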
Mathematical modeling
Assumptions
Here we propose a mathematical model that allows one to quantify self-renewing symmetric stem cell division in tissue culture.
We break the class of all possible cells into two types: (i) dividing cells and (ii) non-dividing (ND) cells. Examples of ND cells are cells that have fully differentiated or cells that have died. We further break the class of dividing cells into two subtypes, long-term proliferating (LTP) cells and short-term proliferating (STP) cells. The lifetime of the LTP cells is defined to be infinite, which in the context of an experimental setting means that the lifetime is longer than that of the experiment. The products of an LTP cell division are assumed to be either two LTP cells (symmetric self-renewing division) or an LTP cell and a STP cell (asymmetric cell division). We assume the LTP cell differentiative symmetric division rate (LTP→STP+STP) to be null in the original growing conditions of the assay. We further assume that during the experiment the survival and self-renewing properties of the LTP cells are stable (a steady state, as discussed below).
The lifetime of the STP cells is defined to be finite, which in the context of an experimental setting, means that the lifetime is significantly shorter than that of the experiment. The products of a STP cell division are assumed to be any binary combination of STP cells and ND cells. It should be emphasized that this specifically excludes STP cell division from producing a LTP cell. We further assume that any dividing cell, when placed in the correct environment, will proceed through its cell cycle and eventually divide, in a manner that is independent of the presence of other cells. The products of the cell division will adhere and the cell cycle continues for all dividing cells. After time, a cluster of cells, hereafter referred to as a sphere, will have developed. Importantly, these assumptions imply that a sphere originating from a STP cell will contain STP cells and ND cells, while a sphere originating from a LTP cell will contain LTP cells, STP cells and ND cells.
Passaging neurospheres necessarily involves the dissociation of the spheres into their individual constituents (i.e. cells). A fraction of these constituent cells are then randomly sampled and seeded into a new flask containing an environment permissive for cell division ( Figure 1 ). Here we assume that LTP cells are stem-like cells, while STP cells are more restricted progenitor cells. Hence, the long-term expansion of the population depends directly on the expansion of LTP and not STP cells. We previously demonstrated that 95% of the spheres in the NSA cannot be passaged more than 4 to 6 times, suggesting that the majority of the spheres are derived from STP cells and that LTP cells exhibit a higher proliferative potential [20], [21]. Therefore, to accurately define a population of cells as containing stem-like (i.e. LTP) cells, the overall time-course of the experiment needs to span more than 4 to 6 passages.
Direct modeling of the Neurosphere Assay
After a few initial passages (corresponding to a “recovery period” if starting with freshly dissected primary tissue), the dissociating, plating, and growing of spheres in bulk culture will become consistent as the process reaches a stable state that reflects a complex equilibrium between cell survival, death, proliferation and differentiation. That is, an initial number of cells are seeded (e.g., 2.5×10^5), which generate spheres and produce a total cell count for the flask (e.g., 1×10^6), a portion of which are harvested (e.g., 25% are taken) and used to seed another flask for passage. The measurable quantities are the cell count at the start of the passage, T_i, and the cell count at the end of the passage, T_f. The fold expansion, F, is calculated by

F = T_f / T_i.
After the adaptation period of the cells to culture conditions, when the experiment has entered into the stable state, F is constant within the limits of experimental error. If the state is stable then it must also be true that the initial numbers of LTP, STP and ND cells are the same for every passage. Similarly, the final numbers must be the same for every passage. Furthermore these cell types must be undergoing the same overall fold expansion, i.e.,

F = L_f / L_i = S_f / S_i = N_f / N_i,

where L, S and N denote the LTP, STP and ND cell counts and the subscripts i and f denote the start and end of a passage.
Since LTP cells can only be created from LTP cells, and every LTP cell forms a sphere, then it must also be true, that on average, there are F LTP cells in every LTP derived sphere. That is, the number of LTP cells created in an LTP originated sphere (defined to be l ) must be given by, l = F .
The same analysis cannot be applied to STP cells because both LTP and STP cells produce STP cells. Similarly, the same analysis cannot be applied to ND cells. As such, data analysis becomes a matter of measuring the cell counts at the start and end of each passage, dividing the two to provide a fold expansion, then averaging the fold expansions of passages in the stable state. By definition, this average is equivalent to the number of LTP cells (i.e. NSCs) in an LTP-derived sphere. Therefore, this methodology should accurately reflect the frequency of stem-like cells in stem-like cell derived spheres.
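In code, the bookkeeping of the direct model is a few lines; the cell counts below are invented for illustration (only F = T_f/T_i and its average over stable passages come from the text):

```python
# (T_i, T_f): cell counts at the start and end of each stable-state
# passage (hypothetical numbers for illustration).
passages = [
    (250_000, 1_000_000),
    (250_000, 1_050_000),
    (250_000, 950_000),
]

folds = [tf / ti for ti, tf in passages]  # F = T_f / T_i for each passage
mean_fold = sum(folds) / len(folds)       # average F over the stable state

# By the steady-state argument above, mean_fold estimates l, the average
# number of LTP cells (i.e. NSCs) in an LTP-derived sphere.
```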
LTP self-renewing symmetric division rate
The smallest time unit in the above model is a single passage. We now derive a model for the intra-passage LTP cell numbers. As noted previously, LTP cell division has two possible outcomes: (i) a self-renewing symmetric division (LTP→LTP+LTP) or (ii) an asymmetric division (LTP→LTP+STP). We denote the probability of the first outcome by p_ll and the second outcome by p_ls. Since there are no other possible outcomes, the sum of these two probabilities must be unity. Let the cell cycle time of the LTP cells be denoted by c_l. The probability of a symmetric cell division per unit time is thus p_ll / c_l. This can also be interpreted as the rate of LTP cell symmetric division and we will denote it by K_ll. Since only LTP cells produce LTP cells, the rate of growth of LTP cell numbers is proportional to the current total number of LTP cells. It should also be noted that an asymmetric division does not change the total LTP cell numbers. The rate of growth of LTP cell numbers, L, can thus be expressed as

dL/dt = K_ll L.
This expression can be solved to express the absolute number of LTP cells at a time t,

L(t) = L(0) e^(K_ll t).
It is now possible to equate this model with the above direct model. For a passage starting at t = 0 and finishing at t = t_f,

F = L(t_f) / L(0) = e^(K_ll t_f).
Rearranging this expression yields a method of calculating the rate of LTP cell symmetric division:

K_ll = ln(F) / t_f.
That is, the rate of LTP cell self-renewing symmetric division can be calculated by taking the natural logarithm of the fold expansion and dividing by the passage time. Therefore, changes in the fold expansion (slope of the growth curve) reflect modifications in the LTP (i.e. stem) cell frequency by variations in their self-renewing symmetric cell division rate.
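The final expression reduces to one line of code. A sketch (the 4-fold expansion and 7-day passage time are assumed, illustrative values; the function name is ours):

```python
import math

def symmetric_division_rate(fold_expansion, passage_time):
    """K_ll = ln(F) / t_f: rate of LTP self-renewing symmetric division,
    in units of 1 / passage_time."""
    return math.log(fold_expansion) / passage_time

k_ll = symmetric_division_rate(4.0, 7.0)  # 4-fold expansion over a 7-day passage

# Sanity check: integrating the growth law back recovers the fold
# expansion, F = exp(K_ll * t_f).
recovered_F = math.exp(k_ll * 7.0)
```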
Validation of the model using the Neural Colony Forming Cell Assay (N-CFCA)
Similar to NSCs (LTP cells), progenitor (STP) cells have the ability to proliferate and generate progeny that can differentiate into functional cells. However, unlike stem cells, progenitor cells have a more limited proliferation potential over time. The recently developed neural colony-forming cell assay (N-CFCA) exploits these differences in proliferative ability, enabling one to discriminate NSCs from progenitors based on the size of the colonies they produce when transferred to culture [21]. Consistent with the assumption that progenitor cells exhibit limited proliferative capacity compared to stem cells, and that the size (diameter) of a colony can be used to distinguish its founder cell type, we demonstrated that large colonies (>2 mm) have a greater proliferative potential and exhibit all of the key tissue culture stem cell characteristics (extensive self-renewal, generation of a large number of progeny, and multi-lineage differentiation potential) compared to smaller colonies (which do not exhibit these stem cell criteria). Therefore, the N-CFCA provides a method to enumerate neural stem cell frequency [21].
To support the mathematical modeling of the NSA with biological experiments, we compared the culturing and expansion of embryonic and adult murine NSCs under conditions that result in different growth rates (i.e. different fold expansions, implying different stem cell self-renewing symmetric division rates). While the NSA model does not allow us to accurately quantify absolute numbers of stem cells (due solely to the fact that we do not know how many stem cells we started with), it does allow a comparison between groups of the stem cell expansion rate, which reflects the frequency of symmetric stem cell division. In this particular case we compared the cell fold expansion of the following conditions ( Table 1 ): (Group 1) fetal E14 NSC cultures vs. aged adult (20 months) NSC cultures, (Group 2) fetal E14 NSC cultures with the mitogen EGF alone vs. EGF + bFGF, (Group 3) adult NSC cultures with the mitogen EGF alone vs. EGF + bFGF. From these measures the effective stem cell symmetric division rate (K_ll) was derived ( Table 1 ). Fetal stem cells cultured with both growth factors exhibited a 7.16-fold greater expansion rate compared to 24-month-old stem cells cultured with EGF alone (group 1) and a 1.33-fold difference compared to fetal cultures exposed to EGF alone (group 2). Group 3 displayed a 1.57-fold difference between the two conditions. These differences within the groups in the rate at which stem cells expanded were correlated with the absolute numbers of stem cells measured using the N-CFCA ( Fig. 2 ), and the comparison results are shown in Table 2 . To test the assumption that both methodologies give similar results, we compared the difference of their outputs to zero using Student's t-test. The p value of the statistical test was 0.28, demonstrating that both assays predict a similar change in the ratio of neural stem cell number or fold expansion (aka self-renewing symmetric division rate), validating this mathematical interpretation of the NSA.
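The statistical comparison in Table 2 amounts to a one-sample t-test on the per-group differences between the two assays' outputs. A sketch using Python's standard library (the differences below are invented, not the paper's data; the reported p of 0.28 comes from the actual measurements):

```python
import math
from statistics import mean, stdev

def one_sample_t(differences):
    """t statistic for testing whether the mean difference is zero."""
    n = len(differences)
    return mean(differences) / (stdev(differences) / math.sqrt(n))

# Hypothetical per-group differences between NSA-derived and
# N-CFCA-derived ratios:
diffs = [0.4, -0.2, 0.1]
t_stat = one_sample_t(diffs)
# A small |t| (judged against the t distribution with n-1 degrees of
# freedom) indicates the two assays give statistically similar ratios.
```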
Application of the model
We then used the model as a methodology for studying the somatic stem cell self-renewing symmetric division mechanism. Using the N-CFCA, we previously showed that growth hormone receptor knock out (GHR −/−) mice exhibit significantly fewer periventricular region-derived stem cells than wild type animals (23±3 vs 40±3) [24]. Similarly, when cultured in the NSA, GHR −/− NSCs expanded at a significantly lower rate than wild type neural stem cells (2.04±0.2 vs 3.63±0.4, Fig. 3a ) [25], correlating with a decreased self-renewing symmetric division rate ( Fig. 3b ). Once again, both assays predicted the same ratios between the two populations (1.74 and 1.78 for the N-CFCA and the NSA respectively) [24]. These results suggest that growth hormone signaling controls the self-renewal of somatic neural stem cells by regulating their symmetric division rate. Pluchino and colleagues described the inhibitory effect of chronic inflammation on stem cell self-renewal [26]. This study reported that exposure of adult subventricular zone-derived stem cells to inflammation-inducing Th1 cytokines caused a significant decrease in the number of NSCs measured with the N-CFCA, and that this phenomenon was accompanied by a diminution of the slope of the growth curve [26]. These data further validate our mathematical interpretation of the neurosphere assay and support a model in which Th1 cytokines down-regulate the somatic adult stem cell expansion rate via a symmetric cell division-related mechanism.
The math model can also be applied to cancer biology. Several types of cancer, including those of the breast and CNS, contain cells exhibiting stem-like cell features such as long-term repopulating ability [10], [14], [16], [27]. The cancer stem cell hypothesis states that tumors are hierarchically organized and maintained by a distinct subpopulation of long-term repopulating cancer stem-like cells that confers therapeutic refractoriness. Therefore, understanding the dynamics of these cells is of great interest for understanding cancer biology and for designing innovative and specific treatments. We used our model to compare cancer stem-like cell (aka LTP cancer cell) self-renewing symmetric division frequency and tumor progression between several adult human glioblastoma multiforme (hGBM) cell lines grown in neurosphere assay conditions [16]. The three different hGBM samples analyzed (A, B and C) and generated in our laboratory exhibited distinct expansion profiles (reflected by different slopes of the growth curve) and LTP cancer cell symmetric division rates ( Fig. 3c–e ). Orthotopic transplantation of 200,000 cells from hGBM samples A, B and C into the striatum of immuno-compromised mice led to tumor formation followed by death of the animals ( Fig. 3f ). Importantly, disease progression correlated directly (and host survival inversely) with the self-renewing symmetric division rate of the LTP cancer cells. This was expressed by comparing the mean survival of host animals implanted with one of the three hGBM tumor stem cell lines to the LTP cancer cell symmetric division rate (K_ll) ( Fig. 3g ).
In addition to brain tumors, we applied our model to breast cancer. We cultured and propagated in vitro three different breast tumor cell lines (KPL-1, MCF-7 and BT-474, named lines A, B and C respectively) by applying culture conditions similar to those used for the NSA. We used serum-free medium containing human epidermal growth factor (rhEGF) and basic fibroblast growth factor (rhbFGF), allowing breast tumor cells to grow and form spherical non-adherent mammospheres [11], [19], measured the fold expansion of the breast cancer lines over six to eight passages, and calculated their respective K_ll ( Fig. 4a–b ). Our mathematical model of the mammosphere assay predicts a rate of LTP cancer cell symmetric division of 0.125 ± 0.011 for line A, 0.078 ± 0.005 for line B and 0.026 ± 0.003 for line C. 10^6 cells of each breast tumor cell line were transplanted under the skin of immuno-compromised mice and formed tumors at different rates ( Fig. 4c ). Similar to the brain tumor experiments, cell lines exhibiting a higher LTP cancer cell symmetric division rate led to faster tumor progression combined with poorer survival, as demonstrated in the graph comparing K_ll to the mean survival of the transplanted animals ( Fig. 4c–d ).
Altogether these results suggest that the LTP cancer cell self-renewing symmetric division rate measured in vitro using Sphere Assays can be used to predict tumor progression based on the notion that increased self-renewing symmetric divisions of the LTP cancer cells will produce more tumor initiating cells resulting in a more aggressive tumor. These data also validate the potential application of our model in cancer and support the significance of the malignant LTP cell compartment in driving the expansion of the tumor and influencing the disease outcome.
The model can also be used to identify agents that specifically target the LTP cancer cell population vs. those that target the STP cancer cell population. Previously we had shown that exposure of cultured hGBM to BMP4 reduces the K_ll, suggesting that it targets the LTP cancer cell population (i.e. cancer stem cells); this was confirmed by a significant reduction in the ability of BMP4-treated hGBM cells to initiate tumor formation in immunocompromised hosts [17]. Here we have extended this paradigm, using the model to identify agents that may or may not target the STP cancer cells (i.e. progenitor cells). Transforming growth factor beta 2 (TGFβ2), known primarily for its growth-inhibitory properties, is both a suppressor and promoter of tumorigenesis, producing a malignant phenotype in tumor-derived cells in culture and exhibiting high expression levels in advanced tumors [28]. Addition of TGFβ2 to hGBM tumor cell cultures resulted in an increase in the number of sphere-initiating cells ( Fig. 5a ). However, TGFβ2 had no significant effect on the fold expansion or on the rate of LTP cancer cell self-renewing symmetric division based on modeling of serial passage data ( Fig. 5b–c ). Based on the significant increase in the number of sphere-forming cells, and in the absence of applying the math model, one would have interpreted the data to imply that TGFβ2 increased the proliferation of tumor-initiating cells (i.e. LTP cells), while in fact it appears not to affect this population. Our data confirm the proliferative effect of TGFβ2 described in the literature [28], [29] but suggest that it drives the proliferation of STP cancer cells (or progenitor cells) as opposed to LTP cancer cells (aka cancer stem cells). Although TGFβ2 specifically increased the pool of STP cancer cells in culture, as expected it did not enhance tumor progression when the cells were transplanted either subcutaneously ( Fig. 5d ) or intracranially ( Fig. 5e ), as TGFβ2 did not increase the number of LTP tumor-initiating cancer stem cells. This is in line with the notion that the LTP cancer cell sub-population is associated with the ability of a population of cells to initiate and drive tumor progression, and further supports a direct correlation between the rate at which LTP cancer cells expand in vitro and the aggressiveness of the tumor in vivo .
Together these results indicate the meaningful potential of the use of our model in studies aimed at identifying genes or molecules that regulate somatic and cancer stem cell activity. | Discussion
The discovery of stem cells in the mature mammalian nervous system opened the door to the design and development of therapeutics to replace cells lost to injury or disease [4], [30]. Central to this approach is a detailed understanding of the mechanisms that drive NSC division and the differentiation of their progeny in the context of different disease states. However, while assays exist to evaluate the division of precursor cells both in vivo (i.e. BrdU, Ki67, label retention) and in vitro (BrdU, cell counts, sphere-forming assays), disentangling the contribution of stem cells from that of progenitor cells is difficult due to the lack of specific stem cell markers [31]. While functional assays have been widely used for assessing stem cell activity both within [7] and outside [32] the CNS, they are also fraught with difficulties related to an ambiguous read-out [20]. We have previously addressed this dilemma by developing an assay (Neural Colony Forming Cell Assay) that is a hybrid of the tried-and-true classic hematopoietic precursor cell assays [32] and the NSA [4], and utilizes the extensive proliferative ability of neural stem cells relative to progenitor cells [21]. Here we report on the development, validation and application of a mathematical model that takes advantage of a peculiarity of the NSA (the majority of the cells die at passage, except for the growth factor-responsive sphere-forming cells) and allows us to estimate the relative variation in the frequency of LTP cells in this culture system. While we make the assumption that LTP cells are equivalent to NSCs, this is not without warrant, given that this feature is a defining characteristic distinguishing stem cells from other types of proliferating precursors [21], [33], [34], [35]. We also contend that the model provides a sensitive, meaningful read-out of self-renewing symmetric NSC division, as expansion of this population is dependent on this mode of division.
Practically, the model can be used as a metric to compare the effects of genetic and epigenetic influences on self-renewing symmetric somatic NSC and cancer stem cell division, thereby allowing the identification of agents that can be used to increase and decrease stem cell numbers, respectively.
The neurosphere assay has been shown to be useful for somatic and cancer stem cell studies (for review, see [18]). However, the specific molecular and biochemical dissection of stem cell self-renewal and differentiation mechanisms has been hampered by the heterogeneous nature of the neurosphere assay: the free-floating culture system contains a mixed population of cells at various stages of commitment, with this heterogeneity increasing with sphere size. To overcome the cellular complexity observed in this assay, adherent culture systems have been employed to isolate and expand somatic and cancer stem cells [36], [37] with the intent of providing a less heterogeneous population [38]. Notwithstanding, our mathematical modeling overcomes the barriers created by the heterogeneous cellular composition of the NSA by using a simple algorithm that enables rigorous and specific assessment of the behavior of the stem cell compartment based on bulk population analysis.
To explore and understand the complexity of a biological system, and to overcome the inherent limitations of biological experiments, mathematical modeling and computer simulation approaches are widely used in systems biology [39]. Different models have been proposed to provide tools to test key mechanisms at the cellular level that link the somatic/tumor stem cell compartment to tissue function and dysfunction. In the field of cancer, math modeling is not a new concept and has been used since the 1950s [40]. For example, Boman and colleagues described a kinetic model of a colonic crypt built on the hypothesis that tumor formation in the colon is governed by overpopulation of the crypt stem cell pool via an increase in symmetric division [35], [41]. They proposed three distinct compartments: the stem cell fraction, with a theoretically unlimited self-renewal capacity; the progenitor pool, which displays limited renewal capacity; and finally the differentiated compartment, whose cells have no proliferative ability and a finite life span. The paradigm considers that stem cells are at the top of the cellular hierarchy and that the total population size remains stable over time in the normal colonic crypt, consistent with steady state conditions. The central tenet of this model is that the long-term renewable population (aka stem cells) assures tissue homeostasis and that only changes in the dynamics of the stem cell pool, via perturbation of their symmetric division rate, can account for the biologic characteristics of colorectal cancer development and progression [35]. Similarly, supported by numerical simulation, Dingli et al. modeled the response to treatment of the physiological or neoplastic hematopoietic system with respect to its hierarchy and dynamics, involving somatic and cancer stem cells. They tested several therapeutic scenarios, demonstrating the critical need to specifically target the stem cell load for successful therapy [40].
As opposed to these examples of theoretical models testing the influence of hypothetical scenarios involving kinetic changes in the stem cell pool using numerical simulation, our model is a mathematical interpretation of experimental observations, enabling effective measurement of a specific cellular compartment (i.e. LTP cells) known to govern tissue homeostasis or oncogenesis.
The ability to predict the LTP cell symmetric division rate using the sphere assay math model was experimentally tested using the N-CFCA. This assay was chosen for its ability to quantify specific changes in the neural stem cell compartment, based on the notion that progenitor cells show a restricted proliferative potential compared to stem cells, which have a virtually unlimited proliferative ability; hence, these two populations can be distinguished based on the size of the colony they generate. Using this methodology we were able to correlate the actual stem cell frequency (N-CFCA) ( fig. 2 ) with the stem cell expansion rate (related to the symmetric division rate in our math model) under different experimental circumstances ( Table 1 ). We found corresponding, nearly identical (non-significantly different) changes in the size of the stem cell pool in both assays ( Table 2 ). Although we validated our mathematical modeling using an in vitro assay, its direct correlation to physiological in vivo measurements ( fig. 3 and 4 ) supports the view that the model is rooted in biologically reasonable assumptions. Due to its simplicity and robustness, the mathematical interpretation presented in this manuscript provides meaningful and compelling information about stem cell frequency and the rate at which these cells expand via symmetric division. As stem cells fundamentally contribute to tissue homeostasis in adults, and because stem-like cells within some cancers are hypothesized to be a critical component of malignancy, validation of our model, which enables direct assessment of somatic/cancer stem cell dynamics, provides a metric to better understand the biology of stem cell division. For instance, in validating the model we assessed the effects of mitogens (EGF and bFGF) and age on symmetric NSC divisions. Our model predicted a 57% increase in expansion rate when adult mouse neural stem cells were cultured with both EGF and bFGF compared to EGF alone ( Table 1 , group 3).
These changes were confirmed by a 50% increase in neural stem cell frequency measured with the N-CFCA ( fig. 2 , group 3). Similarly, the math model predicted an 87% decrease in neural stem cell expansion rate with aging ( Table 1 , group 1), which was confirmed by the 89% decrease in neural stem cell frequency in the 24-month cultures compared to the embryonic cultures ( fig. 2 , group 1). Additionally, data mining of the current scientific literature corroborated the relationship between the output of the N-CFCA and the stem cell expansion rate obtained using the NSA, further supporting the validity of our math model [26] . In addition to providing a renewable source of cells for neurodegenerative disease studies, the NSA presents a suitable system for pharmaceutical and neurotoxicological screening aimed at designing new therapeutic approaches that target the self-renewal capacity essential for controlling the stem cell pool. Finally, measuring symmetric cancer stem cell (i.e. LTP cancer cell) division provides a tool to study tumor biology as well as to test the effects of drugs on this particular cellular compartment, which may be responsible for tumor resistance and for driving long-term tumor growth ( fig. 3 and 4 ) [15] , [33] , [42] , [43] , [44] , [45] , [46] .
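The model relates overall fold expansion measured in the sphere assay to the rate at which stem cells expand via symmetric division. A minimal sketch of this idea — not the authors' actual algorithm — assumes simple exponential growth, N(t) = N0·e^(r·t), extracts a per-day expansion rate from a measured fold expansion, and compares conditions as a percent change; all input values below are hypothetical:

```python
import math

def expansion_rate(fold_expansion, days):
    """Per-day exponential expansion rate r, assuming N(t) = N0 * exp(r * t).

    A simplified stand-in for the sphere-assay model, which relates overall
    fold expansion over time to the symmetric division rate.
    """
    return math.log(fold_expansion) / days

def percent_change(rate_a, rate_b):
    """Percent change of condition B's rate relative to condition A's."""
    return 100.0 * (rate_b - rate_a) / rate_a

# Hypothetical serial-passage data (not from the paper):
r_egf = expansion_rate(8.0, 7.0)       # EGF alone: 8-fold expansion over 7 days
r_egf_fgf = expansion_rate(32.0, 7.0)  # EGF + bFGF: 32-fold over 7 days
delta = percent_change(r_egf, r_egf_fgf)
print(f"expansion rate change: {delta:.0f}%")
```

A larger fold expansion over the same culture period maps directly to a higher inferred expansion rate, which is the comparison made between the experimental groups above.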
Validation of the math model supports use of this methodology as a tool for the discovery of exogenous signaling agents and endogenous genetic elements that specifically or generally regulate the stem cell compartment. Understanding stem cell regulatory signaling from a functional point of view provides a basis for rational drug discovery aimed at treating diseases that result in cell loss, or disorders that can benefit from the generation of new cells. In addition, because aging is known to adversely affect the stem cell pool outside of the CNS, identifying pharmaceuticals and nutraceuticals that can preserve the functional aspects of the NSC compartment may have an important impact on reducing the burden of age-related CNS degeneration. Our model may also find application in cancer biology. It is becoming generally accepted that many solid tissue cancers contain cells exhibiting stem cell characteristics, and that this population and its unique properties may contribute to tumor progression and treatment resistance. Enumerating this population and quantifying the effect(s) of treatment is difficult, as stem cells (both normal and malignant) are defined by function. While single markers and marker combinations have been useful, their validity in accurately defining the entire cancer stem cell pool has been challenged [47] . Methodologies that can functionally measure symmetric cancer stem cell (LTP) division under a variety of experimental conditions will aid the targeting of this component, which is important for cancer stem cell expansion. In this study we positively correlate the LTP cell expansion rate with GBM and breast tumor progression in vivo , and demonstrate how the Sphere Assay mathematical model can be used as a readout for agents that specifically reduce or eliminate the tumor stem cell pool.
Hence, our method can be used to detect and grade the occurrence of LTP cancer cells and to study their contribution to the maintenance and therapy resistance of the tumor. Our modeling thus not only provides insight into the cell kinetic mechanisms leading to tumor growth, but also has significance for the elaboration of new approaches to cancer treatment. | Conceived and designed the experiments: LPD GE KB PB RLR BAR. Performed the experiments: LPD BJM. Analyzed the data: LPD GE BJM JAL AV RLR BAR. Wrote the paper: LPD GE RLR BAR.
Representing a renewable source for cell replacement, neural stem cells have received substantial attention in recent years. The neurosphere assay is a method to detect the presence of neural stem cells; however, owing to the lack of specific and definitive markers to identify them, their quantification and the rate at which they expand remain poorly defined. Here we propose a mathematical interpretation of the neurosphere assay allowing actual measurement of the neural stem cell symmetric division frequency. The model demonstrates a direct correlation between the overall cell fold expansion over time measured in the sphere assay and the rate at which stem cells expand via symmetric division. The model offers a methodology to evaluate specifically the effect of diseases and treatments on neural stem cell activity and function. Beyond providing new insights into the kinetic features of neural stem cells, our modeling is also relevant to cancer biology, as cancer stem-like cells have been suggested to maintain tumor growth in the way that somatic stem cells maintain tissue homeostasis. Indeed, the resistance of tumor stem cells to therapy makes these cells a necessary target for effective treatment. The neurosphere assay mathematical model presented here allows assessment of the rate at which malignant stem-like cells expand via symmetric division, and evaluation of the effects of therapeutics on the self-renewal and proliferative activity of this clinically relevant population that drives tumor growth and recurrence. | Supporting Information | CC BY | no | 2022-01-13 08:14:23 | PLoS One. 2011 Jan 5; 6(1):e15844 | oa_package/85/eb/PMC3016423.tar.gz |
||
PMC3016440 | 21246057 | Introduction
Alzheimer's disease (AD) is a neurodegenerative disease characterized by progressive cognitive impairment and, at the cellular level, by synaptic damage, intracellular neurofibrillary tangles and beta-amyloid precursor protein (βAPP) processing dysfunction that leads to an overabundance of the 42-amino-acid amyloid-beta (Aβ42) peptide. Aβ42 promotes neuroinflammation, synaptic toxicity, and apoptosis, and it transitions extracellularly from an oligomer to an aggregate that, in turn, becomes a major component of senile plaques [1] – [6] . Aβ42 peptides are generated from βAPP via tandem cleavage by beta- and gamma- (β- and γ-) secretases; alternatively, the alpha-secretase a disintegrin and metalloproteinase 10 (ADAM10) cleaves βAPP to yield a soluble form of βAPP, sAPPα, via the non-amyloidogenic or neurotrophic pathway.
Docosahexaenoic acid (DHA; C22:6), an omega-3 essential fatty acid family member, is enriched in central nervous system (CNS), synaptic and other cellular membranes as an acyl chain of membrane phospholipids. DHA is involved in the building and function of the CNS, as well as synaptogenesis, cognition, neuroprotection, synaptic function and vision [7] – [10] . Current clinical trials favor a role for DHA in slowing cognitive decline in elderly individuals without dementia but not for the prevention or treatment of dementia, including AD [11] , [12] . Deficiencies in DHA biosynthesis by the liver correlate with cognitive impairment in AD patients [13] , supporting the significance of the liver supply of DHA to the CNS in neurodegenerative diseases [13] , [14] . In AD transgenic mice dietary DHA restores cerebral blood volume, reduces Aβ deposition, and ameliorates Aβ pathology [15] , [16] .
The recent identification of the DHA-derived stereoselective mediator neuroprotectin D1 (NPD1; 10R,17S-dihydroxy-docosa-4Z,7Z,11E,15E,19Z hexaenoic acid) provides a specific mechanism to understand DHA-mediated modulation of neuroinflammation and neuroprotection. NPD1 elicits neuroprotective activity in brain ischemia-reperfusion and in oxidative-stressed retinal cells [17] – [19] . DNA microarray profiling suggests a down-regulation of pro-inflammatory genes as well as of some pro-apoptotic genes of the Bcl-2 gene family [9] . NPD1 further influences βAPP processing and decreases Aβ42 release [9] , and its precursor DHA elicits an Aβ42-lowering effect both in vitro and in vivo [8] , [20] , [21] . In addition, free radical-mediated DHA peroxidation products accumulate during ischemia and neurodegeneration. These oxidation products in turn may form protein adducts and other cytotoxic molecules that promote further free radical injury [22] – [24] .
The ligand-activated transcription factor peroxisome proliferator-activated receptor γ (PPARγ) regulates lipoprotein metabolism, adipogenesis and insulin sensitivity, and it has been implicated in AD [25] – [30] . PPARγ activation underlies some of DHA's anti-inflammatory actions [31] – [34] . Moreover, PPARγ is a potential NPD1 target since it has a fatty acid binding pocket for polyunsaturated fatty acids [31] and their derivatives, including DHA [35] .
In the present study, we assessed DHA and NPD1 abundance in control and aged 3xTg-AD mouse hippocampus and used aging human neuronal-glial (HNG) primary cells to characterize NPD1 bioactivity in neuroinflammatory events and apoptosis, to test the mechanism of NPD1-mediated regulation of Aβ42 secretion, and to assess the significance of PPARγ in the homeostatic bioactivity of NPD1. Here we provide evidence that, besides protecting against Aβ42-induced neurotoxicity via anti-inflammatory and anti-apoptotic bioactivity, NPD1 down-regulates the amyloidogenic processing of βAPP, thus reducing Aβ42 production. Moreover, NPD1's anti-amyloidogenic action, which selectively targets both the α- and β-secretase-mediated processing of βAPP, is exerted through PPARγ receptor activation. | Materials and Methods
Studies and procedures were performed according to National Institutes of Health and Canadian Council on Animal Care guidelines, and animal protocols were approved by the Institutional Animal Care and Use Committee at the Louisiana State University Health Sciences Center, New Orleans (IACUC #2705, IBC# 08126 and 082303), and by the Laval University Animal Ethics Committee (approval ID = 07–113 and 07–061).
Reagents and Antibodies
The following reagents and antibodies were obtained commercially and used without further purification: Aβ42 peptides (American Peptide, Sunnyvale, CA); antibodies for COX-2, APP-NT, APP-CT and β-actin (Sigma, St. Louis, MO); antibodies for B94, ADAM9, ADAM10, BACE1 and PS1 (Santa Cruz Biotechnology, Santa Cruz, CA); antibodies for sAPPα and sAPPβ sw (American Research Products, Belmont, MA); IRDye™ 700 or 800 infrared dye-labeled secondary antibodies (Rockland Immunochemicals, Gilbertsville, PA); FITC-conjugated secondary antibody (BD Biosciences, San Jose, CA); Cy3-conjugated secondary antibody (GE Healthcare, Piscataway, NJ). Additional BACE1 antibodies Ab2077 (Abcam, Cambridge, MA); sc-73729 and sc-33711 (Santa Cruz); PA1-757 (Affinity Bioreagents, Rockford, IL) and 61-3E7 (MAB5308, Millipore, Billerica, MA) were used to analyze BACE1 abundance; the identity of BACE1 was also confirmed via C-terminal micro-sequencing (data not shown). DHA was obtained from Cayman Chemical (Ann Arbor, MI), and stereochemically pure NPD1 was prepared via total organic synthesis and quantified according to reported chemical and physical procedures, using molecular biology grade ethanol as vehicle (concentration 0.1 μg/μl, ∼0.3 mM) [36] , [37] .
3xTg-AD Animals
3xTg-AD mice, harboring the PS1 (M146V), APP (Swe) and tau (P301L) human transgenes, were reared according to established protocols and received control diets (Teklad 2018 rodent diet; Harlan Teklad, Indianapolis, IN) [38] , [39] . Non-transgenic mice used here were littermates from the original PS1-knockin mice and are on the same background as 3xTg-AD mice (C57BL6/129SvJ) [39] . Animals (N = 5 to 6 per group) were sacrificed at 4 and 12–13 months, and the frontal lobe, rostro-temporal lobe and hippocampus were isolated and analyzed for DHA and NPD1 using LC-PDA-ESI-MS-MS as previously reported [9] , [17] , [18] , [37] ( Figure 1 ).
Human neuronal-glial (HNG) Cell Culture
We used stressed primary human neuronal-glial (HNG) co-cultures as a cellular model to address NPD1-mediated signaling and mechanistic questions relevant to AD [9] , [27] , [40] , [41] . In brief, human neural progenitor cells (Lonza, Allendale, NJ) were initially plated as free-floating aggregates (neurospheres) in a 75 cm 2 uncoated plastic flask in neural progenitor basal medium (NPBM) supplemented with human recombinant basic fibroblast growth factor (hbFGF), human recombinant epidermal growth factor (hEGF) and human leukemia inhibitory factor (hLIF). Differentiation into HNG cells was induced by plating neurospheres onto 6-well plates pre-coated with poly-L-ornithine and culturing in neurobasal medium supplemented with 1×B27 (Invitrogen, Carlsbad, CA) and 1% FBS. Cells were subsequently cultured for 1 week, until 80% confluence, at 37°C in a humidified 5% CO 2 atmosphere incubator; HNG cells adhere strongly to the plate bottom within 24 hours of plating [9] , [27] .
Plasmid Constructs and Transient Transfection of HNG Cells
Plasmid containing APP695 cDNA bearing the Swedish mutation APP sw (Swedish double mutation APP695 sw , K595N, M596L) was a generous gift from Dr. T Golde of the Mayo Clinic (Jacksonville, FL). cDNA clones of full length hBACE1 genes were from Open Biosystem (Huntsville, AL). HNG cells were plated in 6-well plates at 80% confluence and transiently transfected using Fugene HD transfection reagent (Roche Applied Science, Indianapolis, IN) with 2 μg per well of hAPP695 sw plasmid DNA alone or together with pEGFP (green fluorescent protein; BD Biosciences-Clontech), hPPARγ, or hBACE1 at a DNA (μg):reagent (μl) ratio of 1∶3. After 24 h, cells were typically incubated with 0, 50, 100 or 500 nM NPD1 or vehicle for 48 h before assay.
Small Interfering RNA-mediated Gene Silencing
HNG cells were transfected with predesigned siRNA (Santa Cruz Biotechnology) to knock down human ADAM9 or ADAM10 mRNA. HNG cells over-expressing βAPP sw were transfected with a total of 60 pmol of ADAM9, ADAM10 or control siRNA using Lipofectamine 2000 transfection reagent (Invitrogen) and cultured for 24 h. The medium was replaced with a fresh one containing 500 nM of NPD1 and cells were cultured for another 48 h before assay.
Aβ42 Oligomer Preparation
Aβ42 peptides were initially solubilized in hexafluoroisopropanol (HFIP) (Sigma), aliquoted, and stored at −20°C as an HFIP film [42] . After evaporating HFIP, aliquoted peptide was re-suspended with DMSO to 5 mM and diluted with phenol red free F12 media (Invitrogen) to a concentration of 100 μM. Peptide for the oligomer preparation was incubated at 4°C for 24 h prior to use [42] . The oligomeric status of Aβ was verified by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE; Figure S1 ).
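The dilution step described above (5 mM DMSO stock brought to 100 μM in F12 medium) follows the standard C1·V1 = C2·V2 relationship; a small sketch assuming a hypothetical 1 ml final volume (not stated in the paper):

```python
def dilution_volume(stock_conc, final_conc, final_volume):
    """Volume of stock needed for a simple dilution, from C1 * V1 = C2 * V2.
    Concentrations must share units, as must volumes."""
    return final_conc * final_volume / stock_conc

# 5 mM (= 5000 uM) DMSO stock diluted to 100 uM, for a 1000 uL final volume:
v_stock = dilution_volume(5000.0, 100.0, 1000.0)  # uL of peptide stock
v_medium = 1000.0 - v_stock                       # uL of F12 medium to add
```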
Immunocytochemistry and Imaging Analysis
HNG cells cultured on 8-well chamber slides (BD Biosciences, San Jose, CA) were fixed with 4% paraformaldehyde, then permeabilized and blocked with 0.125% Triton X-100 (Sigma) and 2% normal goat serum (GE Healthcare) in PBS at room temperature (RT) for 1 h. Cells were incubated overnight at 4°C with antibodies for β-tubulin III, GFAP, COX-2, B94 or APP-NT. Cells were washed 3 times with PBS and incubated for 3 h at RT with secondary antibodies conjugated with Cy3 or FITC fluorescein. After washing and drying, slides were applied with mounting medium (Vector Laboratories, Burlingame, CA) and observed under Zeiss Axioplan Inverted Deconvolution Fluorescent Microscope (Carl Zeiss, Oberkochen, Germany). Positively-stained cells were quantified using the manual counter function of the NIH ImageJ software.
Mediator lipidomic analysis
Lipids were extracted by homogenization of cells or tissues in chloroform/methanol and stored under nitrogen at −80°C [17] , [18] , [36] . For quantification, lipid extracts were supplemented with deuterium-labeled internal standards, purified by solid-phase extraction, and loaded onto a Biobasic-AX column (Thermo-Hypersil-Keystone; 100 mm ×2.1 mm; 5-μm particle size) run with a 45-min gradient protocol, starting with solvent solution A (40∶60∶0.01 methanol:water:acetic acid, pH 4.5; 300 μl/min); the gradient typically reached 100% solvent B (99.99∶0.01 methanol:acetic acid) in 30 min, and was then run isocratically for 5 min. A TSQ Quantum (Thermo-Finnigan) triple quadrupole mass spectrometer and electrospray ionization was used with a spray voltage of 3 kV and N 2 sheath gas (35 cm 3 /min, 350°C). Parent ions were detected in full-scan mode on the Q1 quadrupole. Quantitative analysis was performed by selective reaction monitoring. The Q2 collision gas was argon at 1.5 mTorr, and daughter ions were detected on Q3. Selected parent/daughter ion pairs for NPD1 and unesterified DHA were typically 359/153 m/z and 327/283 m/z , respectively. Calibration curves for NPD1 and DHA (Cayman Chemical) were acquired; NPD1 was generated via biogenic synthesis using soybean lipoxygenase and DHA, purified by HPLC, and characterized by LC-PDA-ESI-MS-MS according to reported biophysical criteria [9] , [17] .
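Quantification against deuterated internal standards and calibration curves typically reduces to mapping the analyte/internal-standard peak-area ratio through a fitted calibration line. A hedged sketch with illustrative calibration parameters (slope, intercept and amounts below are hypothetical, not the paper's values):

```python
def quantify(analyte_area, istd_area, istd_amount, slope, intercept=0.0):
    """Convert an SRM peak-area ratio (analyte / deuterated internal standard)
    into an analyte amount via a linear calibration curve of the form:
        area_ratio = slope * (amount / istd_amount) + intercept
    """
    area_ratio = analyte_area / istd_area
    amount_ratio = (area_ratio - intercept) / slope
    return amount_ratio * istd_amount

# Illustrative values: 10 ng of internal standard spiked into the extract,
# calibration slope of 1.0 (equal detector response), zero intercept.
npd1_ng = quantify(analyte_area=5000.0, istd_area=10000.0,
                   istd_amount=10.0, slope=1.0)
```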
MTT cell viability assay, Hoechst staining, TUNEL assay and caspase-3 activity assay
Cell viability was measured by 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) reduction assay (Sigma). HNG cells were incubated with 5 μM of Aβ42 in the absence or presence of 50 nM of NPD1 for 48 h. MTT was added to a final concentration of 0.5 mg/ml and incubated for 2 h. Medium was then removed and equal volumes of isopropanol were added to dissolve the resulting formazan crystals. Absorbance was spectrophotometrically measured with a SpectraMax Microplate Reader (Molecular Devices, Sunnyvale, CA) at 570 nm. HNG cells were further incubated with 2 μM Hoechst 33258 (Invitrogen) for 45 min at 37°C before imaging. Cells were then viewed by using a Nikon DIAPHOT 200 microscope under UV fluorescence. Images were recorded by a Hamamatsu Color Chilled 3CCD camera and PHOTOSHOP 7.0 software. Positively stained cells were counted manually using ImageJ software. The apoptotic nuclei containing free 3′-OH termini were detected using DeadEnd Fluorometric TUNEL Kit (Promega, Madison, WI). Samples were analyzed under a Zeiss Deconvolution Microscope. Caspase-3 activity from cell lysates was detected using Caspase 3 Colorimetric Assay Kit (Sigma). The absorbance was measured at 405 nm using a SpectraMax Microplate Reader.
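MTT readouts at 570 nm are conventionally converted to percent viability relative to the untreated control after blank subtraction; a minimal sketch with hypothetical OD values:

```python
def percent_viability(sample_od570, blank_od570, control_od570):
    """MTT viability as a percentage of the untreated control,
    after subtracting the blank (medium-only) absorbance."""
    return 100.0 * (sample_od570 - blank_od570) / (control_od570 - blank_od570)

# Illustrative OD570 readings (not from the paper):
viab = percent_viability(sample_od570=0.45, blank_od570=0.05,
                         control_od570=0.85)  # ~50% viability
```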
Total RNA Extraction and RT-PCR
HNG cells were lysed and total RNA was extracted with TRIzol (Invitrogen). RNA quality and quantity were analyzed by using a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA). 28S/18S ratio for each RNA sample was typically greater than 1.8. For reverse transcription, a Superscript III First-Strand SuperMix (Invitrogen) was used. 1 μg of total RNA was used as a template to react with 10 μl of 2×RT Reaction Mix and 2 μl RT Enzyme Mix. Final total volume was 20 μl. Samples were incubated at 25°C for 10 min and then 50°C for 30 min. Reactions were stopped by heating to 85°C for 5 min, and RT product was amplified with Phusion High Fidelity DNA Polymerase in a GeneAmp PCR System 9700 (Applied Biosystems, Foster City, CA). The primers used in these experiments (Integrated DNA Technologies, Coralville, IA) and their sequences are as follows: 5′-TTTGATGATGGCGTACTTGG-3′ , 5′-AGTTTGTCCCCAGATGTTGC-3′ for ADAM10; 5′-TACAATGCTGACTATGGCTAC-3′ , 5′-CTGATGCGTGAAGTGCTG-3′ for COX-2; 5′-CAAAGTAGACCTGCCCAGAC-3′ , 5′-GACCTCTCTCTAATCAGCCC-3′ for TNF-α; 5′-TGAGGACCTGAAG CCACTGTTCAA-3′ , 5′-TGCGCTTGACCTCACTGTTGGATA-3′ for B94 and 5′-AGATGTGGATCA GCAAGCAGGAGT-3′ , 5′-GCAATCAAAGTCCTCGGCCACATT-3′ for β-actin (internal control; Lukiw and Pogue 2007; Lukiw et al., 2005). The PCR consisted of initial incubation at 98°C for 45 s, denaturation at 98°C for 10 s, annealing at 56°C for 30 s and extension at 72°C for 20 s, for 35 cycles, and final extension at 72°C for 7 min. PCR products were further analyzed on 1.5% agarose gels; relative band intensity was quantified using Quality One software (Invitrogen).
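Relative band intensities of this kind are typically normalized to the β-actin internal control and expressed as fold change versus the control sample; a sketch with hypothetical intensity values:

```python
def relative_expression(target_band, actin_band, target_control, actin_control):
    """Band intensity of a target gene normalized to beta-actin, expressed
    relative to the normalized control sample (fold change)."""
    return (target_band / actin_band) / (target_control / actin_control)

# Illustrative densitometry values (arbitrary units, not from the paper):
cox2_fold = relative_expression(target_band=3000, actin_band=1000,
                                target_control=1000, actin_control=1000)
```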
SDS-PAGE and Western Blotting
Conditioned media were collected from cultured HNG cells after various treatments, and protease inhibitor cocktail (Sigma) was added to a 1% final concentration [9] , [27] . Cells were then washed twice with ice-cold DPBS, lysed and harvested in RIPA buffer (Sigma) supplemented with 2% protease inhibitor cocktail, then centrifuged at 10,000× g for 15 min at 4°C. Supernatants were collected and quantified using Bio-Rad's (Hercules, CA) DC Protein Assay kit. 30 μg of cell lysate or 20 μl of conditioned media were electrophoresed on 4–15% Tris-HCl gradient gels at 100 V for 80 min, or on 10–20% Tris-Tricine gels for the detection of CTFs at 50 mA for 120 min. Proteins were transferred to an Immobilon FL PVDF membrane (Millipore, Billerica, MA) at 100 V for 60 min. Membranes were incubated with primary antibody overnight at 4°C, followed by incubation with IRDye 800 or Alexa 680-conjugated secondary antibodies for 5 h at RT. After repeated washing with Tris-buffered saline, the membrane was visualized using the Odyssey Infrared Imaging System (LI-COR, Lincoln, NE).
Sandwich ELISA Analysis of TNFα and Aβ42
Secreted TNF-α, Aβ42 and total Aβ were detected using a Human TNF ELISA Kit (BD Biosciences, San Jose, CA), a human amyloid β 42 ELISA kit (Sigma) and a human amyloid β (1-x) assay kit (American Research Products), respectively. After reactions, the plates were immediately measured at 450 nm by a SpectraMax Microplate Reader.
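ELISA concentrations are read off a standard curve fitted to the OD450 values. Assuming a four-parameter logistic (4PL) fit, a common choice for sandwich ELISAs (the fitting model is not stated in the paper, and the parameters below are illustrative), the inversion from OD to concentration is:

```python
def conc_from_od(od, a, b, c, d):
    """Invert a four-parameter logistic (4PL) ELISA standard curve,
        OD = d + (a - d) / (1 + (x / c) ** b),
    where a = response at zero concentration, d = response at saturation,
    c = inflection concentration, b = slope factor. Returns concentration x."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

# Illustrative fitted parameters: a=0.05, b=1.0, c=100.0 (pg/ml), d=2.5.
# An OD450 of 1.275 then sits at the curve midpoint, i.e. ~100 pg/ml.
conc = conc_from_od(1.275, a=0.05, b=1.0, c=100.0, d=2.5)
```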
Human Preadipocyte Differentiation Assay
Human preadipocyte maintenance and differentiation procedures were performed according to the manufacturer's instructions with modifications (Zen-Bio, Research Triangle Park, NC). Briefly, upon initiation of the differentiation assay, preadipocytes were incubated in adipocyte medium supplemented with IBMX (0.5 mM) and NPD1, DHA or vehicle. A concentration range of 0.1–5 μM of each lipid was used. After a 3-day incubation, the cell medium was replaced with adipocyte medium without IBMX. Eight days after vehicle or lipid treatment, the medium was removed and the cells were fixed with formalin (7% formaldehyde in PBS). Cells were then stained with Oil Red O (Sigma, St. Louis, MO) and pictures were taken with a Nikon Eclipse TS100 inverted microscope (Nikon USA, Melville, NY). The Oil Red O-stained total lipid was then eluted with 100% isopropanol and quantified by measuring the OD value at 500 nm with a SpectraMax Microplate Reader.
Cell-based PPARγ Transactivation Assay
The two plasmids used for the transactivation assay (PPARγ-GAL4 and MH100-tk-luc) were kindly provided by Dr. Ronald Evans of Salk Institute (La Jolla, CA) [32] . Luciferase assay was performed using Promega's Luciferase Assay System. Light units from firefly luciferase and β-galactosidase activities were measured in a Luminoskan Ascent microplate luminometer (Thermo Fisher Scientific, Waltham, MA). Luciferase values were expressed as relative light units and normalized to the level of β-galactosidase activity. Changes in PPARγ activity were expressed as “fold induction” relative to the vehicle control values ( Figure S2 ).
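The normalization described above — firefly luciferase relative light units divided by β-galactosidase activity, expressed as fold induction over vehicle — can be sketched for a hypothetical dose series (all readings below are illustrative, not from the paper):

```python
# Hypothetical dual-reporter readings across an NPD1 dose series:
doses_uM = [0.1, 1.0, 5.0, 10.0]
luc = [12000.0, 21000.0, 52000.0, 90000.0]  # firefly relative light units
bgal = [2.4, 2.1, 2.6, 3.0]                  # beta-galactosidase activity
veh_luc, veh_bgal = 10000.0, 2.5             # vehicle-control readings

# Normalize each well to its beta-gal activity, then express as fold
# induction relative to the normalized vehicle control.
norm_vehicle = veh_luc / veh_bgal
fold_induction = [(l / b) / norm_vehicle for l, b in zip(luc, bgal)]
```

A monotonic increase in `fold_induction` across `doses_uM` is what a dose-dependent transactivation response looks like in this normalized form.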
Statistical Analysis
All experiments were repeated at least three times using independent culture preparations. Data are presented as mean ± S.E. Quantitative data were statistically analyzed by one-way analysis of variance (ANOVA) followed by pair-wise comparisons using Fisher's least significant difference test. A p <0.05 was considered significant. | Results
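The one-way ANOVA used in the statistical analysis can be sketched in pure Python; the F statistic compares between-group to within-group mean squares (in practice a statistics package such as `scipy.stats.f_oneway`, followed by Fisher's LSD pairwise t-tests, would be used — this is only a minimal illustration of the computation):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of groups (lists of values):
    F = (SS_between / (k - 1)) / (SS_within / (n - k))."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate measurements for two treatment groups:
f_stat = one_way_anova_F([[1.0, 2.0], [3.0, 4.0]])
```

The F statistic would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p value.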
DHA and NPD1 deficits in 3xTg-AD mouse hippocampus
DHA and NPD1 levels were assayed in the hippocampus of 3xTg-AD mice, harboring the PS1 (M146V), APP (Swe) and tau (P301L) human transgenes, which model several human AD features [38] , [39] . DHA and NPD1 levels were analyzed using LC-PDA-ESI-MS-MS-based lipidomic analysis as previously described ( Figure 1 ) [9] , [18] . Both DHA and NPD1 showed age-related changes in 4-month-old versus 12–13-month-old 3xTg-AD animals. DHA concentration in the hippocampus was reduced 2-fold in 4-month-old 3xTg-AD animals versus 4-month-old controls, and 3-fold between 4- and 12–13-month-old control animals ( Figure 1A ). Hippocampal NPD1 showed dramatic reductions in both aging control and aging 3xTg-AD animals: a 12-fold reduction between 4- and 12–13-month-old controls, and a 3-fold reduction between 4- and 12–13-month-old 3xTg-AD mice ( Figure 1B ).
NPD1 protects HNG cells from Aβ42-induced apoptosis
Phase contrast and immunofluorescence of differentiated HNG cells expressing the neuronal marker β-tubulin III and the astrocyte marker glial fibrillary acidic protein (GFAP) revealed neuronal-glial co-cultures containing about 50% neurons under these conditions ( Figure 2 ). NPD1 was shown to counteract Aβ42 oligomer-induced apoptosis in HNG cells using MTT, Hoechst 33258 staining, TUNEL and assay of caspase-3 activity ( Figure 3A–G ). These assays showed that over 48 h Aβ42 oligomer triggers about 50% cell death with concomitant nuclear compaction and striking apoptotic changes ( Figure 3 ). Aβ42 peptides also enhanced caspase-3 activity at least 6-fold, an effect that was reduced in the presence of NPD1. Co-incubation of 50 nM NPD1 with Aβ42 oligomer resulted in enhanced cell viability and attenuation of Aβ42 peptide-mediated apoptosis and cytotoxicity ( Figure 3 ).
NPD1 down-regulates Aβ42 oligomer-induced pro-inflammatory gene expression
Our previous DNA microarray-based analysis suggested anti-inflammatory bioactivity of NPD1 in HNG cells, as shown by its attenuation of the Aβ42 peptide-induced elevation of the pro-inflammatory genes COX-2, TNF-α and B94 [9] . Here we extended these studies by exploring NPD1 actions at both the mRNA and protein levels using RT-PCR, Western assay, ELISA assay and immunocytochemistry ( Figures 3 and 4 ). The relative basal abundance of TNF-α mRNA was low, B94 mRNA increased during incubation at 6 h, and constitutive expression of COX-2 mRNA occurred during incubation. Aβ42 increased the mRNA abundance of COX-2, TNF-α and B94 at 3, 6 and 12 h ( Figure 4A,B ). COX-2 mRNA stood out because it displayed immediate early-inducible gene behavior upon Aβ42 peptide exposure [9] . Cellular COX-2 protein expression, TNF-α secreted into the incubation media, and immunocytochemistry of COX-2 and B94 all showed Aβ42-stimulated enhancement; NPD1 (50 nM) markedly reduced the Aβ42 oligomer-stimulated mRNA increases as well as COX-2, TNF-α and B94 protein expression ( Figure 4C ). NPD1 therefore elicits potent down-regulation of the expression of a specific set of pro-inflammatory and pro-apoptotic genes known to be up-regulated in AD hippocampus and in stressed HNG cell models of AD [9] , [41] , [43] – [46] . Messenger RNA, Western, ELISA and immunocytochemistry data are presented in Figures 4 and 5 .
NPD1 represses amyloidogenic processing of βAPP with concomitant stimulation of non-amyloidogenic processing
Aβ42 peptides are secreted from human brain cells as they age or in response to physiological stress [4] , [9] , [27] , [47] , [48] . The processing of the βAPP holoprotein and secretion of βAPP fragments are controlled in large part by alpha-, beta- and gamma- (α-, β- and γ-) secretases [3] , [4] . To assess the effects of NPD1 on secretase-mediated Aβ42 peptide generation, we used HNG cells transiently transfected with βAPP sw and assayed the abundance of the α-secretase precursor-ADAM10 (pro-ADAM10) and mature-ADAM10 (m-ADAM10), the β-amyloid cleavage enzyme (BACE1), and the γ-secretase component presenilin-1 (PS1) ( Figure 6 ). Western blot analysis revealed that the steady-state level of BACE1 was reduced by 500 nM NPD1. Meanwhile, the active and mature form of ADAM10 (m-ADAM10), the putative α-secretase, was dose-dependently increased in response to NPD1. We did not find changes in pro-ADAM10, the inactive precursor, or in the mRNA abundance of ADAM10 (data not shown; Figure 6 ). The changes in these two secretases are in agreement with alterations in Aβ42 peptide abundance and in other cleavage products of βAPP ( Figure 7 ). Interestingly, both m-ADAM10 and BACE1 levels were elevated in βAPP-over-expressing cells ( Figure 6 ). Presenilin-1 (PS1), the main catalytic component of γ-secretase, remained unchanged after the different βAPP sw or NPD1 treatments ( Figure 6 ). The same pattern was also seen in their C-terminal counterparts, CTFβ and CTFα; importantly, no change was observed in the steady-state level of the neural cell-abundant βAPP (holo-βAPP; see Figure 7A ). NPD1-mediated up-regulation of m-ADAM10 and down-regulation of BACE1 was apparent, with a maximal effect at 500 nM, the highest concentration used in these experiments ( Figures 6 and 7 ).
Quantification of βAPP Fragments
Because Aβ42 peptide generation is regulated by differential βAPP processing, if the NPD1-mediated Aβ42 peptide reduction is due to altered βAPP processing, then altered levels of βAPP cleavage products should confirm these catabolic outcomes. To test this idea, we used HNG cells over-expressing βAPP sw and measured levels of the N-terminal (sAPPα and sAPPβ sw ) and C-terminal (CTFα and CTFβ) fragments of βAPP, as well as holo-βAPP protein, upon exposure to increasing concentrations of NPD1. We show that NPD1 lowers sAPPβ sw secretion and elevates sAPPα in a dose-dependent manner ( Figure 7 ). This observation is paralleled by a decrease in CTFβ and an increase in CTFα in the same cellular fractions, and by a significant 3.4-fold increase in m-ADAM10 ( Figure 7 ).
Silencing of ADAM9 and ADAM10 and overexpression of BACE1
Collectively, these data suggest the participation and modulation of BACE1 and ADAM10 activities in NPD1-mediated regulation of βAPP processing. Like ADAM10, ADAM9 is also endowed with α-secretase activity [6] , [49] , and changes in BACE1 abundance may also contribute to Aβ42 peptide reduction. We therefore investigated whether ADAM9, ADAM10 and BACE1 are essential to NPD1's regulation of βAPP processing by knocking down ADAM9 and ADAM10 with targeted siRNAs. We also over-expressed BACE1 by transfecting HNG cells with a plasmid bearing the full-length human BACE1 cDNA. We then measured total βAPP and other βAPP cleavage fragments in the presence of NPD1, with or without ADAM9 siRNA or ADAM10 siRNA knockdown or BACE1 over-expression. As seen in Figure 8 , when compared to controls (control siRNA), no changes occurred in the ADAM9 siRNA group, while ADAM10 siRNA-mediated knockdown almost completely abrogated the induction of sAPPα and CTFα by NPD1. Similarly, moderate over-expression of BACE1 overturned the NPD1-induced reduction in Aβ42 peptides along with the sAPPβ sw and CTFβ fragments. These results, in combination with those shown in Figures 6 and 7 , strongly suggest that NPD1's regulatory action targeting βAPP processing may be mediated in part through coordinated up-regulation of the α-secretase ADAM10 and down-regulation of BACE1 enzymatic activity.
NPD1 is a PPARγ activator
PPARγ is a key anti-inflammatory and Aβ-lowering mediator, and several polyunsaturated fatty acids and their derivatives are ligands for PPARγ. Thus, we asked whether NPD1 influences PPARγ actions, and whether this could be related to its neuroprotective bioactivity. We first tested NPD1 as a potential PPARγ activator using primary human adipocyte differentiation. PPARγ is an adipogenesis modulator, and PPARγ agonists induce adipocyte differentiation; the adipogenesis assay was therefore used to screen potential PPARγ-active compounds. Primary human pre-adipocytes were treated with 0.1, 1 and 5 μM of NPD1 or DHA during differentiation induction (see Figure 9 ). Ligand-induced differentiation was assessed by Oil Red O staining. NPD1 enhanced differentiation of the primary human pre-adipocytes, while equivalent doses of its precursor DHA displayed little adipogenic activity, suggesting that NPD1 does display PPARγ activity ( Figure 9 and Figure S2 ). To further evaluate the activity of NPD1, we used a cell-based PPARγ transactivation reporter assay. HNG cells co-transfected with hPPARγ-GAL4 and MH100-tk-luc were incubated with increasing concentrations (0.1, 1.0, 5.0 and 10 μM) of NPD1 or DHA for 24 h. NPD1, but not its precursor DHA, increased reporter activity in a dose-dependent manner, indicating that NPD1 acts as an activator of an inducible PPARγ response ( Figure 10A ).
The anti-amyloidogenic effect of NPD1 is PPARγ-dependent
We further examined whether PPARγ is involved in the regulation by NPD1 of βAPP processing. First, we studied the effect of PPARγ on Aβ42 peptide production in HNG cells over-expressing βAPP sw by either transiently-transfecting PPARγ cDNA or using the PPARγ agonist, rosiglitazone. The efficiency of transfection and subcellular localization of both proteins were monitored by immunofluorescence and Western blotting. The majority of PPARγ expression was found to be associated with the HNG cell nuclei ( Figure 10B,C ). In both PPARγ-transfected and PPARγ agonist-treated HNG cells, we observed a decrease in the amount of secreted total Aβ and Aβ42. The decrease was comparable to that conferred by NPD1 treatment. To determine whether PPARγ is required in this action, HNG cells were also incubated with the PPARγ antagonist, GW9662. GW9662 reversed the Aβ peptide reduction in NPD1-treated cells and in PPARγ over-expression or PPARγ agonist-treated cells as well ( Figure 11A ). These results suggest that PPARγ is required for NPD1's regulation of Aβ release. To further define the action of PPARγ on βAPP processing and its implication in the anti-amyloidogenic effect of NPD1, we analyzed the levels of βAPP fragments using the above treatments. Similar to NPD1-induced reduction in sAPPβ and CTFβ, in cells over-expressing PPARγ or treated with PPARγ agonist, these two β-secretase cleavage products were substantially down-regulated ( Figure 11B,D,E ). Just as in the case of Aβ peptide release, this down-regulatory effect was reversed by the addition of PPARγ antagonist in all relevant treatment groups ( Figure 11B,D,E ). Note that GW9662 alone caused no changes in either sAPPβ sw or CTFβ ( Figure 11C ). In contrast, unlike NPD1, PPARγ overexpression or PPARγ agonist did not modify the levels of sAPPα or CTFα. Nor did the PPARγ antagonist abolish the NPD1-induced increase in these fragments ( Figure 11B,D,E ). 
Meanwhile, no changes in holo-βAPP by PPARγ were observed ( Figure 11B ). These data suggest that PPARγ is involved in NPD1's regulation via the β-secretase pathway but not via the α-secretase pathway. We next examined the levels of ADAM10 and BACE1, the putative α- and β-secretases that are actively involved in NPD1's modulation of βAPP processing. In agreement with the alterations in levels of βAPP fragments, PPARγ activation reduced the steady-state level of BACE1 expression but did not affect ADAM10. PPARγ antagonism abolished the NPD1-induced decrease in BACE1 but was not able to reverse the increase in mature ADAM10 levels ( Figure 11B,F ). | Discussion
DHA partially counteracts cognitive decline in the elderly [11] . Moreover, omega-3 essential fatty acid-rich diets are associated with a trend toward reduced risk for MCI and for MCI conversion to AD, and DHA has been shown to be beneficial in transgenic AD models [8] , [10] , [11] , [16] , [50] . NPD1, derived from DHA by 15-lipoxygenase-1 (15-LOX-1), displays neuroprotective bioactivity in brain and retinal cells against various insults, including oxidative injury, ischemia-reperfusion and inflammation [9] , [17] , [18] , [51] – [53] . Both AD brain [9] and the 3xTg-AD mouse exhibit reductions in DHA and NPD1 ( Figure 1 ). In this study we further characterized the anti-inflammatory and anti-apoptotic activity of NPD1 in co-cultures of HNG cells stressed with Aβ42 oligomers, and studied the NPD1-mediated modulation of α- and β-secretase activity that resulted in reduced shedding of Aβ42.
AD is marked by synaptic damage, neuronal atrophy and cell death in the hippocampus and entorhinal cortex [4] , [54] – [56] . Neurotoxicity induced by Aβ42 aggregates appears to drive microglial-mediated neuroinflammatory responses and apoptosis [3] , [4] , [50] , [57] . Oxidative stress, calcium overload, mitochondrial dysfunction and membrane impairments, along with activation of caspases and cell death, are associated with Aβ42 up-regulation [55] . We found that NPD1 promotes HNG cell survival after Aβ42-oligomer-mediated stress and reduces Aβ42-triggered apoptosis. NPD1 attenuated caspase-3 activation and decreased compacted nuclei and fragmented DNA [18] , [19] ( Figure 3 ). These observations are in agreement with the NPD1-mediated up-regulation of anti-apoptotic Bcl-2, Bcl-xl and Bfl-1 expression and the decrease in the pro-apoptotic expression of Bax, Bad and Bik [9] , [18] .
Neuroinflammatory neurodegeneration associated with Aβ42 is an important contributory event in AD neuropathology [54] , [56] . In these experiments primary HNG cells were used, as human primary neurons do not survive well in the absence of glial cells [9] , [29] ( Figure 2 ). While we cannot exclude the possibility that glial cells provide some neuroprotective 'shielding', both neuronal and glial cells release cytokines when exposed to Aβ42 that, in turn, activate more microglia and astrocytes, reinforcing pathogenic signaling. NPD1 is anti-inflammatory and promotes inflammatory resolution [17] , [18] , [37] , [53] . In HNG cell models of Aβ42 toxicity, microarray and Western blot analysis revealed down-regulation of pro-inflammatory genes (COX-2, TNF-α and B94), suggesting that NPD1's anti-inflammatory bioactivity targets, in part, this gene family [9] . These effects are persistent, as shown by time-course Western blot analysis in which protein expression was examined up to 12 h after treatment with Aβ42 and NPD1.
Although counteracting Aβ42-induced neurotoxicity is a promising strategy for AD treatment, curbing excessive Aβ42 release during neurodegeneration is also desirable. DHA could lower Aβ42 load in the CNS by stimulating non-amyloidogenic βAPP processing, reducing PS1 expression, or by increasing the expression of the sortilin receptor SorLA/LR11 [8] , [21] , [41] , [58] . In contrast to a previous report by Green et al. [16] , which suggested that Aβ peptide reductions in whole-brain homogenates of 3xTg-AD mice after dietary supplementation with DHA were the result of decreases in the steady-state levels of PS1, our experiments in primary HNG cells showed no effect of NPD1 on PS1 levels, but a significant increase in ADAM10 coupled to a decrease in BACE1 ( Figure 5 ). These latter observations were further confirmed by both activity assays ( Figures 6 and 7 ) and siRNA knockdown ( Figure 8 ). NPD1 reduces Aβ42 levels released from HNG cells over-expressing APP sw in a dose-dependent manner. Our examination of other βAPP fragments revealed that, after NPD1 addition, a reduction in the β-secretase products sAPPβ sw and CTFβ occurred, along with an increase in the α-secretase products sAPPα and CTFα, while levels of βAPP expression remained unchanged in response to NPD1. Hence these abundance- and activity-based assays indicate a shift by NPD1 in βAPP processing from the amyloidogenic to the non-amyloidogenic pathway. Previously, sAPPα was found to promote NPD1 biosynthesis from DHA [9] ; in the present study, NPD1 stimulates sAPPα secretion, creating positive feedback and neurotrophic reinforcement. Secreted sAPPα's beneficial effects include enhanced learning, memory and neurotrophic properties [6] . NPD1 further down-regulated the β-secretase BACE1 and activated ADAM10, a putative α-secretase. Our ADAM10 siRNA knockdown and BACE1 over-expression experiments confirmed that ADAM10 and BACE1 are required for NPD1's regulation of βAPP.
NPD1 therefore appears to function favorably in both of these competing βAPP processing events.
PPARγ activation leads to anti-inflammatory, anti-amyloidogenic and anti-apoptotic bioactivity, as does NPD1. Some fatty acids are natural ligands for PPARγ, which has a predilection for binding polyunsaturated fatty acids [59] – [61] . Our hypothesis that NPD1 is a PPARγ activator was confirmed by results from both the human adipogenesis assay and the cell-based transactivation assay ( Figures 9 and 10 ). NPD1 may activate PPARγ via direct binding or other interactive mechanisms [33] , [62] . Analysis of βAPP-derived fragments revealed that PPARγ does play a role in the NPD1-mediated suppression of Aβ production. Over-expressing PPARγ or incubation with a PPARγ agonist led to reductions in Aβ, sAPPβ and CTFβ similar to those seen with NPD1 treatment, while a PPARγ antagonist abrogated these reductions. Activation of PPARγ signaling is further confirmed by the observation that PPARγ activity decreased BACE1 levels, and a PPARγ antagonist overturned this decrease. Thus, the anti-amyloidogenic bioactivity of NPD1 is associated with activation of PPARγ and the subsequent down-regulation of BACE1. The difference between the NPD1 concentrations required for anti-apoptotic and anti-amyloidogenic activities (50 nM vs. 500 nM) may be due to the different cell models used (i.e., Aβ-peptide-stressed vs. βAPP sw -over-expressing HNG cells) and/or related mechanisms.
Although Aβ-lowering effects of PPARγ have been reported, the molecular mechanism of this action remains unclear. Induction of βAPP ubiquitination, which leads to enhanced βAPP degradation and reduced Aβ peptide secretion, has been suggested [60] . Alternatively, Aβ clearance might be involved, or regulation by PPARγ may be due to enhancement of insulin sensitivity and increases in brain insulin-degrading enzyme [59] . Our results suggest that decreases in BACE1 may be the cause of Aβ reduction [27] , [63] . A reason for these conflicting reports may be that the cell models and culture conditions used varied; in our study, we used HNG cells transiently over-expressing βAPP sw , while previous reports employed cell lines with stable βAPP expression. Similar to the model of Sastre et al. [63] , our cells underwent Aβ overproduction. Excessive Aβ causes inflammatory responses in both neuronal and glial cells [27] . Since inflammatory signaling plays a role in AD pathogenesis, we believe HNG cell cultures are a valuable model for Aβ42-mediated cellular actions. The fact that comparable results were obtained in our study at a much lower drug concentration (0.5 μM of rosiglitazone vs. 10–30 μM in previous reports) ( Figure 10 ) underscores the highly sensitive nature of HNG cells after βAPP transfection. It is still possible that PPARγ represses BACE1 by antagonizing the activities of other transcription factors that promote BACE1 expression, such as STAT1, NF-κB and AP1 [64] . It is noteworthy that BACE1 expression in HNG cells was increased after βAPP over-expression. The fact that PPARγ did not affect the levels of sAPPα and CTFα, together with the inability of the PPARγ antagonist to reverse the NPD1-elicited increase in these fragments, clearly shows that PPARγ is not essential for NPD1's regulation of the non-amyloidogenic pathway.
Further analysis of ADAM10 showed no change in ADAM10 following PPARγ activation, nor did the PPARγ antagonist affect NPD1-enhanced expression of mature ADAM10. Therefore, modulation by NPD1 of α-secretase activity and βAPP processing is independent of PPARγ. ADAM10 is synthesized as an inactive zymogen and is processed to its mature form by cleavage of the pro-domain by pro-protein convertases (PPCs), such as furin and PC7 [65] . Other evidence has also demonstrated that protein kinase C (PKC) and mitogen-activated protein (MAP) kinases, particularly extracellular signal-regulated kinases (ERK1/2), are involved in the regulation of α-secretase activity [62] , [66] , [67] . No cross-talk between the PPCs and PKC or MAP kinases has been reported. Since in our study only mature ADAM10 was increased, it is likely that the PPCs are implicated in NPD1's actions.
The PPARγ antagonist GW9662 also failed to reverse the anti-apoptotic effect of NPD1, indicating that PPARγ is not implicated in NPD1's anti-apoptotic bioactivity ( Figure 10 ). NPD1 attained this neuroprotection at a concentration of 50 nM, far below the concentrations at which it displays PPARγ activity in the in vitro system. Other mechanisms have been proposed to explain DHA's anti-apoptotic and anti-inflammatory effects, including maintenance of plasma membrane integrity, activation of Akt signaling [68] , and conversion into other derivatives [23] , [50] . These findings also provide clues to NPD1's potential targets. NPD1 inhibits NF-κB activation and COX-2 expression in brain ischemia-reperfusion [17] , while Aβ peptide-induced apoptosis is associated with ERK- and p38 MAPK-NF-κB-mediated COX-2 up-regulation [44] . Neuroprotection mediated by NPD1 may further involve components of signaling pathways upstream of NF-κB activation and DNA binding [9] .
Our results provide compelling evidence that NPD1 is endowed with strong anti-inflammatory, anti-amyloidogenic, and anti-apoptotic bioactivities in HNG cells upon exposure to Aβ42 oligomers, or in HNG cells over-expressing βAPP sw . These results suggest that NPD1's anti-amyloidogenic effects are mediated in part through activation of the PPARγ receptor, while NPD1's stimulation of non-amyloidogenic pathways is PPARγ-independent. Suggested sites of NPD1 actions are schematically presented in Figure 11 . NPD1 stimulation of ADAM10 coupled to suppression of BACE1-mediated Aβ42 secretion clearly warrants further study, as these dual secretase-mediated pathways may provide effective combinatorial or multi-target approaches in the clinical management of the AD process. | Conceived and designed the experiments: YZ WJL NGB. Performed the experiments: YZ WJL. Analyzed the data: YZ WJL NGB. Contributed reagents/materials/analysis tools: JWW CJ NAP FC. Wrote the paper: YZ WJL NGB.
Current address: University of Texas Health Sciences Center, Houston, Texas, United States of America
Neuroprotectin D1 (NPD1) is a stereoselective mediator derived from the omega-3 essential fatty acid docosahexaenoic acid (DHA) with potent inflammatory resolving and neuroprotective bioactivity. NPD1 reduces Aβ42 peptide release from aging human brain cells and is severely depleted in Alzheimer's disease (AD) brain. Here we further characterize the mechanism of NPD1's neurogenic actions using 3xTg-AD mouse models and human neuronal-glial (HNG) cells in primary culture, either challenged with Aβ42 oligomeric peptide, or transfected with beta amyloid precursor protein (βAPP) sw (Swedish double mutation APP695 sw , K595N-M596L). We also show that NPD1 downregulates Aβ42-triggered expression of the pro-inflammatory enzyme cyclooxygenase-2 (COX-2) and of B-94 (a TNF-α-inducible pro-inflammatory element) and apoptosis in HNG cells. Moreover, NPD1 suppresses Aβ42 peptide shedding by down-regulating β-secretase-1 (BACE1) while activating the α-secretase ADAM10 and up-regulating sAPPα, thus shifting the cleavage of βAPP holoenzyme from an amyloidogenic into the non-amyloidogenic pathway. Use of the thiazolidinedione peroxisome proliferator-activated receptor gamma (PPARγ) agonist rosiglitazone, the irreversible PPARγ antagonist GW9662, and overexpressing PPARγ suggests that the NPD1-mediated down-regulation of BACE1 and Aβ42 peptide release is PPARγ-dependent. In conclusion, NPD1 bioactivity potently down regulates inflammatory signaling, amyloidogenic APP cleavage and apoptosis, underscoring the potential of this lipid mediator to rescue human brain cells in early stages of neurodegenerations. | Supporting Information | We would like to thank Dr. Dennis Selkoe, Harvard Medical School (Boston, MA) for providing APP695wt plasmid; Dr. Todd Golde, Mayo Clinic (Jacksonville, FL) for providing the βAPP695 sw plasmid; and Dr. Ronald Evans at Salk Institute (San Diego, CA) for providing MH100-tk-luc and PPARγ-GAL4 plasmids. 
We also would like to thank Changde Zhang at LSUHSC (New Orleans, LA) for technical assistance. | CC BY | no | 2022-01-13 08:14:23 | PLoS One. 2011 Jan 5; 6(1):e15816 | oa_package/51/ba/PMC3016440.tar.gz |
|
PMC3016441 | 21264334 | Introduction
There are many situations in which we would like to determine the haplotype composition of a polymorphic sample or population. While rapidly decreasing sequencing costs are making it feasible to accomplish this by simply sequencing large numbers of individual samples from the population (perhaps using bar-coding), it is often impossible or very costly to obtain individual samples, because doing so would involve labor-intensive cloning and culturing. Currently, to analyze different pathogen strains, scientists usually culture samples to isolate the strains and sequence them separately [1] , [2] , [3] ; determining the relative frequencies of strains within hosts this way would be prohibitively expensive. Analogously, in cancer studies, techniques have been developed for identifying and extracting oncocytes from tissue before further analysis [4] , [5] , [6] , but determining the proportions of different cell lineages in tissue this way is not practicable. In the field of mRNA expression studies, many efforts have gone into experimental methods for isolating alternatively spliced isoforms [7] , [8] , but these are again not suitable for high-throughput analysis. Besides such general applications, there are many specific problems involving pooled haplotype analysis. A recent example is mitochondrial heteroplasmy, which is believed to impact aging [9] . If one wants to use sequencing for further investigation at the haplotype level (instead of the single-marker level), a haplotype analysis tool is needed.
In addition to naturally pooled samples, it is sometimes sensible to pool samples artificially simply to reduce cost. For example, it is going to be common to pool samples with extreme phenotypes for association mapping [10] , and one can also envision monitoring haplotype frequency changes in cohorts by sequencing. As long as it is possible to infer haplotype frequencies, low-coverage sequencing (where the total coverage could be the same as, or even smaller than, the number of haplotypes) may well be more cost-effective than sequencing a large number of individuals.
To facilitate these applications, we provide PoolHap, a computational tool for inferring haplotype frequencies from pooled samples when the haplotypes are known. The PoolHap pipeline assumes that the investigators have sequenced a pool of samples and have run a mapping tool (e.g., BWA [11] ) to map the short reads to a reference genome and call SNPs from the consensus sequence. For each bi-allelic polymorphic site called, PoolHap takes the ratio between the number of reads supporting each allele and the total coverage as the observed allele frequency. PoolHap then infers the haplotype frequencies.
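As a toy illustration of this first step (our own example, not PoolHap's actual code; the read counts are made up), the observed allele-frequency vector is simply the read-count ratio at each selected SNP:

```python
# Toy illustration: turn per-SNP read counts from a pooled alignment
# into the observed allele-frequency vector Y used by the regression.
# Each entry pairs the read count supporting the allele coded "1"
# with the total coverage at that SNP.
def observed_frequencies(read_counts):
    """read_counts: list of (reads_supporting_allele1, total_coverage)."""
    return [ones / total for ones, total in read_counts if total > 0]

counts = [(12, 20), (3, 10), (30, 40)]   # made-up coverage values
y = observed_frequencies(counts)
print(y)  # [0.6, 0.3, 0.75]
```

SNPs with zero coverage are skipped here, since they contribute no information to the regression.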
The key insight into why PoolHap works is that the very large number of SNPs that comes with genome-wide coverage can compensate for uneven coverage across the genome. We use a regression model. Assume we have h potential known haplotypes and many SNPs, from which we choose the n most informative bi-allelic SNPs (see Methods for how we define and find informative SNPs). We model the allele frequency vector (observed in the assembly) as a dependent variable Y , and the SNP alleles in the haplotypes as independent variables X 1 , ..., X h , where each X i is an n -vector of 0s and 1s encoding the alleles of the i th haplotype; the haplotype frequencies are the coefficients b 1 , ..., b h . Solving the regression Y = b 1 X 1 + ... + b h X h then yields the estimated haplotype frequencies (see Methods for the precise formalization and methodology). The advantages of this model are that (1) the estimation of frequencies does not suffer from co-linearity between the X i s, and (2) it is robust to coverage bias between regions (see Methods for detailed arguments). In what follows, we examine the performance of PoolHap under various conditions using both simulated and real data, and discuss its advantages as well as limitations.
PoolHap main algorithm
Assume we have h potential haplotypes and many SNPs, from which we choose the n most informative bi-allelic SNPs (see the next subsection for how we define and find informative SNPs). We model the SNP allele frequencies (observed in the assembly) as a dependent variable Y , and the SNP alleles in the haplotypes as independent variables X 1 , ..., X h , where each X i is an n -vector of 0s and 1s encoding the alleles of the i th haplotype; the haplotype frequencies are the coefficients b 1 , ..., b h . Solving the regression Y = b 1 X 1 + ... + b h X h then yields the estimated haplotype frequencies. The details follow.
Let us say we have n SNPs chosen from the mixed assembly. We model the allele frequency as a dependent variable, and consider the actual frequencies vector (observed in the assembly) as a sample of n realizations of this variable.
At the same time, we model each haplotype as an independent variable X 1 , ..., X h , where each X i is realized as an n -vector of 0s and 1s according to its alleles at the corresponding SNPs.
We model the relationship between the observed assembly and the haplotypes as a random equation:
Y = b 1 X 1 + ... + b h X h + ε,
where the coefficients b 1 , ..., b h are the haplotype frequencies. Solving the regression Y = b 1 X 1 + ... + b h X h then yields the estimate of the haplotype frequencies. If we denote the estimated frequencies as b̂ 1 , ..., b̂ h , then the error can be specified as
ε = Y − ( b̂ 1 X 1 + ... + b̂ h X h ).
Here we assume that the coverage of different SNPs is independently sampled from the pool, which is approximately true. The exceptions could be that (1) there is a duplicated/deleted region carrying multiple SNPs, or (2) the SNP density is so high that one short read covers multiple SNPs. Nevertheless, this assumption is not crucial for solving the regression.
The first advantage of this model is that it does not suffer from the co-linearity between the X i s that frequently arises in regression analysis, which typically yields coefficient estimates for correlated independent variables with very large absolute values and opposite signs, as long as their sum is preserved. The key point here is that, since we code one allele as 0 and the other as 1, the 0 allele does not contribute to the sum at all; only the coefficients on the 1s take effect in the regression. For example, haplotypes (0,0,0,1) and (1,1,1,0) are strongly correlated, but changing the coefficient of (0,0,0,1) cannot compensate for a change in the coefficient of (1,1,1,0).
Another point is that we model the relationship between the haplotypes and the mixed assembly and treat SNPs as samples. Thus biased coverage at a particular SNP or region can be regarded as sampling variance. As long as we have a large number of informative SNPs, i.e., a large sample size, we obtain a good estimate. In this sense, the method is robust to the uneven coverage of NGS between haplotypes at particular SNPs, as long as the coverage is randomly distributed. For the same reason, very low coverage does not matter as long as there are sufficient SNPs. However, if the coverage is systematically biased, which may happen in practice, the model will be biased. One example we observed: for one strain, library construction was biased toward certain regions; when this strain was included, its inferred frequency was incorrect and the estimates for the remaining strains were affected.
An interesting question we have not discussed so far is how to code the alleles. Should we code major alleles as 1 and minor alleles as 0, or the other way around? The answer is that neither is correct: we should simply choose an allele at random to be coded 1. The reason is that the "major" allele has to be determined from the mixed assembly, which is not reliable. Consider the following extreme example: there is a SNP with alleles A/T, and the real allele frequencies in the mixed haplotypes are 0.5 vs. 0.5. Naturally, the observed frequency will not be exactly 0.5. If one fixes the coding of the major allele as 1, then regardless of whether the observed frequency of A is 0.4 or 0.6, one always selects the allele whose observed frequency is greater than 0.5, introducing a systematic bias. A similar situation applies in non-extreme cases as well.
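To make the regression concrete, the following is a minimal, dependency-free sketch (a toy reimplementation for illustration, not the released PoolHap code) that solves Y = b 1 X 1 + ... + b h X h by ordinary least squares via the normal equations and recovers the haplotype frequencies exactly in the noise-free case:

```python
# Sketch of the PoolHap regression idea (illustrative reimplementation):
# haplotypes are 0/1 allele vectors, the observed pooled allele
# frequencies form Y, and least squares recovers the frequencies b.

def solve_normal_equations(X, y):
    """Solve min ||Xb - y||^2 via the normal equations (X^T X) b = X^T y.
    X is a list of n rows (one per SNP), each of length h."""
    h = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(h)]
         for i in range(h)]
    c = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(h)]
    # Gaussian elimination with partial pivoting.
    for col in range(h):
        piv = max(range(col, h), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, h):
            f = A[r][col] / A[col][col]
            for j in range(col, h):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    b = [0.0] * h
    for i in range(h - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, h))) / A[i][i]
    return b

# Three haplotypes over five SNPs (alleles coded 0/1 arbitrarily).
haps = [[1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 1, 1]]
true_freqs = [0.5, 0.3, 0.2]
# Noise-free pooled frequencies: y_k = sum_i freq_i * allele_ik.
y = [sum(f * haps[i][k] for i, f in enumerate(true_freqs))
     for k in range(5)]
X = [[haps[i][k] for i in range(3)] for k in range(5)]  # one row per SNP
est = solve_normal_equations(X, y)
print([round(b, 3) for b in est])  # [0.5, 0.3, 0.2]
```

With noisy per-SNP frequencies, the same solver returns estimates whose error shrinks as the number of informative SNPs grows, which is the robustness argument made above.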
SNP selection algorithm
As a regression-based method, the PoolHap algorithm favors configurations in which the correlations between independent variables are small. However, as stated in the above subsection, the special 0/1 configuration here guarantees that correlation between two haplotypes does not matter as long as they differ at many homologous alleles. The optimal configuration is therefore reached when the differences between all pairs of haplotypes are as large as possible. More precisely, one should choose SNPs such that the proportion of differing alleles between any two haplotypes is large, and we accordingly define the informative SNPs as the set of SNPs that maximizes this proportion. In this subsection, we first derive the average pairwise difference as a theoretical upper bound, and then propose our criterion and algorithm based on the smallest pairwise difference.
From the derivation below, we know that the mean difference over all pairs of haplotypes is fixed once the population allele frequencies are fixed. (Note that in the previous subsection, "allele frequency" denoted the frequency observed in the assembly, i.e., the number of reads supporting an allele divided by the total number of reads covering that location. Here, by "allele frequency", we mean the number of haplotypes carrying the allele divided by the total number of haplotypes.)
Let us say we have h haplotypes and n SNPs with allele frequencies ( f 1 , ..., f n ). We use h i to denote the i th haplotype and h i,m to denote the m th SNP of the i th haplotype. At SNP m , the number of haplotype pairs carrying different alleles is hf m · h (1 − f m ), out of h ( h − 1)/2 pairs in total, so the mean difference proportion between pairs of haplotypes is
(1/ n ) Σ m 2 hf m (1 − f m )/( h − 1).
Therefore the mean proportion of differing SNPs between haplotypes is determined by the population allele frequencies alone, regardless of how the alleles are distributed among the haplotypes.
The maximum of the above expression can be derived as
(1/ n ) Σ m 2 hf m (1 − f m )/( h − 1) ≤ h /(2( h − 1)),
where the equality holds when, for all m , f m = 0.5.
The above derivation gives an upper bound on the performance of SNP selection. However, selecting SNPs with MAF = 0.5 does not necessarily give the best configuration: even at the maximal mean difference, there might be a pair of haplotypes with very little difference between them, and then a problem similar to co-linearity arises. We therefore use the criterion that the smallest proportion of difference among all pairs of haplotypes should be maximized. To maximize this measure, we use the following simple greedy algorithm:
Initially, we randomly choose n SNPs as an initial selection, S 0 . Then, in the iterative process, from each current selection S i we identify the pair of haplotypes with the smallest proportion of difference and change one SNP so that this smallest proportion increases. If this change lowers the difference between another pair of haplotypes, we discard the attempt; otherwise we accept the new selection S i+1 . This process is iterated until either we reach a satisfactory smallest proportion (45% of the SNPs by default, or as specified by the user) or the number of iterations reaches a pre-specified threshold. We replicate the whole process 20 times to avoid local maxima.
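The iterative search just described can be sketched as a simple hill-climb (a simplified variant for illustration, not the released code; the random-swap acceptance rule and function names are our assumptions):

```python
import random
from itertools import combinations

# Simplified sketch of greedy SNP selection: repeatedly swap one selected
# SNP for an unselected candidate whenever the swap raises the smallest
# pairwise difference proportion between haplotypes.

def min_pairwise_diff(haps, snps):
    """Smallest, over all haplotype pairs, proportion of selected SNPs
    at which the two haplotypes carry different alleles."""
    return min(sum(a[s] != b[s] for s in snps) / len(snps)
               for a, b in combinations(haps, 2))

def greedy_select(haps, n_snps, iters=2000, seed=0):
    rng = random.Random(seed)
    total = len(haps[0])
    selected = rng.sample(range(total), n_snps)
    score = min_pairwise_diff(haps, selected)
    for _ in range(iters):
        pos = rng.randrange(n_snps)
        cand = rng.randrange(total)
        if cand in selected:
            continue
        trial = selected[:]
        trial[pos] = cand
        trial_score = min_pairwise_diff(haps, trial)
        if trial_score > score:       # accept only strict improvements
            selected, score = trial, trial_score
    return selected, score

# Four synthetic haplotypes over 60 candidate SNPs.
rng = random.Random(42)
haps = [[rng.randint(0, 1) for _ in range(60)] for _ in range(4)]
sel, score = greedy_select(haps, 10)
```

Restarting from several random initial selections, as the text describes, reduces the risk of stopping at a local maximum.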
In theory, this greedy algorithm does not guarantee a globally optimal selection. In practice, however, we found that the smallest proportion approaches the upper bound derived above within around 10,000 iterations when finding 20,000 SNPs from whole-genome A. thaliana data. When the candidate SNPs have minor allele frequencies near 0.5, so that the upper bound is maximized, the algorithm can also approach that bound.
In practice, if the number of SNPs needed is relatively small compared with the whole data set, the PoolHap SNP selection program halts easily; otherwise, it will resort to SNPs of lower quality. Users can specify the number of SNPs needed in the analysis.
Beyond the above procedure for selecting informative SNPs, we also suggest that users select SNPs so as to avoid mapping errors, SNP-calling errors, and structural rearrangements. Current mappers for NGS (e.g., BWA [11] ) usually report SNP quality and coverage for each SNP; we suggest selecting SNPs with high SNP quality and moderate coverage. We do not provide this function as part of the PoolHap pipeline because of the many existing mappers, SNP-calling methodologies, and file formats.
Sub-function for RNA-Seq data
The problem of detecting the frequencies of multiple mRNA isoforms from RNA-Seq data can be handled by a simple modification of the PoolHap method. As stated in the RNA-Seq simulation section, there are two small problems in applying the PoolHap algorithm directly to RNA-Seq data: (1) some isoforms share the same exon but not exactly the same coordinates; (2) because of the read length, the coverage of the first and last exons will be lower than expected from the actual frequency. Therefore, in addition to the main algorithm, we developed a sub-function to process RNA-Seq data.
For problem (1), analogous to the approach adopted by Jiang and Wong [15] , we treat the difference between isoforms on the same exon as another exon. For example, suppose isoforms A and B share an exon located roughly at chromosome 2, coordinates 100 to 200+, but precisely the exon in A runs from 100 to 200 and in B from 100 to 210. We then extract 200 to 210 as a separate exon, so that A and B share a new exon from 100 to 200, while B has an additional "exon" from 200 to 210 that A lacks.
For problem (2), we check the lengths of the first and last exons (pseudo-exons generated in the previous step are also counted). If they are not significantly longer than the read length, we remove them before entering the regression. We use a cut-off requiring the exon length to be larger than 5 times the read length; in practice, users can tune this parameter.
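The two steps above can be sketched as follows (an assumed reimplementation of the idea for illustration, not PoolHap's actual sub-function; the half-open interval convention and names are ours):

```python
# Sketch of the RNA-Seq preprocessing step: split exons at every boundary
# used by any isoform, so partially overlapping exons become shared
# fragments plus isoform-specific pseudo-exons, then drop terminal
# fragments that are too short relative to the read length.

def fragment_isoforms(isoforms, read_len, min_factor=5):
    """isoforms: list of exon lists, each exon a (start, end) half-open
    interval. Returns (fragments, 0/1 presence rows, one per isoform)."""
    cuts = sorted({p for iso in isoforms for s, e in iso for p in (s, e)})
    frags = [(a, b) for a, b in zip(cuts, cuts[1:])
             if any(s <= a and b <= e for iso in isoforms for s, e in iso)]
    rows = [[1 if any(s <= a and b <= e for s, e in iso) else 0
             for a, b in frags] for iso in isoforms]
    # Remove first/last fragments not significantly longer than a read.
    keep = [k for k, (a, b) in enumerate(frags)
            if not (k in (0, len(frags) - 1)
                    and b - a <= min_factor * read_len)]
    return [frags[k] for k in keep], [[r[k] for k in keep] for r in rows]

iso_a = [(100, 200), (300, 400)]
iso_b = [(100, 210), (300, 400)]          # same first exon, longer 3' end
frags, rows = fragment_isoforms([iso_a, iso_b], read_len=1)
print(frags)  # [(100, 200), (200, 210), (300, 400)]
print(rows)   # [[1, 0, 1], [1, 1, 1]]
```

The presence rows then play the role of the haplotype vectors in the main regression, with per-fragment coverage supplying the dependent variable.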
Please note that this sub-function operates on the isoforms generated by different gene models. If the user instead wants the relative abundance of paternal and maternal transcripts, based on heterozygous SNPs in the mRNA, the standard PoolHap should be used, treating the paternal and maternal transcripts as known haplotypes.
Inferring the frequencies of known haplotypes in a pooled sample of simulated Arabidopsis thaliana data
To examine how the performance of PoolHap depends on SNP number and sequencing coverage when the haplotypes are known, we use simulations based on real sequencing data from A. thaliana generated as part of the 1,001 Genomes Project ( http://www.1001genomes.org ). We have sequenced the whole genomes of over 50 Swedish strains, with around 20 million 76-bp paired-end reads per strain, yielding around 20x coverage. We mapped all reads to the Col-0 reference to call SNPs using BWA and SAMtools [12] . The number of SNPs per strain ranges from 183,883 to 523,756. Because these strains are inbred lines, most of the SNPs are homozygous.
Based on the above SNPs identified in the read data, pools of 6 strains and of 34 strains were simulated. For each simulation, we select as candidates all SNPs that appear in 40%–60% of the strains, have the maximal SAM quality score of 255 [12] , and are bi-allelic in all strains. We specify the number of SNPs to be used, and select these as the most informative ones using PoolHap's SNP-selection function. Finally, we generate random samples to achieve a pre-determined mean and standard deviation (SD) of coverage. We use the average absolute difference between true and inferred frequencies as the error metric. For each combination of coverage mean, coverage SD, and number of SNPs, we replicate 30 sets of random frequencies and take the average of the errors as the final error. The results for 34 strains are depicted in Figure 1 ( a ) and ( b ) , and those for 6 strains in Figure 1 ( c ) and ( d ) .
We find that even with as little as 2x coverage (standard deviation 0.5) and 1,000 selected SNPs (from 189k SNPs), PoolHap is able to estimate the haplotype frequencies to within 2% of their true values ( Figure 1 ( b ) ). Results are only slightly better when the coverage standard deviation is reduced to zero ( Figure 1 ( a ) ), indicating that the PoolHap algorithm is robust to the stochastic coverage variation that frequently occurs with NGS. Errors fall below 1% as the coverage of selected SNPs increases. These observations confirm the intuition that PoolHap can successfully take advantage of a large number of SNPs to correctly infer haplotype frequencies despite low and uneven coverage.
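The underlying inference can be sketched as an ordinary least-squares regression of the pooled allele frequencies on the known haplotype-by-SNP matrix. The following simplified simulation is in the spirit of the one above; the parameter values and the use of `numpy.linalg.lstsq` are our illustrative assumptions, and PoolHap's actual solver may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

n_haps, n_snps = 6, 1000
H = rng.integers(0, 2, size=(n_haps, n_snps))   # known haplotypes (0/1 alleles)
true_f = rng.dirichlet(np.ones(n_haps))         # true pool frequencies

# Simulate uneven coverage: per-SNP depth ~ Normal(2, 0.5), then binomial
# sampling of the alternate allele at each SNP.
depth = np.maximum(1, rng.normal(2, 0.5, n_snps)).round().astype(int)
p_alt = true_f @ H                              # expected alt-allele frequency per SNP
alt_counts = rng.binomial(depth, p_alt)
observed = alt_counts / depth                   # noisy observed allele frequencies

# Ordinary least squares: solve H^T f ~ observed for f.
f_hat, *_ = np.linalg.lstsq(H.T.astype(float), observed, rcond=None)

mean_abs_err = np.mean(np.abs(f_hat - true_f))
print(mean_abs_err)  # small: many SNPs compensate for low, uneven coverage
```

The point of the sketch is the same as the paper's: with many SNPs, the regression averages out the per-SNP sampling noise caused by low and uneven coverage.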
Simulated A. thaliana gene expression data
PoolHap can also be applied to RNA-Seq data to infer the relative abundances of different transcripts of the same gene. In theory, the situation is the same as other haplotype-known problems: we can simply encode the exons as if they were SNPs, coding an exon as 1 if it is present in an isoform and 0 otherwise. However, this particular application has some differences: (1) some isoforms share the same exons but have different 5′ or 3′ termini; and (2) the coverage of the first and last exons is often artifactually lower. To accommodate these, we developed a sub-function specialized for RNA-Seq ( Methods ). We simulate A. thaliana RNA-Seq data with random isoform frequencies based on the gene models downloaded from the TAIR website ( www.arabidopsis.org ). Using the same read-simulation procedure and error measurement as in the previous simulation, we assess the performance of PoolHap on all A. thaliana genes with more than one isoform.
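The exon encoding described above can be sketched as follows (the gene model and exon identifiers are hypothetical):

```python
# Hypothetical gene model: exon ids per isoform.
gene_model = {
    "AT1G01010.1": ["E1", "E2", "E3", "E4"],
    "AT1G01010.2": ["E1", "E3", "E4"],   # skips E2
    "AT1G01010.3": ["E1", "E2", "E3"],   # shorter 3' end
}

# The union of exons defines the SNP-like columns; 1 = exon present in isoform.
exons = sorted({e for exs in gene_model.values() for e in exs})
encoding = {iso: [1 if e in exs else 0 for e in exons]
            for iso, exs in gene_model.items()}

for iso, row in sorted(encoding.items()):
    print(iso, row)
```

Each isoform thus becomes a binary "haplotype" over the exon columns, and the haplotype-known machinery applies unchanged.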
We find that with 50x coverage we can infer the frequencies of three-isoform genes with less than 5% error. Precision is marginally lower for more complex genes and improves with increased coverage. For some genes, the isoforms differ very little, or become identical after our processing to handle issues (1) and (2); the regression solver then cannot distinguish the independent variables (i.e., it encounters a singular matrix) and fails to find a solution. We treat these cases as “trivial gene models” and omit them from the performance evaluation. The results for the remaining genes are depicted in Figure 2 .
As one can see, the performance is not as good as in the whole-genome simulation. The reason is that, because the differences between isoforms of some genes are limited, the ability to distinguish them is also limited.
So far we have shown results on simulated data, which is a simplified version of real applications. However, several factors absent from the simulation, e.g., copy number variations, library duplications, and sequencing/mapping errors, may be important in real applications. In the following we test PoolHap with real NGS reads.
Application to a pooled sample of known A. thaliana haplotypes
We apply PoolHap to mixtures of subsets of the real read data described above. We randomly select a subset of reads from 6 (or 16) strains with predefined proportions, map these reads to the reference genome with BWA, call SNPs at the known polymorphic sites, and then use PoolHap to infer the proportions of the strains.
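The pooling of reads with predefined proportions can be sketched as follows (a simplified illustration; the strain names, read identifiers, and proportions are hypothetical):

```python
import random

random.seed(1)

def pool_reads(reads_by_strain, proportions, pool_size):
    """Draw `pool_size` reads, each strain contributing according to its
    predefined proportion (a simplified sketch of the pooling step)."""
    pool = []
    for strain, prop in proportions.items():
        n = round(pool_size * prop)
        pool.extend(random.sample(reads_by_strain[strain], n))
    random.shuffle(pool)  # mix the pool as sequencing would
    return pool

reads = {s: [f"{s}_read{i}" for i in range(1000)] for s in ["A", "B", "C"]}
pool = pool_reads(reads, {"A": 0.5, "B": 0.3, "C": 0.2}, 100)
print(len(pool))  # 100
```

The pooled reads would then be mapped and genotyped exactly as a real pooled sample.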
The total coverage of this dataset is 20x, which is currently the typical coverage of one Illumina lane for A. thaliana . The results are presented in Tables 1 and 2 . With 6 strains, we correctly infer the proportion of each strain with an average difference of 1.6% between the actual and predicted proportions. With 16 samples, results are similar, the average difference being 0.6%. The mixed read data sets are more difficult than simulations, probably owing to library duplications, genome structural rearrangements, sequencing errors, and mapping errors. For comparison, we also list the results inferred from simulation under the same conditions, which show how the performance decreases slightly with real data.

Discussion
As one can see from both the simulation and the real data analysis, the algorithm works well for whole-genome A. thaliana data. This is partially because we have plenty of SNPs to select from (a natural consequence of whole-genome resequencing). If fewer SNPs are available, the SNP-selection algorithm will have to be modified to select more individual-specific SNPs. This could also improve the performance on rare haplotypes, which are not well handled in the current version of PoolHap. We hope to improve this in future versions of PoolHap.
It should be noted that in the regression equation of the haplotype-known part of PoolHap, all the regression coefficients must be non-negative and their sum must equal 1. In our implementation, we have not enforced these constraints, partly because we have not found a good way to incorporate them, but also because they provide a means of checking the results. If the results are correct, i.e., close to the real frequencies, they must approximately satisfy the above constraints. If not, there is systematic bias in the coverage, indicating that the results of this run are not reliable (due to copy number variations, sequencing errors, library duplications, etc.). If we forced the regression to be solved subject to these constraints, we would lose this kind of valuable control.
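This post-hoc control can be sketched as a simple plausibility check on the unconstrained estimates (the 5% tolerance is our illustrative choice, not a value from the paper):

```python
def frequencies_plausible(freqs, tol=0.05):
    """Check the constraints deliberately left out of the regression:
    estimates should be (near) non-negative and sum to roughly 1.
    A failure suggests systematic coverage bias in this run."""
    return all(f >= -tol for f in freqs) and abs(sum(freqs) - 1.0) <= tol

print(frequencies_plausible([0.52, 0.29, 0.20]))    # plausible run
print(frequencies_plausible([0.80, 0.45, -0.20]))   # systematic bias suspected
```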
The current version of PoolHap is ready for haplotype-known applications. We are also developing extensions for haplotype-unknown problems. There are many scenarios where this may be applied, for example, when pathogen DNA has been extracted from a group of patients who may all carry infections of mixed parasite genotypes. The haplotypes in one sample may share genetic information with those in the others. In this case, we can exploit the LD in the population and infer the haplotype frequencies by iteratively sampling from the global haplotype distribution, as PHASE [13] does.
If one treats short indels as common variants like SNPs, PoolHap is ready to incorporate them into the analysis. However, owing to the immature state of indel calling on NGS platforms [14] , we have not applied this in our real data analysis. We believe it will become feasible in the near future.

Authors' contributions

Performed the experiments: VN. Analyzed the data: QL. Wrote the paper: QL DCJ MN. Conceived the study: QL MN CTS. Designed the algorithm: QL KY ZN. Wrote the program: QL QZ.
Abstract

With the advance of next-generation sequencing (NGS) technologies, increasingly ambitious applications are becoming feasible. A particularly powerful one is the sequencing of polymorphic, pooled samples. The pool can be naturally occurring, as in the case of multiple pathogen strains in a blood sample, multiple types of cells in a cancerous tissue sample, or multiple isoforms of mRNA in a cell. In these cases, it is difficult or impossible to partition the subtypes experimentally before sequencing, and the subtype frequencies must hence be inferred. In addition, investigators may occasionally want to artificially pool the samples of a large number of individuals for reasons of cost-efficiency, e.g., when carrying out genetic mapping using bulked segregant analysis. Here we describe PoolHap, a computational tool for inferring haplotype frequencies from pooled samples when haplotypes are known. The key insight into why PoolHap works is that the large number of SNPs that come with genome-wide coverage can compensate for the uneven coverage across the genome. The performance of PoolHap is illustrated and discussed using simulated and real data. We show that PoolHap is able to accurately estimate the proportions of haplotypes with less than 2% error for 34-strain mixtures at 2x total coverage using Arabidopsis thaliana whole-genome polymorphism data. This method should facilitate greater biological insight into heterogeneous samples that are difficult or impossible to isolate experimentally. Software and a user manual are freely available at http://arabidopsis.gmi.oeaw.ac.at/quan/poolhap/ .
PMC3016442 | 21143978

Background
Osteoporosis affects over 200 million people worldwide [ 1 ], and the fractures it can cause represent a considerable financial burden to healthcare systems [ 2 - 6 ]. This challenge is compounded by an increasingly aging population [ 2 , 6 , 7 ], particularly since the clinical consequences of osteoporosis can significantly impair quality of life, physical function, and social interaction and can lead to admission to long-term care [ 4 , 8 ]. Although guidelines are available for osteoporosis disease management [ 9 - 14 ], patients are not receiving appropriate diagnostic testing or treatment [ 15 - 17 ]. One potential solution to closing these practice gaps is to use clinical decision support systems (CDSSs), which can facilitate disease management by translating high-quality evidence at the point of care. We conducted a systematic review of randomised controlled trials to determine what features of current tools may support clinical decision-making in osteoporosis disease management [ 18 ]. Findings indicated that few osteoporosis CDSSs exist and that the disease-management components of interventions were lacking in most studies [ 18 ]. Interventions consisting of reminders and education targeted to physicians and patients appeared more promising for increasing osteoporosis investigations and treatment than did single-component or single-target interventions. Findings from the systematic review and input from clinicians and experts in information technology and human-factors engineering were used to develop a conceptual model of an osteoporosis tool. This model was qualitatively explored in a series of focus groups to determine how physicians perceived this conceptual model and which key features, functions, and evidence were needed to transform it into a functional prototype [ 19 ].
The resulting prototype tool is targeted to both physicians and patients and consists of three components: (1) an electronic osteoporosis Risk Assessment Questionnaire (RAQ) completed by eligible patients on a tablet PC in the clinic examination room; (2) a paper-based, Best Practice Recommendation Prompt (BestPROMPT) outlining appropriate osteoporosis disease-management recommendations for use by physicians at the point of care; and (3) a paper-based, Customised Osteoporosis Educational (COPE) sheet given to patients at the end of their physician visit. The first component of the tool ( i.e ., the RAQ) is designed so it can be completed on a tablet PC by eligible patients (men ≥65 years and women ≥50 years of age) in a clinic examination room during the 5- to 15-minute waiting period prior to the actual physician visit. Completion of the RAQ in the examination room provides privacy for patients and the ability to use the audio support feature of the tool. Patients can listen to the questions out loud (default) or turn off the sound at any time during the questionnaire. Once the questionnaire is completed, RAQ responses are processed using a decision algorithm programmed into the tablet PC, which automatically generates two paper-based outputs using a wireless printer: one for the physician ( i.e ., the BestPROMPT sheet) and one for the patient ( i.e ., the COPE sheet). The BestPROMPT provides a summary of the patient's RAQ responses, a section outlining appropriate osteoporosis disease-management recommendations ( e.g ., to initiate bone mineral density testing or osteoporosis medications such as bisphosphonates), and a graph to plot the patient's 10-year absolute fracture risk. These features were designed so that physicians would be able to use this information with their patients at the point of care. 
The COPE sheet summarizes patients' osteoporosis risks according to their RAQ responses and provides a section outlining osteoporosis information customised to their identified risks ( i.e ., an explanation of what each risk factors means, and what they can do about them).
Although information technology and CDSSs, such as the osteoporosis tool, can address important barriers to clinical practice and may enhance the safety, quality, and patient-centeredness of care while increasing efficiency [ 20 , 21 ], there is a growing body of evidence showing unanticipated and undesired consequences of implementing these systems [ 22 - 26 ]. Without careful consideration of system design, function, and end-user perspectives, these systems can fail if rushed into becoming an integral part of healthcare systems and practices, whether during rigorous evaluation or after implementation [ 27 ]. If information technology systems are integrated without evaluating how they might affect end users or their existing workflow, they have the potential to be ineffective, to function poorly, and to result in medical or technology-induced errors [ 22 , 23 ]. Usability testing is an investigation of human-computer interaction that provides practical feedback on the design of computer systems and user interfaces; it yields information about the process of using a system, characterizing the decision-making, reasoning skills, and information-processing needs of participants as they perform representative tasks requiring complex information processing [ 28 - 30 ]. Another important consideration in the prototype development process is iterative systems analysis, which involves evaluating the system during the design phase, followed by further cycles of redesign and testing. These evaluations are needed to ensure that the needs of end users take precedence over what researchers and designers might perceive as important user requirements. Importantly, iterative analysis is needed before a system is ever considered for implementation in clinical practice [ 31 ].
The objectives of the current study were to conduct a usability evaluation of the three components of the osteoporosis tool to assess how well the prototype meets functional goals (features, format, and interface) and usability needs (outcome impact goals and end users' requirements and information needs) and to determine end users' perceptions of the facilitators and barriers to using the prototype at the point of care. | Methods
To determine if the osteoporosis prototype meets the usability needs of all end users, a usability study was planned for each component of the tool: usability study 1 (the BestPROMPT); usability study 2 (the RAQ), and usability study 3 (the COPE sheet). All three studies were designed according to the usability framework described by Kushniruk and Patel [ 28 ] because it promotes an evidence-based approach to deriving knowledge and is regarded as the most useful method for testing usability in the medical context [ 32 , 33 ]. It was anticipated that the osteoporosis tool would be changed iteratively throughout the usability evaluation studies, retested and evaluated, and a final modification made once the desired functionality and usability were achieved.
All usability studies were approved by the University of Toronto and St. Michael's Hospital research and ethics boards, and a written informed consent was obtained from all participants. All studies consisted of 30- to 60-minute, one-on-one sessions with an experienced moderator using a standardised, structured worksheet combined with a semistructured discussion using open-ended questions to evaluate each tool component. Participants were encouraged to 'think aloud' and verbalise their thoughts about the component being tested. The target sample size for each study was five to eight participants because evidence indicates that 70% of severe usability problems can be uncovered within the first five users and up to 85% by the eighth user, after which the yield of identified problems tends to drop and is also less significant [ 28 , 34 ].
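The sample-size rationale above is often modeled with the classic problem-discovery curve, 1 - (1 - p)^n, where p is the per-user probability of uncovering a given problem. With p of about 0.21 (an illustrative value chosen for consistency, not one reported in the cited studies), the quoted figures of roughly 70% for five users and 85% for eight users are reproduced:

```python
def proportion_found(per_user_rate, n_users):
    """Cumulative proportion of usability problems expected to be found
    by n users (classic problem-discovery model: 1 - (1 - p)^n)."""
    return 1 - (1 - per_user_rate) ** n_users

p = 0.214  # illustrative per-user discovery rate
print(round(proportion_found(p, 5), 2))  # ~0.70
print(round(proportion_found(p, 8), 2))  # ~0.85
```

The diminishing returns of the curve also explain why the yield of new problems drops after the eighth user.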
Usability study 1: evaluation of the BestPROMPT sheet
The first study was conducted with full-time family physicians and general internal medicine specialists in the greater Toronto area between May and September 2008. Physicians were randomly selected from the College of Physicians and Surgeons of Ontario database using a standardised faxed recruitment letter. To reach the target sample size of eight participants, purposive sampling from the St. Michael's Family Practice Unit in Toronto was required. Population exclusion criteria were general internists who saw patients in a subspecialty practice that excluded the possibility of seeing patients with osteoporosis.
Usability sessions were designed to evaluate the BestPROMPT sheet with relevant end users for appropriate content and format and to include tasks that would be representative of the real uses of the sheet. This involved showing physicians how the BestPROMPT sheet is generated so that potential barriers to using it at the point of care can be addressed in the context of participants' own workflow. To achieve this, the moderator simulated a patient at risk for osteoporosis and completed the RAQ on a tablet PC, which the physician participant observed. Using a structured worksheet, the BestPROMPT copy that was generated during this exercise was used in the second part of the usability session to elicit feedback on format ( e.g ., font, spacing), readability, and understandability using a five-point Likert scale. Open-ended questions were used to probe what participants found the most/least useful about the BestPROMPT and the barriers to using the sheet at the point of care; we also included a validated, 10-item System Usability Scale [ 35 ] to assess the subjective usability of the tool.
Usability study 2: evaluation of the RAQ
The second study was conducted with patients at risk for osteoporosis (men ≥65 years of age and postmenopausal women) between October and December 2008. Patients were selected purposively from the patient population of one family physician at the St. Michael's Family Practice Unit until at least five to eight patients (per input device) were tested or usability problems were eliminated. To maximize the number of eligible patients to be recruited, sessions were planned with patients immediately following their family physician visit.
Usability sessions were designed to evaluate the RAQ with its relevant end users ( i.e ., patients at risk for osteoporosis) for appropriate content, format, navigation, and input device (stylus, mouse and keyboard, or touch screen). The moderator used a standardised, pilot-tested script and worksheet for the sessions, which included tasks that would be the most representative of the real uses of the RAQ. A goal for these sessions was to ensure that the RAQ could be completed by participants with little to no assistance from the moderator ( i.e ., to simulate what might be expected in real practice). The usability sessions consisted of three parts: In part 1, the moderator documented observed events as participants completed each RAQ question. This was supplemented by an embedded program, which generated a timed log of each tap/click/touch to enable the calculation of the time it took to complete the RAQ and frequency of incidents and data entry errors. The incident log was developed based on the critical incident technique pioneered by Flanagan et al . [ 36 ], which can provide an objective assessment of events that make the difference between success and failure ( i.e ., the critical incident) [ 36 ]. We defined an incident in terms of its negative impact: a problem or error according to two levels of severity (critical or general). A critical incident was defined as a problem that completely halted the normal functioning of the RAQ ( e.g ., unable to initiate the questionnaire), whereas a general incident could occur within one session or across sessions but did not inhibit the completion of the RAQ ( e.g ., mis-tapping of a button, activating the 'Warning' window). Incident types were classified as navigational, interface, technical, input-device related, question to moderator, or other. General incidents occurring at least two times within one or across sessions were elevated to critical status. Immediate changes were made only for critical incidents.
In the second part of the usability session, observed critical incidents were used as memory probes to clarify the problem and to identify what influence the incident had on the interaction with the system. The last part of the session consisted of a series of semistructured, open-ended questions about the format, interface, features, and content of the RAQ and what participants liked/disliked about the questionnaire.
Usability study 3: evaluation of the COPE sheet
The third study was conducted with patients at risk for osteoporosis in December 2008. Participants were selected purposively from the same family physician's patient population as used in usability study 2 until at least five to eight patients were recruited or usability problems were eliminated. Usability sessions were designed to evaluate the COPE sheet with its relevant end users ( i.e ., patients at risk for osteoporosis) for appropriate content and format. The sessions consisted of two parts: In part 1, participants were asked to complete the RAQ so they could observe how the COPE sheet is generated. This process enabled testing whether the decision algorithm accurately translated the response inputs from the RAQ into the educational content of the COPE sheet. In part 2, the moderator conducted a semistructured interview with participants to explore their understanding of the COPE sheet, what they might do if they had any unanswered questions about their osteoporosis risks, and if they might consider discussing the sheet with their physician. The moderator also asked participants to rate the readability, understandability, and format of the COPE sheet using a verbal five-point Likert scale.
Data collection and analysis
All usability sessions were audiotaped and transcribed verbatim. Usability study 2 was also videotaped to observe users' physical behaviour as they interacted with the RAQ. Data collection and analysis consisted of a combination of qualitative analysis to assess the effect of technology on participant reasoning and decision-making, and quantitative analysis to assess data from the demographic questionnaire, System Usability Scale, critical incident log sheet, and Likert-type questions.
Qualitative data
Qualitative content analyses were guided by the constant comparative method of grounded theory methodology [ 31 ] and verbal protocol-analysis techniques [ 28 , 29 ]. Audio and video data were coded from transcripts using a process of open, axial, and selective coding [ 37 , 38 ] using NVivo 8 software (QSR International, Cambridge, MA, USA). Two researchers independently developed a coding scheme by identifying, classifying, and labelling the primary patterns in the data from the transcripts. During open coding, the constant comparative approach was used to group the codes into categories (where each category was considered a unit of analysis) and identify themes. Axial coding was then done to look at the interrelationship of categories [ 37 ]. The frequency and consistency with which participants indicated categories in the transcripts were used to provide credibility to these categories. We performed a calibration exercise between two reviewers for appropriately classifying themes into categories using Kappa statistics (in NVivo 8), and any disagreements (considered as <90% agreement) were resolved through consensus by a third reviewer. Videos from usability study 2 were viewed by one researcher and coded only for themes related to general and critical incidents. Data from the coded video were used to supplement themes identified by audio transcripts and to corroborate incident log records from direct observation of participants.
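The inter-rater calibration can be sketched with a hand-rolled Cohen's kappa over two coders' category assignments (the example codes below are hypothetical, not the study's data; NVivo computes the same statistic internally):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement expected from each rater's marginal category frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["nav", "nav", "iface", "tech", "nav", "iface"]
b = ["nav", "nav", "iface", "nav", "nav", "iface"]
print(round(cohens_kappa(a, b), 3))  # 0.7
```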
Quantitative data
Quantitative data were analysed using frequency analysis of demographic questions, task accuracy, and frequency and classes of problems encountered; descriptive statistics to calculate proportions and time to completion of tasks ( e.g ., mean time to RAQ completion with standard deviations [SDs]); Likert-scale questions (mean scores with SDs); independent sample t -tests for comparing groups for differences in mean time to RAQ completion (with standard errors of the means [SEs]); and a one-way between-groups analysis of variance (ANOVA) to compare the effects of the three input devices on mean time to RAQ completion. Time data were converted from minutes:seconds to total seconds for data entry into the statistical software, and means and SDs were reconverted to minutes:seconds for results tables; means and their 95% confidence intervals (CIs) for comparison groups were converted to minutes. All statistical analyses were carried out using SPSS (Macintosh version 17.0; IBM Corporation, Somers, NY, USA).
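The minutes:seconds conversion used for data entry and results tables can be sketched as:

```python
def to_seconds(mmss):
    """'minutes:seconds' string -> total seconds (for statistical software entry)."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def to_mmss(total_seconds):
    """Total seconds -> 'minutes:seconds' string (for results tables)."""
    minutes, seconds = divmod(round(total_seconds), 60)
    return f"{minutes}:{seconds:02d}"

print(to_seconds("6:35"))  # 395
print(to_mmss(395))        # 6:35
```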
Testing-session worksheets and components of the osteoporosis tool were modified and refined according to changes suggested by the quantitative and qualitative analyses and were retested if findings indicated that significant changes were needed. The analysis was thus cumulative and iterative, with new versions of the tool components building on preceding versions. This procedure was continued with the transcripts and data of subsequent usability sessions until themes were saturated.
Usability study 1 (BestPROMPT)
Table 1 shows the characteristics of the 11 physicians (9 family physicians and 2 general internists; 46% between 46 and 55 years of age) who participated in the usability study. The mean overall System Usability Scale score was 80.5 (SD 9.5), which indicates a subjective global view of the BestPROMPT as "relatively easy to use" [ 35 ].
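For reference, the standard System Usability Scale scoring rule (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5) can be sketched as follows; the example responses are illustrative, not the study's data:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses,
    using the standard odd/even scoring rule scaled to 0-100."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # 0-based even index = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([3] * 10))                         # 50.0 (neutral midpoint)
```

Against this 0-100 scale, the observed mean of 80.5 sits well above the neutral midpoint, consistent with the "relatively easy to use" interpretation.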
Usability worksheet results
Data analyses of the semistructured interviews identified three broad categories of themes:
1. Participants' perceptions of the barriers to using the BestPROMPT : 91% of physicians identified lack of time as the biggest barrier to using the sheet in family practice. Some were concerned that patients might not finish the RAQ in time for the visit or that the tool would be problematic in settings with no extra examination rooms. Other identified barriers to using the tool were related to workflow and administrative processes, such as increased clinic staff workload ( e.g ., explaining the tool to patients, alteration of workflow to make the BestPROMPT available at the point of care). About half of the participants were particularly concerned that the tool may disrupt the real reason for the visit and interrupt or delay the care of patients with more serious symptoms ( e.g ., chest pain). Suggestions to overcome the lack of clarity in the Recommendation Box section of the sheet were to highlight the Diagnosis section, to distinguish between the Diagnosis and Treatment Recommendation sections, and to indicate when a bone mineral density test should be repeated.
2. Participants' perceptions of the facilitators to using the BestPROMPT : Features that were perceived as facilitators were the inclusion of a 10-year absolute fracture risk graph to show patients which risk region (low, moderate, high) they fell into, the inclusion of a Justification section for the recommendations, and the provision of the most important information about risk, diagnosis, and treatment on one page. Participants liked the RAQ summary table because it provided an overview of their patients' responses and highlighted their major and minor risk factors. Some thought that this information could be used as a reminder about risk factors that may have been overlooked or forgotten, and to select which patient should have a bone mineral density test or which treatment should be started.
3. Participants' perceptions of using the BestPROMPT at the point of care : Most participants indicated that they would use the tool at the point of care but not necessarily during a standard scheduled visit. Suggestions were to use the sheet during a dedicated visit for osteoporosis or a physical examination, and physicians believed that these options would provide more time to discuss the information with patients. Suggestions to enhance point-of-care use were to ensure that the practice workflow is considered during tool implementation and to enable the wireless printing of the BestPROMPT so it can be available for review by physicians prior to the patient visit.
Usability study 2 (RAQ)
Nineteen patients (mean age 72 years; 53% women) from the practice of one family physician participated in the usability study (Table 2 ). Sixty-eight percent of participants indicated previous experience with using a computer, but less than half (47%) reported ever having used the Internet. The first nine participants (47%) tested the RAQ using a stylus pen as the pilot input device. Subsequent patients were alternated between the mouse/keyboard or touch screen. After two alternations of these devices, participants found the touch screen considerably easier to use, so the mouse/keyboard testing was discontinued.
Usability worksheet results
Time to RAQ completion
The mean time to RAQ completion was 6:35 (minutes:seconds) (SD 5:15) (Table 2 ). There was no difference in time to RAQ completion between participants with previous computer or Internet experience and those without (mean difference range 0:22 to 0:47 seconds). Although the mean time to RAQ completion decreased by almost four minutes from initial testing with the stylus pen to the touch screen (Figure 1 ), a one-way ANOVA showed no significant difference between the three input devices in mean time to RAQ completion (Table 3 ).
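The one-way between-groups ANOVA used here can be sketched with a hand-rolled F statistic (the completion times below are illustrative only, not the study's data):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way between-groups ANOVA."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative completion times in seconds for three input devices.
stylus = [540, 480, 600, 660]
mouse = [420, 390, 450]
touch = [300, 270, 330, 290]
print(round(one_way_anova_F(stylus, mouse, touch), 2))  # 27.48
```

The F statistic would then be compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain the p-value, as SPSS does.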
Critical incident analysis
Of 81 incidents observed among 19 participants, 42 were critical and 36 were general incidents (6 general incidents were elevated to critical status). Navigational problems ( i.e ., moving from one RAQ page to the next without assistance) accounted for 60% of the total critical incidents, and 20% of problems were related to input device ( i.e ., mis-tapping, clicking or touching on the tablet PC screen). Most critical incidents (80%) occurred with the first nine participants testing the stylus pen (range zero to eight incidents), but decreased from five incidents (participant 10) to one incident (participants 11 to 13), to no critical incidents observed with the last six participants using the touch screen (Figure 1 ). Data analysis identified three broad categories of themes from the critical incident log and the semistructured interview of patients:
1. Participants' perceptions of the facilitators to using the RAQ : Fifteen of 19 participants (79%) thought that the questions were clear and simple and easy to read, understand, and use overall. Participants liked the audio feedback and picture aids because these clarified and helped to understand the questions. Of those who tested the touch screen (N = 8), most participants (88%) liked it because it was familiar, even if they had never used a computer: 'It was made easy for me, it was completely natural because it's similar to banking machines, there you've got to touch the screens too, so this reminded me of that'.
2. Participants' perceptions of the barriers to using the RAQ : Several format features impacted use, including the 'Audio' button on the Start page, which many found confusing as it interfered with the successful initiation of the questionnaire. Navigational problems were also identified, including the tendency to unintentionally bypass the second part of two-part questions such as the Periods and Bone Mineral Density pages.
3. Participants' suggestions for improving the RAQ: Suggestions for additional clarity were provided, including creating separate entry fields to distinguish between surname and first name, providing definitions for conditions (e.g., rheumatoid arthritis), and providing more direction for participants to move from one page to the next.
Usability study 3
Eight participants (mean age 76 years; 50% men) from the practice of one family physician participated in this usability study. Of these, seven participants (88%) were recruited from the RAQ usability study sample. The mean time to RAQ completion was 4:31 (minutes:seconds) (SD 1:25), and men completed the RAQ almost two minutes faster than did women (Table 4 ).
Usability session worksheet
Data analysis from the semistructured interview identified two broad categories of themes:
1. Participants' perceptions of what they liked about the COPE sheet overall: Most participants (88%) were able to understand and describe specific sections. When asked what they would do with the COPE sheet, all eight participants indicated that they would discuss the information with their physician.
2. Participants' suggestions for improving the COPE sheet: Several content and formatting suggestions were made, including using simpler language (e.g., to modify 'Your responses to the questionnaire' to 'This is your answer') and improving the layout so that the table in the COPE sheet extended all the way to the bottom. The COPE sheet was iteratively changed to reflect these suggestions after the first four participants and after the last participant.

Discussion
The three components of the osteoporosis tool were evaluated in individual usability studies to determine how well the prototype met end users' needs, functional goals (features, format, content, navigation), and outcome impact goals (e.g., the use of the tool at the point of care). Of the three components of the osteoporosis tool that were tested, the RAQ required the most cycles of iteration to meet the needs of patients at risk for osteoporosis, which may be attributed to several factors. First, the format of the RAQ is complex because it is computer-based and interactive, while the other components are paper-based. Since the RAQ is computer-based, it can also support a system for adapting to evolving evidence about osteoporosis disease management. For example, the decision algorithm of the RAQ was originally programmed according to the 2002 osteoporosis guidelines [ 9 ] but can be easily updated to reflect changing guidelines. Second, the majority of people who would be targeted to use the RAQ are older (age ≥65 years). This is a population that tends to have less experience with computerised systems and may have motor or cognitive impairments or visual deficiencies that may require more attention to interface design (e.g., font and tab size and colour), content (e.g., wording and amount of information), and ease of navigability.
The think-aloud approach enabled the observation of end users as they carried out relevant tasks while interacting with individual tool components. This process was very helpful for identifying specific problems and for iteratively modifying the system accordingly. The transformations of the tool from pre- to post-usability prototype are shown in Figure 2 (selected screenshots of the RAQ), Figure 3 (screenshot of the BestPROMPT sheet), and Figure 4 (screenshot of the COPE sheet), and a demonstration of the tool can be accessed at http://knowledgetranslation.ca/osteo_final/.
Several challenges to point-of-care use of CDSSs in family practice emerged from the findings of the usability studies. It is not surprising that physicians indicated lack of time or resources to use the osteoporosis tool as a major barrier to point-of-care use, as this has been identified in other studies investigating CDSSs [ 20 , 21 ]. However, an unexpected barrier also emerged--the osteoporosis tool might unintentionally disrupt the real reason for the visit. Although evidence indicates that providing CDSSs at the point of care may improve clinical practice [ 21 ], there are challenges to designing such tools for family practice settings because the physician-patient encounter can be disrupted. Although we achieved the goal of designing a quick and easy tool (i.e., the last eight patients completed the RAQ in a mean of 4:31 minutes and the last six initiated the questionnaire without assistance), physicians suggested that the provision of osteoporosis information at the point of care could interfere with their usual practice in other ways. First, the practice visit agenda may be disrupted because the experience of working through the RAQ may prompt patients to ask questions about osteoporosis during the visit. Second, the introduction of either the BestPROMPT or COPE sheets can facilitate the transmission of osteoporosis knowledge between provider and patient, but this has to be weighed carefully against the cost of interrupting or halting the discussion of more urgent aspects of the patient's intended visit agenda (e.g., chest pain) or health status (e.g., diabetes). This finding should be an important consideration when designing point-of-care tools and highlights the need for a flexible and pragmatic approach when planning how such tools should be implemented and used in family practice.
Interventions that are adapted to their local settings and tailored to the specific needs of physicians should be considered so that systems better fit the real practice workflow [ 24 - 26 , 39 ]. It might also be useful to provide physicians with a choice to either act on or defer the use of point-of-care information, depending on the context of the patient visit. Physicians are more likely to adopt CDSSs if they have some control over the way they are used, without giving up complete autonomy over their clinical decision-making [ 26 , 40 ]. In the case of the osteoporosis tool, this would enable physicians to use information about osteoporosis at their discretion without having to compromise the well-being of their patients or the care agenda.
Limitations
There are a number of limitations to the usability studies. First, although we exceeded our target sample sizes, it is possible that the inclusion of more participants may have uncovered more information or problems or have shown significant differences between comparison groups for time to RAQ completion. Second, we recruited all 19 patients from the patient population of one family physician, and more than half of the physicians were recruited from the same inner-city center family practice unit, which may not be representative of other family physicians and their patients or settings. However, given the demographics of the participants, they appear similar to other patients with osteoporosis. Third, we excluded the System Usability Scale questionnaire from patient usability testing, so it was not possible to calculate an overall usability score for either the RAQ or COPE components of the tool. We wanted to optimise the balance between getting feedback about the usability of these tool components and not exhausting the mostly elderly participants. Additionally, the recruitment process restricted the opportunity to extend sessions to include the System Usability Scale since most patients were recruited immediately after their family physician appointment, when many patients were too tired, weak, or ill to participate in a study lasting more than 30 minutes. Lastly, control for selection bias was difficult because patients who tested the RAQ and COPE sheet were selected from the same practice setting (i.e., the St. Michael's Hospital Family Practice Unit). However, their inclusion was also useful because they were able to see two components of the tool.

Conclusions
Results from the three usability studies were used to make informed modifications and refinements to the osteoporosis tool prototype. Major challenges to point-of-care use of the tool were physicians' lack of time and that the tool might unintentionally disrupt the real reason for the visit. These challenges indicate that implementation of such tools in family practice requires a flexible and pragmatic approach. The findings also confirm the importance of usability testing of interactive clinical decision support applications and information systems on all end users to reduce problems and errors, particularly if the future goal is to implement such systems in a clinical practice setting. The findings of the usability studies also highlight the need to include usability evaluation as an important step in the development process of knowledge translation interventions.

Background
Osteoporosis affects over 200 million people worldwide at a high cost to healthcare systems. Although guidelines are available, patients are not receiving appropriate diagnostic testing or treatment. Findings from a systematic review of osteoporosis interventions and a series of focus groups were used to develop a functional multifaceted tool that can support clinical decision-making in osteoporosis disease management at the point of care. The objective of our study was to assess how well the prototype met functional goals and usability needs.
Methods
We conducted a usability study for each component of the tool--the Best Practice Recommendation Prompt (BestPROMPT), the Risk Assessment Questionnaire (RAQ), and the Customised Osteoporosis Education (COPE) sheet--using the framework described by Kushniruk and Patel. All studies consisted of one-on-one sessions with a moderator using a standardised worksheet. Sessions were audio- and video-taped and transcribed verbatim. Data analysis consisted of a combination of qualitative and quantitative analyses.
Results
In study 1, physicians liked that the BestPROMPT can provide customised recommendations based on risk factors identified from the RAQ. Barriers included lack of time to use the tool, the need to alter clinic workflow to enable point-of-care use, and that the tool may disrupt the real reason for the visit. In study 2, patients completed the RAQ in a mean of 6 minutes, 35 seconds. Of the 42 critical incidents, 60% were navigational and most occurred when the first nine participants were using the stylus pen; no critical incidents were observed with the last six participants who used the touch screen. Patients thought that the RAQ questions were easy to read and understand, but they found it difficult to initiate the questionnaire. Suggestions for improvement included improving aspects of the interface and navigation. The results of study 3 showed that most patients were able to understand and describe sections of the COPE sheet, and all considered discussing the information with their physicians. Suggestions for improvement included simplifying the language and improving the layout.
Conclusions
Findings from the three studies informed changes to the tool and confirmed the importance of usability testing on all end users to reduce errors, and as an important step in the development process of knowledge translation interventions.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors participated in the design of the study. MK and CM conducted the usability testing sessions. MK, CM, and SES performed the analysis. MK drafted the manuscript, and all authors read and approved the final manuscript.

Implement Sci. 2010 Dec 10; 5:96 (CC BY)
PMC3016470 (PMID: 21234139)

INTRODUCTION
The treatment of acute appendicitis remained essentially unchanged since its first description by Charles McBurney in 1889 before the New York Surgical Society.[ 1 ] Appendicectomy through McBurney's incision remained the procedure of choice for nearly a century until 1983, when Kurt Semm offered an alternative, "laparoscopic appendicectomy".[ 2 ] But as McBurney's operation is well tolerated with low co-morbidity, the benefits of laparoscopic appendicectomy have been difficult to establish. The putative advantages of the laparoscopic approach are quicker and less painful recovery, fewer postoperative complications and better cosmesis.[ 2 ] It also allows better assessment of other intra-abdominal pathologies. But because the validity of these points remains unconvincing, and also because of the shortage of laparoscopic sets in some hospitals, laparoscopic appendicectomy is not practiced widely. Twenty years later, laparoscopic appendicectomy is all set to become the therapeutic modality of choice.[ 1 ] This prospective study highlights the advantages of this procedure and proves it to be superior to open appendicectomy.

MATERIAL AND METHODS
This prospective study was carried out over a twelve-month period and included patients with the clinical diagnosis of acute appendicitis. The patients were divided into two groups. The first group included patients undergoing laparoscopic appendicectomy (LA) and the second group included patients undergoing open appendicectomy (OA). The groups were formed based on the operative procedure the patients preferred. The patients were given detailed explanations of both operative procedures and were asked to choose which procedure they would like to undergo. For patients below fifteen years of age, the patients' relatives were given the explanation and their choice was considered. In the true sense this is not a randomized controlled trial, because patients voluntarily opting for a method were chosen.
Those patients who chose to undergo laparoscopic appendicectomy but had contraindications to the creation of a carbon dioxide pneumoperitoneum (i.e., ASA grade IV or physiologically compromised patients) were excluded from the study.
One hundred patients consented to undergo laparoscopic appendicectomy and one hundred seventy-nine patients preferred open appendicectomy. A thorough history regarding onset of pain, radiation, anorexia, vomiting and fever was noted. In females of child-bearing age (14 to 44 years) a detailed menstrual history was taken to exclude pelvic inflammatory disease. In all male patients with right iliac fossa pain, a history of burning sensation during micturition and/or haematuria was noted to exclude the diagnosis of ureteric colic. A general survey was performed with special emphasis on recording of pulse, temperature and blood pressure. Abdominal examination to note McBurney's tenderness, psoas test, obturator test, cough sign, pain on straight leg raising, localized rigidity of the right iliac fossa and rebound tenderness was performed. Per rectal examination was mandatory in all the patients. Other systems were examined to note signs of sepsis. After a patient was provisionally diagnosed as a case of appendicitis, further examination to confirm the diagnosis included a total count to note leucocytosis, biochemical examination to note blood sugar, urea and creatinine, a straight X-ray of the abdomen and ultrasonography. A final decision regarding operative intervention was made for all cases of appendicitis. The patients were explained in detail about the operative modalities (laparoscopic and open appendicectomy). They were then given the choice of the operative procedure they would like to undergo.
Open appendicectomy was performed through a McBurney's muscle-splitting incision. The base of the appendix was left uninvaginated.
For laparoscopic appendicectomy, two-hand laparoscopic appendicectomy using three ports, umbilical (10 mm), suprapubic (5 mm) and right iliac fossa (5 mm) was performed. The appendicular artery was dissected and divided between haemostatic clips. The appendix was secured at the base with three loop ligatures, divided between the two distal ligatures, and removed through the 10 mm umbilical port. The base of the appendix was not invaginated.
Laparoscopy was converted to open appendicectomy if technical difficulties, uncertain anatomy or bleeding was encountered. Peroperatively, a note was made of the macroscopic nature of the appendix. The resected appendix was routinely sent for histopathological examination. In patients with perforation and peritonitis, a drain (No. 14 F Ryle's tube) was inserted through the right iliac fossa port.
Postoperatively, intravenous fluids (IVF) were continued for four hours in patients with uncomplicated appendicitis and a normal diet was instituted thereafter. For complicated cases (patients with perforation and peritonitis), IVF was continued till normal bowel function returned (return of bowel sounds and passage of flatus). Antibiotic prophylaxis included a single dose of a third-generation cephalosporin for uncomplicated cases. For complicated cases, a third-generation cephalosporin along with metronidazole was given preoperatively at induction and another dose after twelve hours.
Analgesics in the form of Diclofenac sodium injections were given for twenty-four hours. Further analgesics were given based on patients’ perception of pain. Drain was removed when drainage was less than 30 ml in twenty-four hours.
As in comparable series, the operating time, length of hospital stay, analgesic requirements, return to full activity and operative cost were recorded. Patients in both study groups were discharged as soon as possible, i.e. when fully mobilized and when assistance from attendants to secure personal hygiene was no longer required. They were encouraged to resume normal activity and work as soon as they felt fit. Normal activity was defined as return to usual activity of domestic and social life at the discretion of the patient.
The patients were followed up monthly for three months, three-monthly for six months and yearly for two years. Any patients having complications were admitted through emergency and investigated with haematological examination and ultrasonography of the abdomen.
Differences between laparoscopic and open procedures with respect to predictor variables were tested using the Z-test after determining the Mann-Whitney statistic (T) for each variable. The hypothesis of interest is H0: Mx ≥ My against H1: Mx < My, where Mx is the median of the population of LA patients and My is the median of the population of OA patients.

The Mann-Whitney statistic is T = S − n1(n1 + 1)/2, and the test criterion is

Z = (T − n1n2/2) / √(n1n2(n1 + n2 + 1)/12)
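As a sanity check, the T and Z formulas above can be computed directly. The following minimal Python sketch is illustrative only (the paper does not describe its software; the function name and the average-rank handling of ties are our assumptions):

```python
import math

def mann_whitney_z(la, oa):
    """Compute the Mann-Whitney statistic T and the normal deviate Z
    for H0: Mx >= My, as described in the text.

    la, oa: lists of observations for the LA (n1) and OA (n2) groups.
    Tied observations receive the average of their ranks.
    """
    n1, n2 = len(la), len(oa)
    combined = sorted(la + oa)
    # Map each distinct value to the average rank of its tied block.
    rank = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    S = sum(rank[x] for x in la)             # rank sum of the LA data
    T = S - n1 * (n1 + 1) / 2                # Mann-Whitney statistic
    Z = (T - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return T, Z
```

For instance, `mann_whitney_z([3, 3, 4], [5, 5, 6, 7])` gives T = 0 and Z ≈ −2.12; a strongly negative Z supports H1 (the LA median is below the OA median).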
where S = rank sum of the LA patients' data, n1 = total sample of LA patients and n2 = total sample of OA patients.

RESULTS
Two hundred seventy nine patients were included in the study of which one hundred patients underwent laparoscopic appendicectomy (35.8%) while one hundred seventy- nine patients underwent open appendicectomy (64.1%).
Eight patients were excluded from the study because of contraindications to the creation of a carbon dioxide pneumoperitoneum. Of these, 4 patients were ASA grade IV, 2 patients suffered from COPD with decreased TLC, VC and FEV 1 , and 2 patients had persistent hypertension (systolic > 200 mm Hg and diastolic > 110 mm Hg) even on antihypertensive treatment.
Patients were on average 24.7 years old and ranged from 6 years to 77 years. Patients who underwent LA were older (LA: 25.1 years, OA: 24.3 years) and more likely male (LA: 61% male, OA: 58.1% male). A large percentage (30.1%) of the lower and middle-income group preferred laparoscopic appendicectomy contrary to the popular belief that laparoscopic surgery was a delicacy for the higher income group (5.7%). ( Table 1 )
Laparoscopic findings of the hundred patients undergoing laparoscopic appendicectomy are depicted in Table 2. Of the hundred patients subjected to laparoscopy, the procedure was successfully completed in 94, while six patients (6%) had the procedure converted to open surgery. Reasons for conversion were difficulty in visualization and dissection of the appendix in 3 patients; peritonitis, abscess and perforation accounted for the other 3 patients.
In the patients who had undergone laparoscopic appendicectomy, an inflamed appendix was detected in 89 patients, whereas in 5 patients the appendix appeared normal laparoscopically despite severe symptoms. In all these patients the adnexa were normal.
The appendicular lumps found in 5 patients in the laparoscopic group were early lumps, not apparent on clinical examination either preoperatively or under anesthesia. In all these patients the appendix was gently dissected with the tip of the sucker nozzle and by hydro-dissection. Caution was also taken during ligation of the base, which was friable and tended to cut through.
Fifteen patients in the open group had an appendicular lump. Twelve of these were apparent on clinical examination and the criteria for operation were: (a) lump less than 4 cms, (b) mobile lump confined to the right iliac fossa, and (c) time period less than 48 hrs. For the remaining 3 patients the lump was detected peroperatively and the same principles as for the laparoscopic approach were followed.
Length of hospital stay ranged from 2 days to 9 days. The length of stay was significantly shorter after LA (3 days after LA, 5 days after OA, P < 0.0001).
The rate of infections and overall complications (LA: 15%, OA: 31.8%, P < 0.0001) were significantly lower in patients undergoing LA ( Table 3 ).
There was an insignificantly shorter operating time {OA: 25 min (median), LA: 30 min (median), 0.05 > P > 0.01} in patients undergoing open appendicectomy compared to laparoscopic appendicectomy.
Histopathology revealed normal appendix in 5 patients (5%) in the LA group and 27 patients (15.08%) in the OA group. For the other patients in both the groups histopathology was suggestive of acute appendicitis.
After comparing other covariates (Table 4), LA remained associated with fewer days to return to a general diet, shorter duration of parenteral analgesia, fewer milligrams of oral analgesia, a shorter postoperative hospital stay and earlier return to full activity.
However, the billed charges and direct costs appeared to be marginally higher for laparoscopic appendicectomy, but the total cost was less given the shorter hospital stay and abbreviated recuperative period.

DISCUSSION
The results of the present study are in keeping with several previous studies where laparoscopic appendicectomy has been shown to be both feasible and safe in comparison with open appendicectomy.[ 1 – 12 ] In addition to improved diagnostic accuracy, laparoscopic appendicectomy confers advantages in terms of fewer wound infections,[ 3 ] less pain, faster recovery and earlier return to work.[ 4 ] However, laparoscopic appendicectomy is time-consuming.[ 5 6 ] It is also argued that the advantages of laparoscopic appendicectomy are marginal compared to open appendicectomy performed by an experienced surgeon through a short, cosmetically acceptable incision with minimal complication and shorter hospital stay.[ 2 – 7 ]
The question of whether laparoscopic appendicectomy decreases the length of hospitalization has been a matter of great debate over the past decade.[ 5 – 9 ] The literature provides contradictory results. Most studies report a median hospital stay of 2-5 days irrespective of laparoscopic or open procedure. Although some recent retrospective cohort studies or chart reviews found laparoscopic appendicectomy associated with significantly shorter hospital stay,[ 10 – 12 ] other retrospective investigations reported nonsignificant differences.[ 13 – 15 ] Similarly, some randomized controlled trials associated laparoscopic appendicectomy with decreased hospital stay.[ 7 9 10 – 15 ] However, others report no significant difference between laparoscopic appendicectomy and open appendicectomy.[ 14 15 ] Even meta-analyses report controversial findings. Sauerland and associates summarized the results of 28 randomized controlled trials and almost 3000 patients and reported a significant decrease in length of hospital stay in patients undergoing LA.[ 14 ] Similar results were found by Golub and colleagues,[ 14 ] whereas another meta-analysis failed to show a statistically significant difference in length of hospital stay between LA and OA.[ 12 – 14 ] The heterogeneity of published results regarding length of hospital stay may be caused by a variety of factors: the current literature describes that the difference may be affected by hospital factors[ 14 15 ] or social habits,[ 15 ] rather than reflecting differences resulting from the operative technique itself. Moreover, further discrepancies may arise from diverse health care policies in different countries. The present study revealed a significantly shorter hospital stay for patients undergoing laparoscopic appendicectomy.
Significant variation in operating time was noted in various controlled studies.[ 8 10 14 ] Some studies noted a shorter operating time for patients undergoing open appendicectomy while others revealed no difference. In the present study more operating time (0.05 > P > 0.01) was noted for laparoscopic appendicectomy. This was because of the learning curve during the earlier phase of our study. Level of surgical experience, patient selection and an increased conversion rate in the earlier stages accounted for the increased operative time. The later phase of our study revealed a more or less similar operating time for open and laparoscopic appendicectomy.
In accordance with other studies there were significantly fewer wound infections in the laparoscopy group.[ 4 – 10 ] A reduction in wound infection can be achieved by extraction of the specimen through a port or with the use of an endobag, or leaving a non-inflamed appendix in place. This has been confirmed in the present study.
At a glance the median operative cost of laparoscopic appendicectomy seems marginally greater (LA: Rs. 925/- vs. OA: Rs. 923/-) than that of open appendicectomy. But considering the total cost of the disease, when cost of accommodation, operation and time off work, daily cost of the inpatient unit, hourly cost of the operating room and recovery ward and the patient's consumption are taken into account, laparoscopic appendicectomy provides clinical comfort and economic benefit in all patients. With the government providing most of the equipment and infrastructural facilities, thus curtailing the one-time operative cost, the economic significance and implications definitely favor laparoscopic appendicectomy. Literature detailing cost analysis is conflicting and varies according to the standpoint of the disease, the patient, the surgeon, the treatment center, industry and society.[ 8 – 15 ] The cost surplus of the laparoscopic procedure and recovery after surgery were evaluated in these studies, to determine the costs and effects of laparoscopic appendicectomy compared with those of open appendicectomy. A shorter hospital stay, resulting in a marginal difference in itemized total costs between the two procedures, offset the increased operative expenses. The studies concluded that laparoscopic appendicectomy was slightly more expensive, but it allowed earlier return to normal daily activities than open appendicectomy.[ 1 – 15 ]
Because of the competition between laparoscopic and open appendicectomy, open appendicectomy has improved greatly. More and more questions are being raised as to the benefit of laparoscopic appendicectomy. A few recent randomized controlled trials have even gone to the extent of questioning the benefits and efficacy of laparoscopic appendicectomy.[ 14 15 ] Some authors have concluded once and for all that laparoscopic appendicectomy is out.[ 15 ]
But going by our study, we definitely find an overall advantage for laparoscopic appendicectomy. Since studies on laparoscopic appendicectomies from our country are few, questions remain: Can it be improved any more? Is there a place and need for further randomized controlled trials?

Background:
Laparoscopic appendicectomy, though widely practiced, has not gained universal approval. Laparoscopic appendicectomy in India is relatively new and the literature is scant. This study aimed to compare laparoscopic with open appendicectomy and ascertain the therapeutic benefit, if any, in the overall management of acute appendicitis.
Materials and Methods:
The study group consisted of two hundred and seventy-nine patients suffering from acute appendicitis. One hundred patients underwent laparoscopic appendicectomy (LA) and one hundred seventy-nine patients underwent open appendicectomy (OA). Comparison was based on length of hospital stay, operating time, postoperative morbidity, duration of convalescence and operative cost, in terms of their medians. The Mann-Whitney statistics (T) were calculated and, because of the large samples, the normal deviate test (Z) was used.
Results:
Of the hundred patients, six (6%) had the procedure converted to open surgery. The rate of infections and overall complications (LA: 15%, OA: 31.8%, P < 0.001) was significantly lower in patients undergoing LA. The median length of stay was significantly shorter after LA than after OA (3 days vs. 5 days, P < 0.0001). The operating time was shorter {OA: 25 min (median), LA: 28 min (median), 0.01 < P < 0.05} in patients undergoing open appendicectomy compared to laparoscopic appendicectomy.
Conclusion:
Hospital stay for LA is significantly shorter and the one-time operative charges appear to be almost the same. LA is also associated with increased clinical comfort in terms of fewer wound infections, faster recovery, earlier return to work and improved cosmesis.

The help of Dr B. B. Mukhopadhyay, MSc, PhD, Sr. Lecturer, Dept. of Community Medicine, BMC&H, Burdwan, in carrying out the database analysis is acknowledged.

J Minim Access Surg. 2005 Mar; 1(1):15-20 (CC BY)
PMC3016471 (PMID: 21234140)

INTRODUCTION
Gastro-Esophageal Reflux Disease (GERD) is one of the most common chronic disorders of the gastrointestinal tract. An estimated thirty per cent of the general population experience heartburn or acid regurgitation at least once a month.[ 1 ] Approximately 10-20% of these patients present with complications of GERD which include esophageal ulceration, stricture, and intestinal metaplasia (Barrett’s oesophagus).[ 2 3 ]
Barrett’s oesophagus is recognized as a strong risk factor for esophageal adenocarcinoma,[ 4 ] a malignancy that has nearly quadrupled in frequency during the past 2-3 decades.[ 4 5 ] The true prevalence of Barrett’s oesophagus is unknown; any estimate depends on the definition of Barrett’s that is used. Its prevalence in normal population is estimated to be 4 per 1000, and up to 20% in patients with chronic GERD.[ 2 6 ] It has been suggested that for every known patient with Barrett’s, there might be 20 more unrecognized ones in the general population.[ 2 7 ]
While modern medical and surgical anti-reflux therapies are highly effective in controlling GERD symptoms, there is scant evidence that any treatment prevents progression of Barrett's to dysplasia or adenocarcinoma. In fact, a recent study suggested that medical acid suppression may be associated with increased risk of cancer in patients with severe GERD.[ 8 ] This has been attributed to continued bile reflux, a major factor in development of esophageal intestinal metaplasia. Anti-reflux surgery, which controls both acid and bile reflux, may on the other hand prevent progression and cause regression of Barrett's mucosa.[ 9 10 ] DeMeester and colleagues have shown complete loss of intestinal metaplasia in 73% of the patients when the intestinal metaplasia was limited to the cardia compared with 4.4% of the patients with visible Barrett's oesophagus after laparoscopic and open anti-reflux surgery.[ 11 ] Recently, Bowers and his colleagues reported complete loss of intestinal metaplasia in 47% of the patients with Barrett's oesophagus after laparoscopic anti-reflux surgery.[ 12 ] Parrilla et al reported that open Nissen fundoplication caused low-grade dysplasia to disappear in all patients with low-grade dysplasia in Barrett's oesophagus.[ 13 ]
The aim of this study was to review the experience at our institution with non-dysplastic Barrett’s oesophagus in patients treated with laparoscopic Nissen fundoplication, and to examine the influence of the length of Barrett’s mucosa on the changes observed after surgery. | MATERIAL AND METHODS
Study population
Twenty-one consecutive patients with documented GERD and non-dysplastic Barrett’s oesophagus who underwent Laparoscopic Nissen Fundoplication (LNF) between 1993 and 2000 were reviewed. A diagnosis of GERD was based on endoscopy, 24hr pH study and manometry findings. Barrett’s oesophagus was diagnosed based on a histopathologic finding of specialized intestinal metaplasia (IM) in endoscopic biopsies. The extent of Barrett’s oesophagus was defined as the distance from the gastro-esophageal junction (the point where the tubular oesophagus joined the most proximal gastric rugal folds) to the location of the highest point of the squamocolumnar junction (the site where the pale white esophageal mucosa met the pink columnar mucosa).
Symptom score evaluation
Symptom score evaluation was carried out by an independent observer on all patients preoperatively, and 6 months postoperatively using a validated symptom score.[ 14 ] Six specific symptoms of GERD, namely heartburn, regurgitation, epigastric or chest pain, epigastric fullness, dysphagia and cough (postprandial or supine), were scored as a product of severity (0-3) and frequency (0-4).
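As an illustration only, the score described above can be sketched in code. The six severity × frequency products are assumed here to be summed into a single total (maximum 6 × 3 × 4 = 72); the exact aggregation is defined by the cited validated instrument,[ 14 ] so this is a hedged sketch, not the instrument itself.

```python
# Hypothetical sketch of the GERD symptom score: each of six symptoms
# is rated as severity (0-3) x frequency (0-4); the products are
# assumed to be summed into a total (max 6 * 3 * 4 = 72).

SYMPTOMS = ["heartburn", "regurgitation", "epigastric/chest pain",
            "epigastric fullness", "dysphagia", "cough"]

def symptom_score(ratings):
    """ratings: dict mapping symptom -> (severity 0-3, frequency 0-4)."""
    total = 0
    for name in SYMPTOMS:
        severity, frequency = ratings[name]
        if not (0 <= severity <= 3 and 0 <= frequency <= 4):
            raise ValueError(f"rating out of range for {name}")
        total += severity * frequency
    return total

# made-up ratings for one hypothetical patient:
example = {
    "heartburn": (3, 4), "regurgitation": (2, 3),
    "epigastric/chest pain": (1, 2), "epigastric fullness": (1, 1),
    "dysphagia": (0, 0), "cough": (2, 2),
}
print(symptom_score(example))  # -> 25
```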
Quality of life questionnaire
We assessed quality of life using the SF-36 questionnaire. This is a general quality of life instrument with 4 domains of physical health and 4 of mental health. These 8 domains can be summarized into a Physical Health Component score (PCS) and Mental Health Component score (MCS). We used the published scoring algorithms and validated them with test data sets from the publishers of the questionnaire.[ 15 ]
Esophageal manometry and 24-hr pH study
Esophageal manometry was performed with a seven-lumen sleeve-sidehole catheter (Dent Sleeve, Adelaide, Australia). After topical anaesthesia of the nostril, the catheter was introduced nasally and positioned with the sleeve straddling the LES; the patient was then allowed to accommodate to the tube for 20 min. The basal LES pressure was measured by the sleeve sensor, in relation to the gastric pressure. Pressures were measured in the supine position. Drugs that might affect esophageal motility were discontinued for 24h before the study. Manometry was performed after 4-6 h fast and, in smokers, after abstinence from smoking for at least 6 h.
Ambulatory Digitrapper (Synectics, Stockholm, Sweden) was used to perform 24-hr pH testing. The pH probe was positioned 5 cm above the LES, as determined by manometry. Gastroesophageal reflux was defined as a drop in esophageal pH below 4, and the percentage reflux in 24 hours was calculated for each patient. All patients were asked to stop antireflux medication for 5 days before 24-hr pH testing.
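The percentage acid exposure derived from the 24-hr recording is the fraction of recording time with esophageal pH below 4. A minimal sketch of that calculation, assuming evenly spaced pH samples (the sampling details are an assumption, not taken from the Digitrapper specification):

```python
def percent_acid_exposure(ph_samples):
    """Percentage of evenly spaced pH readings below the reflux
    threshold of pH 4, as used for the 24-hr acid exposure score."""
    if not ph_samples:
        raise ValueError("no samples")
    below = sum(1 for p in ph_samples if p < 4.0)
    return 100.0 * below / len(ph_samples)

# e.g. a recording where a quarter of the samples show reflux:
readings = [3.5] * 6 + [5.8] * 18
print(percent_acid_exposure(readings))  # -> 25.0
```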
Upper endoscopy and histology
All patients underwent preoperative upper GI endoscopy by a single surgeon (MA) with 4 quadrant biopsies every 2 cm starting at the top of the rugal folds. At endoscopy, the gastroesophageal junction was defined as the point at which the tubular oesophagus ended and the gastric rugal folds began. A columnar-lined oesophagus was identified when the squamocolumnar junction (SCJ) or any part of its circumference extended above the gastroesophageal junction. Patients with an irregular squamocolumnar junction had biopsy samples obtained from glandular mucosal tongues extending into the oesophagus.
Assessment of progression or regression of Barrett’s mucosa
Standard criteria for endoscopic and histologic assessment were applied. Progression was defined as an increase in the length of the endoscopically visible segment of Barrett’s mucosa, and/or the development of dysplastic changes. Regression was defined as absence of the documented intestinal metaplasia on repeated biopsies with decrease in the length of the endoscopically visible segment.
Surgery
All patients underwent LNF. The technique has been previously described.[ 16 ] The procedure was performed through five cannulae (two 10 mm and three 5 mm). If a hiatal hernia was present, its contents were reduced and the sac was mobilized and excised. The oesophagus was mobilized, partially through the hiatus, to achieve an intra-abdominal esophageal length of 4-5 cm. No esophageal lengthening procedure was needed. The vagi were identified and protected. Minimal dissection was used behind the oesophagus to create a window large enough to accommodate the fundic pull-through. Short gastric vessels were divided only as necessary to allow a loose wrap without placing tension on the spleen. The superior pole of the fundus was pulled behind the oesophagus through the window created, and three interrupted 2-0 silk sutures were tied intracorporeally to fashion a standard Nissen fundoplication over a 42-52 Fr bougie. The sutures were positioned 1-1.25 cm apart, creating a 2.0-3.0 cm wrap, with the most superior suture incorporating a bite of the oesophagus.
Post-operative care
A water-soluble contrast study was performed on post-operative day 1 to check wrap integrity, rule out leakage, and assess esophageal clearance. Patients were then started on fluid diet and, if tolerating, discharged home on the second postoperative day. Instructions were given to slowly change their intake from pureed to normal food over the ensuing 3 weeks. They were allowed to resume full activity on discharge.
Follow-up investigations
All patients were invited to undergo 24-hr pH testing, esophageal manometry, GERD symptom scoring and the SF-36 quality of life questionnaire 6 months after the operation. Upper GI endoscopy with biopsy was performed by the same senior author at 6 months, 2 years and 5 years after surgery.
Statistical analysis
Statistical analysis was performed using Statview 4.5 (SAS Institute Inc, Cary, NC, USA). All values are expressed as mean (± standard error of the mean). Paired values were compared using Student’s t-test and statistical significance was set at the 0.05 level. | RESULTS
Indication for surgery
The 21 subjects included 13 males and 8 females with mean age of 46.8±3.18 years.
The mean duration of GERD symptoms prior to surgery was 8.4 ± 1.54 years. The mean esophageal acid exposure was 26.5 ± 3.91%. Upper GI endoscopy showed 15 patients with a Barrett’s segment ≥3 cm and 6 patients with a segment <3 cm. Eighteen patients had a hiatal hernia (range 3-8 cm) and seven had erosive esophagitis. No esophageal stricture was identified. Indications for surgery included failure of medical therapy in 10 patients and pulmonary complications of GERD in one patient; in the remaining 10 patients, surgery was chosen over long-term PPI therapy despite good symptom control.
Effect of surgery on reflux indices
All patients underwent laparoscopic Nissen fundoplication (LNF) by a single surgeon, with a mean operative time of 63.3 ± 6.14 minutes. There were no conversions and the mean hospital stay was 2.9 ± 2.46 days. Mean follow-up was 39 ± 6.32 months. Reflux symptom score (off antisecretory medications) at 6 months was significantly improved, from 37.5 ± 3.98 to a mean of 8.7 ± 2.46, P < 0.0001 ( Figure 1 ). This was supported objectively by a significant drop in esophageal acid exposure time, from a mean of 26.5 ± 3.9% to 2.1 ± 0.84%, P < 0.0001 ( Figure 2 ). Esophageal manometry showed significant augmentation of the lower esophageal sphincter, from a mean basal pressure of 3.71 ± 1.08 mmHg to 12.29 ± 1.34 mmHg, P = 0.0053 ( Figure 3 ). Both the physical and mental components of the SF-36 improved, from means of 44.8 ± 1.70 and 50.17 ± 2.5 points respectively to 50.32 ± 2.43 and 55.81 ± 1.45 points, P = 0.0673 and P = 0.0751 (Figure 4a – 4b ).
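For illustration, the paired pre/post comparisons above can be reproduced with a minimal, stdlib-only paired t statistic and the mean ± SEM summary used throughout the paper. The numbers below are hypothetical, not the patient data:

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean (SD / sqrt(n))."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)

def paired_t(pre, post):
    """Paired Student's t statistic and degrees of freedom for
    pre- vs post-operative measurements on the same patients."""
    diffs = [a - b for a, b in zip(pre, post)]
    mean, sem = mean_sem(diffs)
    return mean / sem, len(diffs) - 1

# hypothetical pre/post symptom scores for four patients:
t, df = paired_t([37, 40, 35, 38], [8, 10, 7, 9])
print(round(t, 2), df)  # -> 71.04 3
```

The resulting t statistic is then compared against the t distribution with n-1 degrees of freedom at the 0.05 level, as the statistical package does internally.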
Effect of surgery on Barrett’s mucosa
Of the 21 patients, none developed dysplastic changes or an increase in the length of the intestinal metaplasia segment. Regression of Barrett’s mucosa was observed in 9 patients; of these, 8 showed complete loss of the documented intestinal metaplasia on the most recent biopsies after a median follow-up of 3 years ( Table 1 ) ( Figure 5 ). These 8 patients included 6 with Barrett’s <3 cm and 2 with Barrett’s ≤4 cm. The one patient with partial regression had a reduction in the visible length of the intestinal metaplasia segment from 12 cm to 5 cm after 5 years of follow-up.
The goal of medical and surgical treatment of Barrett’s oesophagus is to provide long-term relief of GERD symptoms, control esophagitis, and hopefully prevent progression of Barrett’s mucosa that may lead to esophageal adenocarcinoma.
Clinical studies have shown that PPI therapy prevents acid reflux and fasting bile reflux but does not prevent postprandial bile reflux,[ 17 ] and there is only equivocal evidence that it prevents progression of Barrett’s mucosa.[ 15 18 19 ] In fact, there is evidence to suggest that acid suppression therapy may contribute to cell proliferation and adenocarcinoma development.[ 20 ] This observation has been attributed to the carcinogenic effects of unconjugated bile acids, which are most injurious to the esophageal mucosa at alkaline pH.[ 21 ] Acid suppression therapy promotes an alkaline environment for these unconjugated bile acids, which can reflux into the oesophagus. Surgical therapy, on the other hand, is effective in preventing both acid and bile reflux,[ 22 ] and hence minimizes exposure of the esophageal mucosa to these harmful bile acids. Thus, theoretically, it should be more effective than PPI therapy in preventing progression, and even causing regression, of Barrett’s. There is increasing evidence in favor of this theory.
Effect of surgical therapy on causing regression of Barrett’s
Pope et al were the first to describe complete regression of intestinal metaplasia, in 4 of 10 patients with Barrett’s who underwent successful open antireflux surgery.[ 23 ] Low et al also reported complete regression of Barrett’s oesophagus in 2 of 14 patients, partial regression in 10 of 14 patients, and disappearance of dysplasia in 4 of 14 patients, followed up for a mean of 25 months after open antireflux surgery.[ 24 ] DeMeester recently demonstrated that 73% of patients with intestinal metaplasia at the gastroesophageal junction and 4.4% of patients with an endoscopically visible segment of Barrett’s had complete regression and loss of intestinal metaplasia after laparoscopic antireflux surgery.[ 11 ] The only randomized study from the open antireflux surgery era was reported by Ortiz and colleagues.[ 19 ] They randomized 27 patients to the medical arm (4-year follow-up) and 32 patients to the surgical arm (5-year follow-up); regression was documented in 2 of 27 patients in the medical arm and in 8 of 32 patients in the surgical arm. Hofstetter et al reported the outcome of 97 patients with Barrett’s after antireflux surgery, 50 of whom underwent laparoscopic antireflux surgery. Among the 85 patients who completed a median follow-up of 5 years, intestinal metaplasia regressed to cardiac mucosa in 9 of 63 (14%) patients and low-grade dysplasia regressed to nondysplastic Barrett’s in 7 of 16 (44%) patients.[ 3 ] Bowers et al reported the results of 104 patients with Barrett’s oesophagus, of whom 97 underwent laparoscopic antireflux surgery, followed for a mean of 4.6 years. 
Of the 66 patients who remained on the surveillance protocol, 31 (47%) had documented loss of intestinal metaplasia in the tubular oesophagus.[ 12 ] Recently, Parrilla and colleagues reported complete disappearance of low-grade dysplasia after successful open Nissen fundoplication in 57 patients with Barrett’s oesophagus without esophageal stricture.[ 13 ] Our study confirms the findings of these studies: laparoscopic Nissen fundoplication resulted in complete loss of intestinal metaplasia in 8 of 21 (38%) patients and partial regression in one patient. This can be attributed to the prevention of esophageal exposure to carcinogenic bile acids. The effect seems to be most pronounced in patients with a Barrett’s segment ≤4 cm, suggesting that surgical therapy may be considered as a first-line approach in patients with a short segment of Barrett’s.
The evidence in support of surgery in reducing the risk of cancer development is even scarcer. McCallum et al [ 9 ] enrolled 338 patients with Barrett’s, of whom 256 had non-dysplastic Barrett’s on initial entry. Forty patients had open anti-reflux surgery and 216 continued on medical therapy. Twenty-nine patients of the surgical group had a mean follow-up of 62 months and none developed cancer; 152 patients of the medical group had a mean follow-up of 49 months and 2 developed esophageal adenocarcinoma. Similarly, Katz et al [ 10 ] retrospectively followed 102 patients with Barrett’s for a mean of 4.8 years. Dysplasia developed in approximately 8% of the medically treated patients, compared with none of the 15 matched patients treated by antireflux surgery, who developed neither dysplasia nor cancer. DeMeester et al [ 11 ] followed 60 patients with intestinal metaplasia of the oesophagus or cardia who had antireflux surgery: 15 patients had only intestinal metaplasia of the cardia and 45 had columnar epithelium with intestinal metaplasia visible within the oesophagus. After a median follow-up of 25 months in each group, no patient progressed to high-grade dysplasia or cancer. Hofstetter et al [ 3 ] followed 85 patients for a median of 5 years. In this study 50 patients underwent a laparoscopic procedure, 20 had a transthoracic procedure, 3 had abdominal Nissen operations, 9 had Collis-Belsey, and 3 had other partial wraps. No patient developed high-grade dysplasia or cancer in 410 patient-years of follow-up. Bowers et al followed 104 patients for a mean of 4.6 years. In this study 96 patients underwent a laparoscopic antireflux procedure. There was no progression of intestinal metaplasia to adenocarcinoma or high-grade dysplasia in any patient, over a total of 337 patient-years of follow-up.[ 12 ] Krska et al prospectively followed 75 patients with GERD; 8 patients had Barrett’s oesophagus and underwent LNF. 
They did not observe a single case of Barrett’s segment progression.[ 25 ]
None of our patients developed dysplasia or cancer during a mean follow-up of more than 3 years. This finding supports previous reports that high-grade dysplasia and adenocarcinoma can be prevented. However, as with all the other studies, our series is relatively small and the follow-up is just over three years. A larger study, preferably a randomized controlled trial with a medical arm and a long follow-up period, may help confirm the recent evidence in favour of surgery in the treatment of Barrett’s.
There are few studies that document the symptomatic outcome in patients with Barrett’s oesophagus after antireflux surgery. McDonald et al retrospectively reviewed 113 patients with Barrett’s oesophagus who underwent open antireflux surgery; 82.2% of the patients had excellent to good symptomatic outcomes during a median follow-up of 6.5 years after surgery.[ 26 ] Parrilla et al divided 177 patients into two groups: 57 patients with Barrett’s oesophagus (BE) and 120 patients without. Both groups underwent open antireflux surgery. 92% of patients with BE had an excellent to good clinical response during a mean follow-up of 5 years after surgery, compared with 90% of patients without BE during a mean follow-up of 6 years.[ 13 ] Farrell and colleagues compared symptom scores and reoperation rates in 570 patients with GERD (74 patients with Barrett’s oesophagus and 496 controls), all of whom underwent fundoplication. During a mean follow-up of 4 years, the heartburn, regurgitation, and dysphagia severity scores of the Barrett’s group were significantly improved and identical to those of the controls.[ 27 ] The report by Williamson et al [ 28 ] of 37 patients with a median follow-up of 5 years showed symptomatic relief of esophagitis in 92% of patients. Yau et al [ 29 ] published the outcome of a prospective series of 81 patients with Barrett’s oesophagus undergoing laparoscopic antireflux surgery, compared with 676 patients without Barrett’s oesophagus. After a median follow-up of 2 years, 6 (7%) of the patients underwent a subsequent operation, and the outcomes in terms of symptom score, dysphagia score, and satisfaction index were all favourable, with no statistically significant difference compared with individuals without Barrett’s oesophagus. 
In the study done by Hofstetter et al ,[ 15 ] reflux symptoms were absent in 67 of 85 (79%) patients at a median follow-up of 5 years after the surgery. Bowers et al followed a total of 104 patients with Barrett’s oesophagus who underwent fundoplication (97 patients underwent laparoscopic anti-reflux surgery). During mean follow-up of 4.6 years, only 27% of the patients reported moderate to severe reflux symptoms after the surgery.[ 12 ]
We documented a significant improvement in reflux symptom score after surgery, which supports earlier findings that the results of laparoscopic antireflux surgery in patients with Barrett’s are favourable. We will continue to follow these patients, as longer-term follow-up is necessary.
Quality of life in GERD patients with Barrett’s following fundoplication is rarely reported or studied. Bowers et al reported the clinical results of 104 patients with Barrett’s oesophagus who underwent fundoplication. At a mean follow-up of 4.6 years after surgery, 45 patients reported statistically significant improvement in seven of the eight quality of life domains of the SF-36 questionnaire, and this group was statistically identical to age-matched controls from the general population.[ 12 ] Kamolz et al [ 30 ] prospectively studied 249 patients (75 with Barrett’s oesophagus and 174 without) who underwent laparoscopic anti-reflux surgery. Quality of life was evaluated in both groups (BE vs non-BE) using the Gastrointestinal Quality of Life Index (GIQLI) preoperatively and at 3 months, 1 year, and 3 years after surgery, and the data were compared with the general population. GIQLI was significantly improved in both groups after surgery, and there were no differences in comparison with the general population; the authors concluded that laparoscopic anti-reflux surgery significantly improves quality of life in GERD patients with Barrett’s oesophagus. In our study, both the physical and mental components of the SF-36 improved after LNF, supporting these previous reports.
There are only a few studies which have evaluated the possible impact of the length of Barrett’s on its regression after surgery. DeMeester et al [ 11 ] reported complete loss of intestinal metaplasia in 11 of 15 (73%) patients with intestinal metaplasia of the cardia after antireflux surgery. Complete regression of intestinal metaplasia also occurred in 2 of 45 (4.4%) patients with visible Barrett’s, but both patients had columnar segments less than 3 cm in length; no patient with a segment of columnar epithelium 3 cm or longer had complete loss of intestinal metaplasia. Low et al [ 24 ] reported complete regression of intestinal metaplasia in 2 of 14 (14%) patients with short-segment Barrett’s oesophagus (less than 3 cm) after antireflux surgery, and 4 of 14 (28%) patients regressed from low-grade dysplasia to nondysplastic Barrett’s. In the study by Bowers et al , the 31 patients with loss of intestinal metaplasia had a mean preoperative visible columnar-lined epithelium length of 2.8 ± 2.4 cm, compared with 5.5 ± 3.7 cm in the 33 patients with persistent intestinal metaplasia ( P < 0.01). They reported that patients with short-segment (<3 cm) Barrett’s oesophagus were more likely to have regression of the Barrett’s segment after antireflux surgery than those with long-segment (>3 cm) disease.[ 12 ]
Our study confirms that only patients with relatively short segments of Barrett’s (<4 cm) may experience complete regression. This observation suggests that the metaplastic process may indeed be reversible if reflux-induced injury is eliminated early in its course.
More studies are required to evaluate this hypothesis and determine the impact of laparoscopic antireflux surgery on long-term survival in patients with Barrett’s oesophagus. | Introduction:
Recent studies have suggested that both laparoscopic and open anti-reflux surgery may produce regression of Barrett’s mucosa.
Material and methods:
We reviewed 21 patients (13M: 8F, mean age 46.7±3.18 years) with documented Gastroesophageal Reflux Disease (GERD) and non-dysplastic Barrett’s oesophagus (15 patients with a ≥3 cm segment, 6 patients with a <3 cm segment) on long term proton pump inhibitor therapy who underwent laparoscopic Nissen fundoplication (LNF) between 1993 and 2000. All patients had undergone pre- and yearly postoperative upper GI endoscopy with 4 quadrant biopsies every 2 cm. All patients also underwent pre- and 6 months postoperative 24-hr pH study, esophageal manometry, SF-36, and GERD symptom score. The mean duration of GERD symptoms was 8.4±1.54 years preoperatively. The mean follow-up after surgery was 39±6.32 months.
Results:
Postoperatively, there was significant improvement in reflux symptom score (37.5 ± 3.98 points versus 8.7 ± 2.46 points, P = 0.0001), % acid reflux in 24 hr (26.5 ± 3.91% versus 2.1 ± 0.84%, P< 0.0001) and an increase in lower esophageal sphincter pressure (3.71 ± 1.08 mmHg versus 12.29 ± 1.34 mmHg, P = 0.0053). Complete or partial regression of Barrett’s mucosa occurred in 9 patients. All patients with complete regression had <4 cm segment of Barrett’s. Progression or cancer transformation was not observed in any of the patients.
Conclusion:
LNF in patients with Barrett’s oesophagus results in significant control of GERD symptoms. LNF can prevent progression of Barrett’s oesophagus and in patients with Barrett’s <4 cm may lead to complete regression. | J Minim Access Surg. 2005 Mar; 1(1):21-28 |
PMC3016472 | 21234141 | INTRODUCTION
Since the initial successful application of the laparoscopic technique to resection of the colon by Jacobs and Fowler, this approach has been used fairly widely by several groups. With the maturing of surgical skills, refinements in instrumentation and the availability of newer energy sources, laparoscopic colorectal resections have become more successful and are associated with fewer complications. Despite this, some controversies still exist over the use of the laparoscopic technique in the treatment of malignant tumours. The debated issues include oncologic feasibility and safety and the likelihood of port-site metastasis. This non-randomized, comparative study between laparoscopic surgery and the traditional open procedure focussed on the safety of laparoscopic colorectal resection for malignancy. | MATERIAL AND METHODS
Patient Selection
One hundred and seventy-eight patients with colorectal cancer operated between September 2001 and September 2002 by laparoscopic approach or open surgery were included in the study. The inclusion criteria were: 1) age 18 to 75 years; 2) curative resection: 18 cases with Duke’s D stage disease were excluded, leaving 160 patients for analysis. Among them, 47 patients underwent laparoscopic colorectal resection (LCR) and 113 underwent open resection (OR). The type of operation was chosen by the patients themselves. Between the two groups, there were no significant differences in age, tumour location or postoperative staging ( Table 1 ). The operations performed included right hemicolectomy, left hemicolectomy, transverse colectomy, sigmoid colectomy, low anterior resection and abdominoperineal resection. All operations were performed by an experienced surgeon specialized in laparoscopic and open surgery respectively.
Principles and technique of operation
Laparoscopic as well as open operations conformed to the oncological principles of surgery for colorectal cancer: avoidance of excessive tumour handling, protection of the incision and adequate lymphatic clearance. For malignant tumours of the rectum, the procedure must incorporate a total mesorectal excision (TME). The laparoscopic group included laparoscopically-assisted colorectal resections and hand-assisted laparoscopic operations.
Collection of CO 2 filtrated fluid
In the laparoscopic group, pneumoperitoneum was established first and maintained at an intraperitoneal pressure of 15 mmHg. After 4 to 5 trocars were inserted, one of the side ports on the trocars was opened and the CO 2 was filtered through 100 ml of saline in a hermetic container. This fluid was then collected for analysis.
Collection of the peritoneal washing
In both groups, samples for cytology were collected twice: immediately after the opening of the peritoneal cavity, and just before abdominal closure. Upon opening the peritoneal cavity at laparotomy, or after insertion of the trocars, 100 ml to 200 ml of saline was instilled and the fluid aspirated; this served as the preoperative sample. At the end of the operation the wound and instruments were rinsed with 500 ml-1000 ml of saline, and 100 ml of irrigation fluid was collected in the same manner as the postoperative sample of peritoneal washing and the instrument flushing fluid.
Examination of the exfoliated tumour cells
Cytological examination of the fluid samples collected was performed after the process of centrifugation, filtration, sedimentation, smearing, making of cell blocks and hematoxylin and eosin staining. Microscopic examination was performed to detect exfoliated cancer cells, and the positivity between the two groups was compared.
Comparison of oncology results
A single pathologist blinded to the method of resection (LCR or OR) examined the specimens. The diameter of the tumour, length of the distal margin, length of the bowel excised, and the number of lymph nodes harvested were measured. Depending on the anatomic location, the lymph nodes were classified into 3 stations: epicolic and paracolic lymph nodes, intermediate lymph nodes, and the principal lymph nodes (at the origins of the superior and inferior mesenteric vessels or colic artery).
Follow up
All patients completed the follow-up. They all had a clinical examination, CEA and CA199 estimation and the chest x-ray. In addition, the ultrasonography of the liver, computerised tomography of the abdomen and colonoscopy were undertaken as needed.
Statistical analysis
SPSS 10.0 software package was used for statistical analysis. Student t -test and Chi-square test were used to compare the categorical and parametric data respectively. P < 0.05 was considered statistically significant. All data were expressed as mean±standard deviation (SD). | RESULTS
Both groups were well matched for age, location and size of tumour and Duke’s stage ( P > 0.05) ( Table 1 ). In the laparoscopic group, 45 cases were completed either totally laparoscopically or by a hand-assisted laparoscopic operation. Two cases were converted to an open procedure due to dense adhesions.
In the LCR group, no tumour cells were detected in the CO 2 filtrated fluid. The pre- and postoperative peritoneal fluid cytology was positive in 3/47 (6.38%) in LCR group and 8/113 (7.07%) in the OR group. The incidence of negative preoperative peritoneal cytology with a positive postoperative cytology was 2/47 (4.26%) in the LCR group and 5/113 (4.42%) in the OR group. There was no significant difference between the two groups. Similar results were observed in the instrument flushing fluid ( Table 2 ).
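The categorical comparisons above correspond to a 2×2 Pearson chi-square test. As an illustrative, stdlib-only sketch (not the SPSS output), the statistic can be computed directly from the positive-cytology counts reported here (3 of 47 in the LCR group vs 8 of 113 in the OR group):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# positive vs negative postoperative cytology, LCR vs OR groups:
chi2 = chi_square_2x2(3, 47 - 3, 8, 113 - 8)
print(round(chi2, 3))  # -> 0.025, well below the 0.05 critical value of 3.84
```

A statistic this far below 3.84 (the 1-df critical value at the 0.05 level) is consistent with the reported lack of a significant difference between the groups.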
The mean lengths of colon resected in the LCR group and OR group were (19.38±7.47) cm and (18.60±8.40) cm respectively. The distal margins from the rectal tumour were (4.19±2.52) cm and (4.16±2.00) cm, and there were no positive margins in either group. Also, there was no difference between the two groups as far as lymph node yield was concerned (LCR: 13.71±9.29; OR: 12.10±9.74, P =0.51). The rate of regional lymph node involvement was equivalent between the two groups. Although the number of principal lymph nodes in the laparoscopic group was higher than that obtained in the open group, the difference did not reach statistical significance ( Table 3 ).
All the cases were available for follow-up for a median period of 13.8 months (range, 5-20 months). The local recurrence rate of the laparoscopic group was 2.13% (1/47), 1.77% (2/113) of the open group ( P =0.88). The incidence of distant metastasis were 6.38% (3/47) and 6.19% (7/113) respectively ( P =0.96). | DISCUSSION
The advantages of laparoscopic surgery, such as smaller incisions, reduced bleeding, quicker recovery and fewer complications, have been well established. With the application of ultrasonic shears, endoscopic staplers and other instruments, as well as the maturing of surgical skills, minimally invasive surgery is being widely utilized in the treatment of gastrointestinal cancers, particularly colorectal malignancy. However, laparoscopic curative colorectal resection has not yet gained widespread approval, and most surgeons still view it with some caution due to concerns about its oncological feasibility and safety. Issues such as whether laparoscopic colorectal surgery for malignancy can conform to oncological principles, whether the short-term outcomes and long-term survival are comparable to open surgery, and the problem of port-site metastases still remain controversial.
Since the first report of tumour cell implantation at the port-site after laparoscopic surgery in 1993, this has become an important issue hindering the application of laparoscopic surgery for the treatment of colorectal cancer. The incidence of early port-site recurrence was 1.4%-21.0%. In the last 5 years, many experienced surgeons have reported an incidence between 0-0.2%, 1 2 which is not significantly different from that achieved by laparotomy. Thus it was concluded that the main reason for the early reported port-site metastases was improper handling of the tumour, causing dissemination of cells in the peritoneal cavity and seeding at the incision or port-site.
In 1993, we successfully undertook the first laparoscopic colorectal resection in our country. Since then, we have steadily gained more experience and standardized the operative procedure. In all our patients we strictly followed the “no-touch” technique: avoiding direct manipulation of the tumour, limiting the instruments inserted and protecting the mini-laparotomy with a plastic sleeve during extraction of the specimen. All these measures are suggested to prevent the implantation of tumour cells at port-sites. In our study, the incidence of tumour cells detected in the instrument flushing fluid was not significantly different between the two groups. Similarly, the positive cytology rates in the peritoneal washings after operation were similar in both groups. We conclude that the laparoscopic surgical technique is unlikely to increase the risk of tumour cell dissemination in the abdominal cavity or of metastasis during instrument exchanges. During the short-term follow-up, no port-site or incision recurrence was observed in either group in our study.
Some other studies considered that port-site implantation was caused by the so-called “Chimney effect” of the pneumoperitoneum, as presented by Tseng. 3 This refers to tumour cells floating in the abdominal cavity under the effect of the CO 2 pneumoperitoneum; during the operation, with a reduction in intraperitoneal pressure, these cells could escape from the trocar outlets, and some could adhere to the port-site or incision and the margins around them, producing port-site implantation. There have been many reports challenging the hypothesis of the “Chimney Effect”. Iwanaka 4 performed laparoscopic tumour biopsy both under pneumoperitoneum and with a gasless system; by comparing the port-site recurrence (PSR) rates between them, he concluded that CO 2 pneumoperitoneum was not essential for the development of PSR. Another animal experiment, from Hao Wang, 5 showed that port-site and visceral metastases developed only under consistently high pneumoperitoneum pressure and a certain concentration of tumour cells (1.6×10⁷/ml). The experiment of Wittich 2 also confirmed that the routine procedures undertaken during a laparoscopic operation did not result in the development of port-site metastasis. The “Chimney Effect” did not occur in our experience: none of the patients who had positive postoperative peritoneal washings subsequently developed port-site metastasis.
Besides the advantage of minimal invasion, laparoscopically performed colorectal resections can be oncologically as radical as traditional operations. The laparoscopic procedure does not deviate from the steps of the traditional radical excision, as it also includes high ligation of the vessels, an adequate distal margin from the tumour, adequate lymphadenectomy and mesenteric resection. The majority of clinical reports have confirmed the oncological radicality achieved by laparoscopic surgery for colorectal cancers as compared to laparotomy. Kockerling 6 in a retrospective series of 116 patients with colon cancer undergoing laparoscopic resection reported adequate tumour resection. Franklin 7 reported a randomized comparative study between 191 laparoscopic and 224 open colorectal resections for malignant tumours. No significant differences were observed between the two groups in terms of the extent of lymph node harvest, the length of the specimen, or the distance of the proximal or distal margin from the tumour.
In our series, all the pathological parameters, including the length of bowel resected, the distal margin from the inferior edge of the rectal tumour, the number of lymph nodes harvested and the positive rate for each station, were not significantly different between the LCR and OR groups. Studies reported from overseas 8 9 and our series indicate that the extent of lymph node harvest at laparoscopic operation is comparable to that after an open procedure. Short-term outcomes show the local recurrence rate and survival to be equivalent between the two groups.
In conclusion, laparoscopic colorectal resection for the treatment of colorectal cancer can be performed safely and effectively. | Aim:
This study aimed to assess the safety and feasibility of laparoscopic curative resection for colorectal cancer through clinical practice and basic research.
Material and Methods:
From September 2001 to September 2002, 47 patients with colorectal cancer were treated using a laparoscopic approach and compared with 113 patients who underwent traditional open operation. The length of the intestinal segment excised, tumour size, lymph node clearance, and local recurrence and distant metastasis rates during the period of follow-up were compared between the groups. The other part of the study involved detection of exfoliated tumour cells in the peritoneal washings before and after surgery and in the instrument flushing fluid in both groups, with the results compared. For the laparoscopic cases, the filtrated liquid of the CO 2 pneumoperitoneum was also checked for tumour cells.
Results:
No significant differences existed in tumour size, operative site or procedure between the two groups. No exfoliated tumour cells were detected in the CO 2 filtrated liquid. There was no difference between the groups in the incidence of exfoliated tumour cells in the peritoneal washings before and after surgery, or in the fluid used for flushing the instruments. The total number of lymph nodes harvested was 13.71±9.57 in the laparoscopic group and 12.10±9.74 in the traditional group. Similar lengths of colon were excised in both groups: (19.38±7.47) cm in the laparoscopic and (18.60±8.40) cm in the traditional group. The distal margins of resection for rectal cancer were (4.19±2.52) cm and (4.16±2.00) cm respectively. The local recurrence rate was 2.13% (1/47) versus 1.77% (2/113), and the distant metastasis rate 6.38% (3/47) versus 6.19% (7/113) respectively. Both rates were comparable between laparoscopic and traditional surgery for colorectal cancer.
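As an aside for readers who wish to check the comparability claim, proportions of this size are usually compared with a two-sided Fisher's exact test. The sketch below is illustrative only and not part of the original analysis; the function name and the pure-Python hypergeometric implementation are ours.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def prob(x):  # hypergeometric probability of x events in the first group
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum the probabilities of all tables as likely or less likely than the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Local recurrence: 1/47 (laparoscopic) vs 2/113 (open)
p_local = fisher_exact_p(1, 46, 2, 111)
# Distant metastasis: 3/47 vs 7/113
p_distant = fisher_exact_p(3, 44, 7, 106)
print(round(p_local, 3), round(p_distant, 3))  # both well above 0.05
```

With counts this close to expectation, both p-values are far from significance, consistent with the study's conclusion of comparable oncological outcomes.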
Conclusion:
Laparoscopic curative resection for colorectal cancer can be performed safely and effectively. In the treatment of colorectal malignancy, laparoscopic resection can achieve similar radicality as compared to the traditional laparotomy. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):29-33 | oa_package/c7/63/PMC3016472.tar.gz
|||
PMC3016473 | 21234136 | Is there any rationale or even requirement for yet another journal devoted to Minimal Access Surgery (MAS)? MAS is snowballing with the volume and speed of an avalanche and is already well served by excellent, established Journals whose monthly contribution to MAS is so diverse and extensive, that any surgeon would be hard-pressed to go through all that is already being written on this subject. Why then the need for this Journal of Minimal Access Surgery?
The Asia-Pacific region is a vast ocean of humanity, which after decades is reaching for its economic and political potential. In similar vein, surgery in this vast area is taking giant steps forward, and these steps are best seen in the phenomenal growth of MAS. When the first hesitant steps of laparoscopic surgery were taken in India in early 1990, the first in the developing world, “surgical activists” came down heavily on proponents of this patient-friendly surgery on grounds that it was “inappropriate” for poor countries, was a surrender to the West, would distort the values of surgeons in India and bring into question the ethics of those who promote this surgery.[ 1 2 3 ] The phenomenal growth of MAS throughout the Asia-Pacific region and its spread to small towns and the less privileged is a fitting response to those who would thwart surgical progress on grounds of poverty. There can be no doubt that vast discrepancies exist in the scope and quality of surgical care in all developing countries and great efforts need to be made to rectify the enormous gap between the rich and the poor, economically as also surgically,[ 4 ] but if surgery is a humanitarian science it should be our unrelenting endeavour to spread the benefits of surgical advance to all people in all places.[ 5 ] This then is the rationale for the JMAS. Having witnessed over the last fifteen years the phenomenal growth, vast numbers of patients treated and the incredibly impressive quality of MAS all over the Asia-Pacific region, the Indian Association of Gastrointestinal Endo-Surgeons felt that this volume and quality of MAS, as specifically practised in an economically deprived milieu, needed documentation and dissemination, and should be “show-cased” in its own Journal.
The flame of MAS was first lit in the developed world. The U.S. and countries in Europe have greatly helped nurture and spread the growth of MAS in poor countries by spreading education and funding equipment in the early years. Perhaps more importantly, these countries have set a benchmark in standards for self-appraisal and evaluation, in the quality of their surgical literature, and in the leadership and thrust their two major Societies, the Society of American Gastrointestinal Endoscopic Surgeons [SAGES] and later the European Association for Endoscopic Surgery [EAES], have had on the growth and quality of MAS worldwide. It is the anticipation of the JMAS that these countries will continue their support to the developing world as also to JMAS with their quality articles, input and suggestions, an anticipation well reflected by the composition of this Editorial Board.
However, the raison d’etre of this Journal is primarily to give scientific expression to the work done in this region, as also in other developing countries in Latin America and Africa. It is a sad fact that, perhaps as a result of the pressure of the large volume of day-to-day work, or a lack of appreciation of their own quality and results, the vast majority of surgeons here do not give structured, statistically significant expression to their surgical work. Just as fifteen years back we accepted the challenge to spread MAS in our part of the world, we now urge the vast army of surgeons active in MAS in all developing countries to give substance and expression to our extensive patient load and experience, share our cost-saving expertise, our innumerable innovations born out of stark necessity, our problems and solutions, and, by encouraging more and more hesitant colleagues, help spread our mission to all corners.
One reads in the MAS literature articles devoted to successful outcomes and positive evaluations of procedures. Rarely does one read of mishaps or complications; these are so conveniently swept under the carpet. Usually these unfortunate situations are isolated cases, but if reported they would very possibly be of greater practical and educational value than many of the success stories. It is the intent of this Journal to publish as many such reports/papers as possible and give total encouragement to those brave and positively motivated surgeons who have the courage and integrity to share their problem cases.
The world is fast becoming smaller, and countries and continents are rapidly coming closer. The International Federation of Societies of Endoscopic Surgeons (IFSES), a Federation of ten large Societies of endoscopic surgeons which work in unison to supplement each other and help improve the quality and spread of MAS, has four of its Member Societies in the Asia-Pacific region – the Japan Society for Endoscopic Surgery [JSES], the Society of Endoscopic and Laparoscopic Surgeons of Asia [ELSA], the Chinese Society for Laparo-Endoscopic Surgery [CSLES] and the Indian Association of Gastrointestinal Endo-Surgeons [IAGES] – and two in Latin America – the Federation Latinoamericana de Cirugia [FELAC] and the Association Latinoamericana de Cirujanos Endoscopistas [ALACE]. The IAGES stretches out in support and co-operation to all these Societies and welcomes their contributions to JMAS. Most of these Societies have had, and continue to have, the support of the two major Societies of the developed world, the Society of American Gastrointestinal Endoscopic Surgeons and the European Association for Endoscopic Surgery, in their bid to narrow the gap between the privileged and the deprived.
The Editorial Board is fully aware of the hurdles and problems it faces in establishing and indexing this Journal. On the other hand, this Editorial Board stands committed to its conviction that this Journal will be an important vehicle in improving the quality and spread of MAS in this region. In this cause we confidently anticipate the total support not only of the entire surgical community in the Asia-Pacific region but of surgeons from all over the world. If success is defined not so much by the position reached as by the determination to overcome obstacles while trying to succeed, this Journal has charted its road-map in the correct direction. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):3-4 | oa_package/6c/fd/PMC3016473.tar.gz
|||||||
PMC3016474 | 21234142 | INTRODUCTION
The incidence of retained or recurrent calculi in the biliary tree following cholecystectomy is between 1.1% and 7%.[ 1 ] Management depends on the location of the calculi within the biliary tree, and retained calculi in the cystic duct remnant are highly unusual. A cystic duct remnant calculus causing Mirizzi syndrome is exceedingly rare, with only 3 cases reported to date.[ 2 – 4 ] We present the management of such a case, only the fourth of its kind.
In 1948 Mirizzi described how calculi in the cystic duct or gall bladder infundibulum cause extrinsic compression of the common hepatic duct.[ 5 ] The eponymous syndrome has since been identified in 0.7-1.4% of all patients undergoing cholecystectomy.[ 6 ]
Gallstone impaction in Hartmann’s pouch or the cystic duct causes an inflammatory reaction, resulting in biliary obstruction. Calculi in a long parallel cystic duct predispose to Mirizzi Type 1, where inflammation causes extrinsic bile duct compression. If inflammation persists, the gall bladder can adhere to the bile duct, causing pressure necrosis and fistula formation.[ 7 ] A cholecystobiliary fistula occluding one-third of the duct is a Mirizzi Type 2 abnormality, whereas occlusion of two-thirds of the duct or complete occlusion are classified as Mirizzi Type 3 and 4 respectively. ERCP is the primary method of diagnosing fistulae, and also has an important therapeutic role, including stone retrieval and stent placement.
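The grading rule above can be restated compactly. The function below is our own illustrative encoding of that rule, not a clinical tool; the boundary handling between the stated fractions is our reading of the text.

```python
def mirizzi_type(fistula_present: bool, occluded_fraction: float = 0.0) -> int:
    """Grade Mirizzi syndrome from the rule stated above:
    Type 1 - extrinsic compression only, no cholecystobiliary fistula;
    Type 2 - fistula occluding up to one-third of the duct;
    Type 3 - fistula occluding more than one-third but not the whole duct;
    Type 4 - complete occlusion of the duct.
    """
    if not fistula_present:
        return 1
    if occluded_fraction <= 1 / 3:
        return 2
    if occluded_fraction < 1.0:
        return 3
    return 4
```

For example, a fistula occluding two-thirds of the duct circumference maps to Type 3, matching the classification in the review cited above.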
An extensive review of 219 patients with Mirizzi syndrome showed that the vast majority of cases were Type 2 (41%) or Type 3 (44%), whereas Type 1 lesions occurred in only 11%.[ 7 ] Surgical management is controversial; however, there is consensus that partial cholecystectomy and hepaticojejunostomy are the most appropriate procedures for Type 1 and 4 lesions respectively.[ 6 7 ] Various strategies have been used for Type 2 and 3 lesions, including percutaneous extraction, aiming to avoid bile duct stenosis. Whether fistula closure is best achieved around a T-tube following cholecystectomy[ 6 ] or via choledochoplasty[ 7 ] is controversial, with most evidence coming from small case series.
Laparoscopic management of Mirizzi syndrome depends on the ability to delineate structures within Calot’s triangle, which can be difficult when a large impacted stone or fistula is present. Laparoscopic management of Type 1 cases is technically feasible and safe, whereas laparoscopic management of fistulae requires considerable skill and experience, and may be associated with significant morbidity and mortality due to retained CBD stones.
Three previous case reports of Mirizzi syndrome due to cystic duct remnant calculi were successfully managed via open cholecystectomy[ 2 4 ] and endoscopically.[ 5 ] To our knowledge this is the first case of its occurrence in association with an anomalous cholecystohepatic duct, a rare anomaly. Although an anomalous cholecystohepatic duct may predispose to calculus formation, the short interval between cholecystectomy and readmission in this case (1 year) suggests that the calculus was retained rather than formed de novo. Furthermore, gall bladder histology demonstrated multiple gallstone fragments, which may also have been present in the cystic duct remnant. Biliary anatomical variations are frequently encountered during cholecystectomy; awareness of these variations is a vital prerequisite before attempting biliary surgery.
Retained cystic duct remnant stones are a rare cause of Mirizzi syndrome. Although endoscopic retrieval was fortunately possible in this case, cystic duct remnant calculi are probably best managed via an open procedure. Post-cholecystectomy Mirizzi syndrome should be considered as a cause of biliary obstruction once more common etiologies have been excluded.
A 36-year-old woman presented with a two-day history of epigastric pain, nausea and vomiting. One year previously she underwent laparoscopic cholecystectomy for recurrent biliary pain. Gall bladder histology demonstrated chronic inflammatory changes and fragmented gallstones, but no complete calculi.
Physical examination revealed she was afebrile but tender in the epigastrium, with no palpable masses. Liver function tests (LFTs) showed normal bilirubin and albumin, however liver enzymes were deranged (normal ranges): alkaline phosphatase 125 iU/L (30-120 iU/L), gamma glutamyl-transpeptidase (GGT) 138 iU/L (10-35 iU/L) and alanine aminotransferase (ALT) 154 iU/L (0-40 iU/L). Amylase, urea and electrolytes, full blood count, and coagulation studies were all normal. Magnetic resonance cholangio-pancreatography (MRCP) showed a normal common bile duct (CBD) with no evidence of calculi, however the cystic duct remnant was dilated distal to a round filling defect, suggesting a calculus within the remnant.
Endoscopic retrograde cholangio-pancreatography (ERCP) demonstrated focal narrowing of the proximal CBD. The ducts of the left liver lobe and right upper lobe segments were patent but lower segments of the right lobe did not fill with contrast ( Figure 1 ). Comparing the MRCP and ERCP suggested that the right lower lobe segments drained into the cystic duct remnant (an aberrant cholecystohepatic duct), which was occluded by a calculus compressing the CBD.
Elective laparotomy with exploration of the cystic duct remnant and CBD was planned; however, 24 hours later the patient developed worsening epigastric pain. She had developed icterus, with pale stools and dark urine. On examination she was jaundiced but her abdomen was non-tender. LFTs were consistent with biliary obstruction: bilirubin 134 umol/L (3-21 umol/L), ALP 281 iU/L, GGT 381 iU/L and ALT 435 iU/L. Repeat MRCP demonstrated CBD dilatation (10 mm) secondary to a calculus at its distal end ( Figure 2 ). Presumably the calculus had migrated from the cystic duct remnant. The calculus was subsequently retrieved during ERCP. The patient made an uneventful recovery and has been symptom-free after 3 months follow-up. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):34-36 | oa_package/49/94/PMC3016474.tar.gz |
|||
PMC3016475 | 21234143 | INTRODUCTION
Laparoscopy is used as minimal access surgery for colorectal cancer and liver tumours.[ 1 ] Herein, we describe a case of colon cancer and synchronous liver metastasis in which both tumours were treated laparoscopically. | DISCUSSION
Laparoscopic surgery has become a standard technique for the treatment of benign diseases. Further development of instruments and techniques has made it possible to apply laparoscopic surgery to malignant diseases.[ 3 ] To our knowledge, this is the first report of laparoscopic resection of both a primary colorectal tumour and its synchronous liver metastasis. In comparison to conventional surgery, laparoscopic surgery is beneficial with respect to short-term outcome, including earlier recovery and less pain. Our research in a murine model has shown that laparoscopic surgery is advantageous for gastrointestinal malignancies due to reduced impairment of systemic and intraperitoneal cell-mediated immune responses.[ 4 ] Although a comparison of long-term outcomes between laparoscopic and conventional surgery for advanced colon cancer has recently been published,[ 5 ] there have been no reports of randomized trials of laparoscopic hepatectomy in patients with metastatic liver tumours. Future study is needed to further evaluate the usefulness of laparoscopic hepatectomy.
A 70-year-old woman was admitted to our hospital with a diagnosis of cancer of the sigmoid colon. Barium enema showed an encircling mass in the sigmoid colon ( Figure 1 ). Computed tomography (CT) revealed a low-density area in the left lateral segment of the liver ( Figure 2 ). Liver function was normal, and the preoperative carcinoembryonic antigen level was 6.2 ng/ml (normal range, < 5 ng/ml). The patient had no history of abdominal surgery. We planned a two-stage procedure because performing sigmoid colectomy and hepatectomy simultaneously was considered too invasive for this elderly patient. In addition, interval hepatic resection for synchronous metastases of colorectal cancer, with a routine waiting period of 4-6 months, has been recommended to improve patient selection.[ 2 ] Therefore, laparoscopic sigmoid colectomy was performed first. With the patient in the supine position, pneumoperitoneum of 8 mmHg was established, and trocars were placed in the right upper and lower abdomen ( Figure 1 ). After the sigmoid colon was mobilized from the surrounding tissues, a skin incision (6 cm) was made in the left lower abdomen. The sigmoid colon was exteriorized and resected through the skin incision after appropriate barrier protection of the wound edges was ensured. Operation time was 372 min, and estimated blood loss was 70 ml. The postoperative course was uneventful; first flatus occurred on day 2, a solid diet was started on day 3, and the patient was discharged home on postoperative day 11.
Five months after the first operation, CT revealed slight enlargement of the liver tumour (25 mm to 37 mm in diameter), but no new lesions ( Figure 2 ). We performed laparoscopic partial S2/3 hepatectomy. After CO 2 insufflation at a pressure of 8 mmHg, inspection of the peritoneal cavity revealed no remarkable adhesions. Four trocars were placed in the upper abdomen, as shown in Figure 2 . The falciform, left triangular and coronary ligaments were dissected, and the left hepatic lobe was mobilized. Hepatic resection was performed with an endoscopic autosuture stapler (EndoGIAII, US Surgical, Norwalk, CT, USA) under lower pneumoperitoneum pressure to prevent gas embolism. The resected specimen was removed with an EndoCatchII device (US Surgical) through the port site, which was enlarged to 3 cm in the supraumbilical area. Operation time was 167 min, and blood loss was 20 ml. The postoperative course was uneventful and the patient was discharged home on day 14. She is currently doing well, with no evidence of disease recurrence during the 8 months since the procedure. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):37-38 | oa_package/04/a2/PMC3016475.tar.gz
||||
PMC3016476 | 21234144 | A 40-year-old female presented with recurrent upper abdominal pain immediately after taking food. Upper gastrointestinal endoscopy revealed a gastric diverticulum (GD) measuring 3 cm × 3 cm on the anterior wall near the pylorus, presenting as a ‘double pylorus’ ( Figure 1 ). GDs are among the rarest and most controversial gastrointestinal pathologies; very few cases are reported in the literature. Usually they are asymptomatic, single, saccular in shape, 1-4 cm in size and predominantly encountered in the 5 th or 6 th decade of life. Surgical intervention is required only in symptomatic patients and complicated cases. The most common diagnostic dilemma is that a GD simulates a left adrenal mass radiologically.[ 1 ] A GD should be differentiated from a gastroduodenal fistula, or a double-channel pylorus, which is caused by a penetrating ulcer in the distal antrum that erodes directly into the base of the duodenal cap or into the bulb. In this condition two channels communicate between the antrum and the duodenum: the true pyloric canal and the fistula. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):39 | oa_package/e2/69/PMC3016476.tar.gz
|||||||
PMC3016477 | 21234137 | There has been an alarming increase in the incidence of oesophageal adenocarcinoma in Europe and America and a similar epidemiological trend is likely in Asia. A well-defined morphological sequence in the pathogenesis of oesophageal carcinoma has been recognised, with the serial development of specialised intestinal metaplasia (Barrett’s oesophagus, BE) and progressive grades of dysplasia. 1 BE is readily recognised by a salmon-pink appearance at endoscopy, with characteristic goblet cells at standard haematoxylin-eosin staining of biopsy specimens, and identifies individuals at increased risk of oesophageal adenocarcinoma.
The predominant aetiological factor for BE is gastroesophageal reflux disease (GORD), wherein the oesophageal mucosa is abnormally exposed to secretions from the stomach (hydrochloric acid) and the duodenum (bile salts). Long-standing GORD is believed to switch the differentiation of pluripotential oesophageal epithelial stem cells to a BE lineage, with subsequent clonal expansion of the novel cell population. 1 The relative injurious potential of the gastric and duodenal components is vigorously debated and there is no conclusive evidence that one is any worse than the other. It is also argued that bile reflux is more damaging in the alkaline environment that is created by selective control of gastric acid 2 but there is equal evidence to the contrary. 3
An important aim in the treatment of BE patients is to reduce the risk of oesophageal carcinoma. Proton pump inhibitor (PPI) therapy is highly effective in controlling GORD symptoms for BE patients. However, is symptom-control the optimal clinical end-point for carcinoma-risk reduction? Abnormal acid reflux (oesophageal pH < 4 for longer than 4.5% of the monitoring period) or abnormal bile reflux (Bilitec, spectrophotometric absorbance > 0.14 for longer than 1.8% of the monitoring period) persists in up to 50% of patients with long-segment BE, despite good symptom-control with PPI treatment. 4 Laboratory evidence and limited clinical data suggest that mere symptom-control is insufficient and objectively confirmed normalisation of acid and bile reflux is important to inhibit carcinogenesis. For example, in BE biopsy specimens, cellular proliferation is significantly reduced and cellular differentiation is increased following normalisation of acid reflux by PPI therapy. In contrast, there is no change in cellular events in patients with persisting abnormal acid reflux. 5 Eradication of acid reflux can be achieved by serially increasing the PPI dose but this does require repeated pHmetry and is laborious. 6 PPI therapy does also decrease bile reflux, probably by decreasing the volume of the refluxate, but eradication is unreliable. 4
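To make the quantitative criterion concrete, the sketch below computes the percentage of monitoring time with oesophageal pH < 4 from evenly spaced samples and applies the 4.5% cut-off quoted above. The function name and the 24-hour trace are hypothetical illustrations of ours, not part of any monitoring software.

```python
def pct_time_below(samples, threshold=4.0):
    """Percentage of evenly spaced pH samples falling below the threshold."""
    return 100.0 * sum(s < threshold for s in samples) / len(samples)

# Hypothetical 24-hour trace sampled once a minute (1440 samples),
# of which 80 minutes show acid reflux (pH 3.5)
trace = [3.5] * 80 + [6.2] * 1360
exposure = pct_time_below(trace)       # 80/1440 of the period, about 5.6%
abnormal_acid_reflux = exposure > 4.5  # cut-off quoted in the text
```

The same fraction-of-time calculation, with an absorbance threshold of 0.14 and a 1.8% cut-off, would apply to Bilitec bile reflux monitoring.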
Proponents of laparoscopic anti-reflux surgery (LARS) argue superiority over PPI therapy because an operation can restore the physiology of the gastroesophageal junction and provide highly effective control of both acid and bile reflux. Many GORD patients have good symptom-control with PPI but prefer LARS in order to discontinue life-long medication or to obtain relief from intermittent recurrence of heartburn due to inadvertently missing a dose or nocturnal acid breakthrough. Incomplete symptom-control with PPI, pulmonary aspiration and large-volume regurgitation are some other good indications for LARS. Large series of LARS from specialised centres report excellent results and the functional benefits for BE patients appear similar to those for uncomplicated GORD. 7 The critical issue in promoting and popularising LARS is that the operation is technically demanding as well as uniquely surgeon-dependent and, unlike PPI therapy, the results of major studies cannot be simply generalised. Mature, advanced laparoscopic skills are essential and a definite learning curve for LARS has been recognised. 8 Peri-operative complications are likely under-reported in the literature and the durability of effectiveness of LARS has been questioned. 9 Finally, the modified Nissen fundoplication is generally considered to be the standard LARS but experts continue to disagree about several technical issues: What is the place of the posterior partial fundoplication (Toupet) and the anterior partial fundoplication (Watson)? Should the short gastric vessels be routinely divided? How is the short oesophagus recognised pre-operatively and does it require extended trans-mediastinal dissection or gastroplasty? Should a prosthesis be used for hiatal closure? Morbid obesity is increasingly prevalent and is an independent risk factor for oesophageal adenocarcinoma. Should a laparoscopic bariatric procedure be the preferred anti-reflux operation for the morbidly obese?
In this issue of JMAS , Bamehriz et al report a series of 22 patients with BE and laparoscopic Nissen fundoplication. 10 The authors are to be commended on meticulous post-operative follow-up by pHmetry and quality of life scores, with excellent functional results. There was complete regression of BE in all cases with BE length < 4cm and it is suggested that surgical therapy may be considered a first-line approach for short-segment BE. Such a recommendation is contentious. It is difficult to definitively evaluate any therapy for cancer-risk reduction in BE because of the paucity of data regarding the final outcome of adenocarcinoma. Changes in length of the BE segment are commonly used as a clinical surrogate for the risk of adenocarcinoma but the biological accuracy of such an approach is undefined. Some other confounding issues are inter-endoscopy and intra-endoscopy variability in length, squamous regeneration over “buried” BE and “pseudo-regression” due to repositioning of the lower oesophagus at anti-reflux surgery. Long-term data are, therefore, important and a recently published, five-year (median) follow-up of a randomised trial of medical therapy versus anti-reflux surgery is intriguing. 11 Only those patients with successful anti-reflux surgery (normal post-operative pHmetry and Bilitec results) remained free of adenocarcinoma and such data strongly support a well-performed LARS for BE. It is possible that other trials, without post-operative reflux monitoring, may not have reported superiority of surgery over PPI therapy because of contamination with unsuccessful operations.
So, what is the role of LARS for patients with BE? LARS provides excellent symptom-control and patients with BE should be subject to the same selection criteria as those with uncomplicated GORD. There is a resurgence of interest in anti-neoplastic potential of LARS following the recent, unexpected withdrawal of COX-2 inhibitor drugs, which showed promise as chemo-preventive agents in BE. 12 A novel clinical trial of PPI therapy versus LARS, with intent to achieve normal pH and Bilitec measurements in both arms and serial evaluation of molecular genetic changes in BE biopsies, has been proposed. 13 However, until further data are available, cancer-risk reduction should not be the primary indication for LARS in BE. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):5-7 | oa_package/b4/2f/PMC3016477.tar.gz |
|||||||
PMC3016478 | 21234138 | INTRODUCTION
Thoracoscopy in children was initially proposed as a method of obtaining pulmonary biopsy specimens in immunocompromised patients. With further refinements in the technique and the development of better instrumentation, its scope has widened tremendously, with more complicated procedures such as PDA ligation, thymectomy, Heller’s myotomy and congenital diaphragmatic hernia repair now being performed thoracoscopically.
Anaesthesia for paediatric thoracoscopy[ 1 2 ] is very challenging, as the paediatric anaesthesiologist has to be well versed in the various techniques of providing one lung anaesthesia and must manage the intra- and postoperative complications. Utmost vigilance is needed, as one may encounter arrhythmias (such as ventricular tachycardia, atrial fibrillation and supraventricular extrasystoles), mediastinal shift, hypertension or hypotension, and hypercapnia. Pulmonary complications include hypoxemia, hypercarbia, impaired hypoxic pulmonary vasoconstriction, re-expansion pulmonary oedema, atelectasis and pneumonia. There is always the possibility of major vessel injury and torrential bleeding. It is difficult to assess the blood loss during thoracoscopy.
1. High FiO 2 — the first line of therapy in case of hypoxemia
2. Tidal volume of 8-12 ml/kg to the ventilated lung, which prevents atelectasis
3. If the airway pressure is too high, the tidal volume may be decreased and the respiratory rate increased
4. CPAP to the operative lung
5. PEEP to the ventilated lung[ 6 ]
6. High frequency jet ventilation at low driving pressures (10-12 psi) to the operative lung
7. Clamping of the ipsilateral pulmonary artery
8. Re-inflation of the non-ventilated lung (prophylactically every 5 min)
9. Maintenance of cardiac output
CO 2 insufflation[ 7 ] into the operative hemithorax is used as a technique to facilitate collapse of the lung on the operative side. This is particularly useful in smaller patients in whom lung isolation is not possible and there is inadequate separation of the two lungs, with overflow ventilation into the operative side. Meticulous cardiopulmonary monitoring is mandatory, as displacement of intrathoracic contents and creation of an excessive pneumothorax can lead to significant cardiovascular compromise from decreased venous return or high left ventricular afterload. The effects of the artificial pneumothorax can be minimized by adding the CO 2 slowly (flow rate 1 L/min) and limiting the inflating pressure to 4 to 6 mmHg. Direct insufflation of CO 2 into the lung parenchyma can cause a sudden rise in end tidal CO 2 . Subcutaneous emphysema and CO 2 embolism can occur. Detection techniques for gas embolism that can identify the problem prior to the onset of cardiovascular changes include transoesophageal echo (0.1 ml of gas), precordial doppler (0.5 ml) and capnometric end tidal nitrogen monitoring. The combination of standard tracheal intubation with prone position and CO 2 insufflation may provide good exposure in some cases.[ 8 ]
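As a quick numeric illustration, the insufflation limits quoted above (flow rate of about 1 L/min, inflating pressure of 4 to 6 mmHg) can be expressed as a simple range check. This is a minimal sketch assuming only those two figures; the function and parameter names are illustrative, not part of any insufflator's API.

```python
# Minimal sketch, assuming only the limits quoted in the text:
# CO2 flow <= 1 L/min and inflating pressure between 4 and 6 mmHg.
# Function and parameter names are illustrative, not a device API.

def within_cited_insufflation_limits(flow_l_per_min, pressure_mmhg):
    """True if both settings respect the limits quoted in the text."""
    return flow_l_per_min <= 1.0 and 4.0 <= pressure_mmhg <= 6.0

print(within_cited_insufflation_limits(1.0, 5.0))  # True: conservative settings
print(within_cited_insufflation_limits(2.0, 8.0))  # False: exceeds both limits
```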
Once successful separation of the non-operative and operative lungs has been accomplished, anaesthesia is maintained with a combination of intravenous and inhalational anaesthetics. Isoflurane (limited to 0.5-1.0 MAC) preserves HPV. Fentanyl, Ketamine, Benzodiazepines and Barbiturates have little or no effect on HPV. Any non-specific vasodilator (e.g., Terbutaline, Albuterol, Isoproterenol, Dobutamine, Nitroglycerine and Sodium nitroprusside) can impair HPV and affect oxygenation during one lung ventilation. | CONCLUSION
In conclusion, minimal access surgery does not mean minimally invasive anaesthesia. A thorough knowledge of the physiology of one lung ventilation, meticulous planning, continuous vigilance to detect any untoward event at the earliest and good communication between the anaesthesiology and surgical teams all contribute to a safe and successful surgery. | Anaesthetic care during thoracic surgical procedures in children combines components of the knowledge bases of paediatric anaesthesia with those of thoracic anaesthesia. This article highlights the principles of anaesthesia during thoracoscopic surgery in children including preoperative evaluation, anaesthetic induction techniques, maintenance anaesthesia and options for postoperative analgesia. In addition, given the need to provide optimal surgical visualization during the procedure, one lung ventilation may be required. Techniques to provide one lung ventilation in the paediatric patient and the principles of anaesthesia care during one lung ventilation are discussed. | PRE-OPERATIVE EVALUATION
Patients presenting for thoracoscopic surgery should undergo a preoperative evaluation similar to that for open thoracotomy, with special emphasis on the degree of pulmonary and cardiac dysfunction. It is customary to obtain a complete history, physical examination and the following laboratory tests: haemoglobin, haematocrit, liver function tests, serum electrolytes and a chest X-ray. Additional preoperative evaluations such as pulmonary function tests (PFT) and ECG are not routinely indicated but are obtained based on the patient's medical history and associated underlying illness. A preoperative CT scan of the chest is useful in children with an anterior mediastinal mass. Compression of greater than 50% of the cross sectional area of the trachea on CT imaging can be used to identify the high risk population in whom general anaesthesia with loss of spontaneous ventilation can lead to total airway obstruction. Options include preoperative radiation or chemotherapy to shrink the mass, or induction of general anaesthesia while maintaining spontaneous ventilation, with cardiopulmonary bypass as a backup measure. Simple bedside spirometry (FVC, FEV1 and the FEV1/FVC ratio) may be performed in older children to assess the degree of obstructive lung disease and ensure that the minimum criteria for wedge or lung resection are satisfied.
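The 50% tracheal-compression threshold described above amounts to a simple screening rule. The sketch below is purely illustrative: the function name and the cross-sectional-area inputs are assumptions for demonstration, not part of any clinical scoring system.

```python
# Illustrative sketch of the screening rule in the text: compression of
# more than 50% of the tracheal cross-sectional area on CT flags a
# patient as high risk for general anaesthesia with loss of
# spontaneous ventilation. Inputs and names are hypothetical.

def high_risk_airway(normal_area, compressed_area):
    """True if more than half the cross-sectional area is lost."""
    fraction_lost = 1.0 - compressed_area / normal_area
    return fraction_lost > 0.5

print(high_risk_airway(100.0, 40.0))  # 60% of area lost -> True
print(high_risk_airway(100.0, 70.0))  # 30% of area lost -> False
```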
PRE-OPERATIVE PREPARATION
Chest physiotherapy, good nutrition, bronchodilator/antibiotic therapy, steroid supplementation etc. help in optimizing the patient's condition prior to surgery.
As there is always a possibility of conversion to open thoracotomy, blood should be kept in reserve.
Standard perioperative monitoring includes
ECG
Pulse oximetry
End tidal CO 2 measurement
Noninvasive BP monitoring
Continuous temperature monitoring.
The bladder is catheterised and urine output monitored when surgery is prolonged or significant blood loss expected.
Premedication and anaesthesia management
In otherwise healthy patients without airway compromise, intranasal Midazolam 0.3 mg/kg (in children without intravenous access) or rectal or oral Midazolam 0.5-0.75 mg/kg, administered 15 to 20 minutes prior to anaesthesia induction, provides anxiolysis, easy separation from parents and acceptance of the face mask.
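The per-kilogram figures above translate into absolute doses by simple multiplication. The sketch below uses only the ranges quoted in this paragraph; the function name and interface are invented for illustration, and this is an arithmetic example, not clinical guidance.

```python
# Arithmetic sketch using only the per-kg figures quoted above:
# intranasal 0.3 mg/kg; rectal or oral 0.5-0.75 mg/kg.
# Names are illustrative; this is not dosing advice.

def midazolam_premedication_mg(weight_kg, route):
    """Return the (low, high) dose range in mg for the quoted figures."""
    if route == "intranasal":
        dose = 0.3 * weight_kg
        return (dose, dose)
    if route in ("oral", "rectal"):
        return (0.5 * weight_kg, 0.75 * weight_kg)
    raise ValueError("route not covered in the text")

print(midazolam_premedication_mg(20.0, "oral"))  # (10.0, 15.0)
print(midazolam_premedication_mg(10.0, "intranasal"))
```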
Blood loss during a diagnostic thoracoscopy is usually minimal. It is, however, advisable to secure two venous access sites prior to the start of the procedure, as the surgery is performed in the lateral decubitus position.
If central venous pressure monitoring is necessary, internal or external jugular monitoring on the side of thoracoscopy is recommended. In patients with severe cardiac instability and where major haemodynamic fluctuations are expected, invasive arterial blood pressure monitoring is used. Atropine is administered as a vagolytic and antisialogogue. Antiemetics and H2 antagonists are administered in patients at risk for aspiration.
Inhalational Sevoflurane or Halothane or intravenous Thiopentone or Propofol induction is followed by a neuromuscular blocking drug to facilitate endotracheal intubation.
The intraoperative analgesic used is generally Fentanyl 1-2 microgram/kg or Pentazocine 0.6 mg/kg.
Anaesthesia is maintained either with inhalational agents or with infusions of Propofol. The patient is kept on controlled ventilation using short acting muscle relaxants.
The goals of anaesthesia include: minimizing airway reactivity, optimizing gas exchange, maintaining stable cardiovascular function, preventing ventilatory depression and providing adequate pain relief in the postoperative period.
Anaesthesia technique for thoracoscopy
A variety of anaesthesia techniques can be used for thoracoscopy. Older children (> 8 years of age or weight > 30-35 kg) can be managed using most of the techniques used in adults. Smaller children require special techniques for isolation of the operative lung.
Local anaesthesia may be possible in older adolescents. This technique is usually reserved for brief procedures without involved intrathoracic surgical manipulation, in ill patients with an unacceptable risk of perioperative morbidity following general anaesthesia. Following IV sedation, the lateral chest wall and parietal pleura are infiltrated with local anaesthetic to provide anaesthesia for trocar placement. Regional techniques include thoracic epidural anaesthesia, thoracic paravertebral blockade, multiple intercostal blocks or intrapleural analgesia. The stellate ganglion block also temporarily eliminates the cough reflex which can be elicited during manipulation of the pulmonary hilum. Regional anaesthesia techniques and local anaesthesia with sedation offer the advantage of maintaining spontaneous ventilation and interfere less with surgical exposure. However, patients with significant pulmonary disease are sometimes unable to compensate for the temporary loss of pulmonary surface area due to partial collapse of the lung on the side of the thoracoscopy. Local and regional techniques are possible only in the older age group; in the majority of cases, thoracoscopy is performed under general anaesthesia with lung isolation techniques whenever feasible.

General anaesthesia and one lung ventilation: With general anaesthesia and positive pressure ventilation, intrathoracic visualization and surgical access can be impaired by lung movement. To overcome this problem, thoracoscopy is performed using techniques to isolate the lung and provide one lung ventilation. This allows the lung on the operative side to be collapsed and motionless, facilitating exposure and surgical instrumentation, while gas exchange (oxygenation and CO 2 elimination) is maintained by ventilating the non-operative dependent lung.
Techniques for one lung ventilation in children.[ 3 ]( Table 1 )
Selective mainstem intubation
Double lumen endotracheal tube (DLT)
Bronchial blockers
Univent endotracheal tube
Selective mainstem intubation is a simple and quickly achieved means of one lung ventilation in patients whose small size precludes placement of a DLT or Univent tube. The tracheal tube should be one half size smaller than usual for the patient's age, as the diameter of the mainstem bronchus is smaller than that of the trachea. Bronchoscopic guidance or fluoroscopy can aid correct placement. As an uncuffed tracheal tube might not be totally occlusive, a cuffed tracheal tube is recommended in patients > 2 years of age to avoid soiling and inadvertent ventilation of the operative side ( Figure 1 ).

Double lumen tube: When patient size permits, a DLT is preferable as it has advantages over other techniques:
Rapidly and easily separating the lungs
Allowing for suctioning of both lungs
Providing a rapid switch to two lung ventilation as necessary, based on the patient's status
Improving oxygenation by applying CPAP to the operative lung and PEEP to the non-operative lung.
In children, left sided DLTs are used almost exclusively because they are easier to place and eliminate concern about obstruction of the right upper lobe bronchus. However, specialized paediatric bronchoscopes are needed to confirm correct placement. As the smallest commercially available DLT is a 26 Fr tube, placement in patients weighing less than 30-35 kg or younger than 8-10 years of age is not feasible ( Figure 2 ).

Bronchial blockers: A bronchial blocker, e.g., a Fogarty embolectomy catheter[ 4 ] ( Table 2 ), Swan-Ganz catheter or Arndt bronchial blocker, can be placed in the mainstem bronchus of the operative side blindly, using X-ray guidance, or under direct vision with a fibreoptic bronchoscope. All these devices have a balloon at the end that is inflated to occlude the bronchus of the operative lung. With an inflated blocker balloon, the airway is completely sealed, providing more predictable lung collapse and better operating conditions than with an endotracheal tube in the bronchus.
Those devices with a central channel provide the advantage of allowing some degree of suctioning through the channel (not to clear the lung of secretions, as the channel is too small for that purpose, but to deflate the operative lung or to apply continuous positive airway pressure). A potential problem is dislodgement of the blocker balloon into the trachea; the inflated balloon will then block ventilation to both lungs. When closed tip blockers are used, the operative lung cannot be suctioned, the lung may not deflate completely and continuous positive airway pressure cannot be provided to the operative lung if needed ( Figure 3 ).

Univent tube: The Univent tube is a single lumen tracheal tube with a movable bronchial blocker incorporated into a channel placed alongside the tube. Advantages of the Univent tube include easy placement, the ability to change intermittently from one to two lung ventilation and a channel through the bronchial blocker that permits oxygen insufflation into the operative lung during one lung ventilation. DLTs merely separate the right lung from the left, whereas with the Univent tube there is a facility to selectively block a lobar or even a segmental bronchus. Since the bronchial blocker is incorporated into the tube, displacement is less likely. Disadvantages: the large cross sectional area occupied by the blocker channel causes high resistance to ventilation, and the low volume, high pressure characteristics of the blocker balloon can result in mucosal injury. The Univent tube is marketed in 0.5 mm increments from ID 6 mm to 9 mm; paediatric sizes of 3.5 and 4.5 mm ID are now also available. Univent tubes require a fibreoptic bronchoscope for successful placement. Paediatric bronchoscopes are 3.5-4 mm in diameter and slightly shorter than adult bronchoscopes. An ultrathin bronchoscope, 2.2 mm in diameter, is also available and can be used with the smallest of endotracheal tubes.
In units where paediatric fibrescopes are not available, rigid bronchoscopy may be required to accurately position bronchial blockers.( Figure 3 )
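The size cut-offs scattered through this section (a DLT only above roughly 30-35 kg or 8-10 years; a cuffed tube for mainstem intubation above 2 years) can be collected into one selection sketch. The function below merely illustrates that decision logic with the thresholds quoted in the text; it is not a clinical algorithm, and the names are invented for illustration.

```python
# Sketch of the size-based selection described in the text. Thresholds
# (35 kg / 10 years for a DLT, 2 years for a cuffed tube) come from the
# figures quoted above; the function itself is purely illustrative.

def olv_technique(age_years, weight_kg):
    if weight_kg >= 35 or age_years >= 10:
        return "double lumen tube"
    if age_years > 2:
        return "mainstem intubation (cuffed tube) or bronchial blocker"
    return "mainstem intubation (uncuffed tube) or bronchial blocker"

print(olv_technique(12, 40))  # double lumen tube
print(olv_technique(5, 18))   # mainstem intubation (cuffed tube) or bronchial blocker
print(olv_technique(1, 10))   # mainstem intubation (uncuffed tube) or bronchial blocker
```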
Perioperative management
Physiology of One Lung Ventilation (OLV)[ 5 ]
OLV provides excellent surgical conditions and is associated with a low incidence of accidental lung injury. After the endobronchial tube is placed, CO2 insufflation augments lung collapse and provides additional lung protection during the insertion of endoscopic instruments.
Following partial lung collapse, hypoxic pulmonary vasoconstriction (HPV) increases pulmonary vascular resistance with consequent re-routing of blood to the well ventilated lung zones. This normal physiologic response to atelectasis decreases ventilation perfusion mismatching and improves arterial oxygenation. However, when more than 70% of the lung is atelectatic (as in OLV), HPV is obliterated.
After collapse of the operative upper lung, all the ventilation passes to the lower lung, but blood flow, though reduced, persists in the nonventilated lung. This blood flow does not participate in oxygenation and represents a right to left transpulmonary shunt, which accounts for the decrease in arterial PO2 seen during OLV. Preexisting pulmonary disease, elevation of the diaphragm, and compression of the thoracic cavity by the mediastinum, abdominal contents, and the rolls and packs used to facilitate positioning of the patient all contribute to a disparity between ventilation and perfusion in the ventilated lung, thus increasing the shunt fraction still further.
The physiologic basis for haemodynamic instability during OLV is multi-factorial. As the chest cavity is closed, rapid and excessive CO2 insufflation can create a tension pneumothorax. The resultant compression of the lungs and great vessels could decrease venous return and stroke volume with resultant hypotension. Gas insufflation can activate pulmonary stretch receptors and increase vagal tone with consequent bradycardia and also can cause mediastinal shifting and cardiac tamponade.
Post-operative complications[ 9 ]
Persistent air leak is the most common complication following VATS and can lead to subcutaneous emphysema, residual pneumothorax or recurrent pneumothorax.
Down Lung syndrome is the term for increased secretions and pneumonia that can develop postoperatively in either lung following OLV.
Infection ranges from a local wound infection to a pulmonary abscess or empyema.
Horner syndrome
Dissemination of malignant disease
Lung herniation through the chest wall
Recurrent laryngeal nerve injury is seen more with mediastinoscopy. If suspected, the possibility of airway obstruction should be kept in mind.
Post-operative pain control[ 10 ]
Thoracoscopic procedures offer the advantage of small incisions without either splitting of the serratus anterior or latissimus dorsi muscles or spreading of the ribs, two techniques which markedly contribute to postoperative pain. In order to minimize pain, patients breathe rapidly with small tidal volumes. This type of breathing promotes atelectasis, retention of secretions, decrease in functional residual capacity and increase in V/Q mismatching all of which contribute to hypoxemia. Good pain relief by any method is mandatory.
Oral: Nonsteroidal anti-inflammatory drugs, Paracetamol etc.
Rectal suppositories: Paracetamol and Diclofenac provide long lasting analgesia and reduce opioid requirements.
Intravenous route: Most patients, especially those who have undergone pleural procedures such as decortication, pleurectomy or pleurodesis, require potent parenteral opioids for the first 24 hours. IV opioids via a PCA pump using Morphine or Fentanyl provide satisfactory analgesia, especially in older children who are able to comprehend and express pain.
Intercostal nerve blocks or intrapleural instillation of Bupivacaine relieves pain from chest tubes or instrument insertion points.
Epidural analgesia: The administration of neuraxial opioids or local anaesthetics through an epidural catheter is usually unnecessary after a straightforward thoracoscopy but is almost always used if the thoracoscopic procedure is converted to an open thoracotomy.
In our setup we have found that rectal suppositories, IV Tramadol (1-2 mg/kg) 6-8 hourly and local infiltration of 0.25% Bupivacaine at the port sites provide satisfactory analgesia following thoracoscopy. | The authors thank Dr. Uday Bhat, Associate Professor, Department of Plastic Surgery for the excellent schematic diagrams. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2005 Mar; 1(1):8-14 | oa_package/59/17/PMC3016478.tar.gz |
||
PMC3016479 | 21234145 | It was a privilege to be able to review the article, “Laparoscopic reversal of Hartmann procedure” prior to its publication. The findings discussed are very similar to our early experience using the laparoscopic approach to reversing a colostomy.
The Hartmann procedure remains a standard operation for left-sided colon pathology that is not amenable to immediate reanastomosis. Reversing the colostomy to reestablish intestinal continuity is a major abdominal operation that historically results in extended recovery, prolonged hospital stays, significant rates of morbidity and mortality, and long-term complications such as hernias. Because of the risks associated with the operation, historically almost 50% of patients choose to forego colostomy reversal and keep their colostomy despite the physical and psychological challenges associated with it. We recently performed a study comparing our outcomes for 22 laparoscopic versus 22 open colostomy reversals, and our results were comparable to those presented in this article. We demonstrated significantly less intraoperative blood loss, fewer postoperative complications, quicker return of bowel function and shorter hospital stays for the laparoscopic group. Based on these findings, we believe that a laparoscopic approach to colostomy reversal may increase the willingness of patients with a left-sided colostomy to undergo reanastomosis.
Our surgical technique for the laparoscopic reversal of a left-sided colostomy is similar to that discussed in the article with a few small differences. All patients receive a preoperative bowel preparation and an enema to evacuate the rectal stump. The patients are placed in the modified lithotomy position and a three-way Foley catheter is placed. Our port placement is different from that reported in this article. Port placement is related to the location of the prior abdominal incisions and the stoma. Either the colostomy site is used or an open cut down technique is performed to access the peritoneal cavity. If the prior midline incision extends to the epigastrium, the initial port is placed at the colostomy site. The colostomy is mobilized and the most distal segment of the colostomy is transected with a stapler at the mucocutaneous junction. The stapled colon is dropped back into the abdomen and a 10 mm balloon tipped trocar is placed in the prior colostomy site.
If a lower midline incision is present, initial access to the peritoneal cavity is typically gained with an open technique under direct vision in the left upper quadrant. A 5- or 10 mm port is placed in this incision. Typically, three ports are used in total with the third being located to the left of the midline superiorly. The benefit of these port placements is that the dense adhesions frequently found along the prior midline incision can be avoided and adhesiolysis minimized. Sharp adhesiolysis is performed to mobilize the splenic flexure and left colon. Excessive adhesiolysis of the prior midline incision is avoided. Next, the rectal stump is identified. If polypropylene sutures were placed on the rectum at the time of initial operation, they can greatly aid in the localization of the rectal stump. An additional aid in delineating the rectum is to insert a rectal dilator or rigid sigmoidoscope transanally. Once the rectal stump is visualized, it is dissected as needed to enable a stapled anastomosis. With extensive adhesions in the pelvis and in women who have had a prior hysterectomy, the bladder can be adherent to the rectum. This relationship can be difficult to interpret laparoscopically. At this point, 300-400 ml of saline is instilled through the three-way Foley catheter to ensure that the rectum is safely freed from it. The stapled left colon is then brought out through the prior ostomy site after the 10 mm balloon tip trocar is removed. The anvil for the circular stapler is secured within the left colon lumen. The colon is then returned to the abdomen and the balloon port is replaced. The circular stapled anastomosis is then performed under direct laparoscopic visualization.
Using this technique allows for certain advantages over an open technique and may explain the reduction in morbidity that we experienced in our study. As this article mentions, the laparoscopic approach gives better visualization of the splenic flexure, allowing for routine mobilization. Mobilization of the splenic flexure in an open technique requires a larger midline incision, which can lead to an increased risk of postoperative morbidity. Splenic flexure mobilization is often required in this operation to reduce tension on the anastomosis, with its resultant risks of anastomotic dehiscence or stricture formation.
Avoiding the previous midline incision by accessing the abdomen laterally can help prevent the possibility of bowel injury. In our cases, the abdomen is always entered at a site remote from prior incisions. Keeping our ports on the left side of the body, the midline is often completely avoided. By doing this we have had no visceral injuries in our series.
There are additional long-term complications which we believe are reduced by using a laparoscopic approach. These include the formation of incisional hernias and, possibly, small bowel obstruction from additional adhesive disease. These have not been fully evaluated yet, as longer follow-up periods are needed. The use of the laparoscopic technique for reversal of colostomies appears to offer distinct advantages over the open approach. It should be made clear, however, that this operation does require an experienced laparoscopic surgeon. Our conversion rate was 9%, which is similar to the rate reported in this article. Reasons for conversion include dense adhesions or inability to mobilize the rectal stump adequately; both of these can be determined with diagnostic laparoscopy and often do not require an extended period of time. Our experience and findings are in agreement with those put forth in this article. With less morbidity, shorter hospital stays and quicker recovery periods, laparoscopic colostomy closure should be attempted by surgeons with advanced laparoscopic skills.
|||||||
PMC3016480 | 21234146 | Minimally invasive surgery has been one of the most important surgical innovations over the last 15 years. In 1990, not only was the first laparoscopic sigmoid resection described, but also the first laparoscopic anterior rectal resection, the first laparoscopic right hemicolectomy and the first laparoscopic stoma creation. Since then, laparoscopic colorectal surgery has found broad acceptance in the world.
The laparoscopic Hartmann reanastomosis procedure, first described in 1993, however, only played a marginal role at that time. The international literature currently offers only a few publications, mostly case reports and small series of 10 to 20 patients. Thus, the results of a total of approximately 100 patients who underwent laparoscopic Hartmann reanastomosis have been published to date.[ 1 ]
The small number of patients included in our own analyses emphasizes the minor importance of laparoscopic reanastomosis after the Hartmann operation. The prospective multicenter study of the Laparoscopic Colorectal Surgery Study Group is one of the biggest worldwide registries of laparoscopically operated colorectal patients and includes more than 5000 patients in Central Europe registered between 1995 and 2005. Only 71 patients in this study underwent a laparoscopic Hartmann reversal. This group represents only 1.4% of the total laparoscopic operations and therefore remains a marginal indication.
The morbidity in these cases was 16.9% and the mortality 1.4%; neither represents an unusually high complication rate. In contrast, a reoperation rate of 8.4% was found. The mean operating time was 164 min (range 60-410 min) and thus is longer than for elective sigmoid resections with primary anastomosis.[ 2 ]
The complication rate of 14.1% and the conversion rate of 12.7% were significantly increased, generally due to extensive intraabdominal adhesions. Since in Europe the indication for laparoscopic reanastomosis after the Hartmann operation is almost solely sigmoid diverticulitis with peritonitis, strong adhesions have to be expected in such cases. Although conversion to laparotomy is not a complication per se, it is important to mention that conversion is associated with a significantly poorer outcome. On the other hand, patient selection in order to avoid high conversion rates is almost impossible.
In the subsequent work, V. Golash complements the available results with his experience with 12 patients who underwent a laparoscopic Hartmann reversal. Particularly with regard to mortality and conversion, his results are comparable with those in the available literature and with our own experience. He shows that this laparoscopic procedure is safe and efficient.
Despite the potentially low postoperative morbidity and mortality, short hospital stay, reduced postoperative pain and accelerated convalescence, this procedure is technically challenging and should therefore be reserved for experienced minimally invasive surgeons. The reasons for this are a relatively long operating time as well as significantly higher conversion and intraoperative complication rates.
Regardless of the fact that specialized centers are able to achieve outstanding results with this procedure, we would like to emphasize that a laparoscopic Hartmann reversal currently cannot be recommended in general. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2006 Dec; 2(4):203-204 | oa_package/40/b5/PMC3016480.tar.gz |
|||||||
PMC3016481 | 21234147 | RESULTS
Extent of oncological resection in laparoscopy versus open surgery
The most important aspect, if the feasibility of laparoscopy is to be assessed in colorectal malignancies, is to prove unconditionally that the oncological resection, i.e., the margins of resection (proximal, distal and circumferential) and the number of nodes harvested, is comparable, if not superior, to that of open surgery.
Numerous studies[ 2 3 13 ] in the 1990s had shown that the number of lymph nodes harvested was comparable between open and laparoscopic surgery. Melotti et al also concluded that the distance of the tumor from resection margins and the number of lymph nodes harvested with the operative specimen did not vary from those obtained in open surgery.[ 14 ] A recent meta-analysis[ 15 ] showed that the number of lymph nodes harvested was higher in the laparoscopic group, although the difference did not attain statistical significance. Korolija et al , however, noted that the difference in distal resection margins (4.6 cm in the lap group versus 5.3 cm in the open group) was statistically significant in favor of open surgery. On the basis of this correlation they concluded that laparoscopic surgery was as adequate as the conventional approach. Bretagnol et al have shown that R0 resections could be achieved in 93% of patients undergoing laparoscopic low colorectal and coloanal surgeries.[ 16 ]
Safety and complications
The introduction of laparoscopy into the armamentarium of surgery for colorectal cancer has brought, along with the novel idea of minimal access, a novel set of complications associated with the creation of pneumoperitoneum, port placement and diathermy use that require a considerable amount of skill as well as specialized training if they are to be prevented. Table 2 lists a few of the specific complications associated with laparoscopic colorectal surgery.[ 17 ] Six randomized controlled trials[ 18 – 23 ] comparing complication rates did not find any significant difference between laparoscopic and open surgery. In fact, a few did show a lower morbidity in favor of the laparoscopy group. Lacy et al [ 18 ] concluded that while the operative time was consistently longer in the laparoscopic arm, the peri-operative blood loss as well as morbidity were significantly lower in the laparoscopic arm. Conversion rates vary from 1%[ 24 ] to 29%.[ 25 ] The conclusion is clear: laparoscopic surgery, if performed by a trained, skillful surgeon, will produce results comparable to open surgery. Poor technique is responsible for the complications encountered and does not reflect an inherent errant potential in laparoscopic surgery. Conversion is not a complication and must be resorted to whenever required.[ 17 ]
Short-term outcomes
Table 3 shows a comparison between various randomized trials comparing short-term outcomes viz, blood loss, analgesic requirement and operative time.
Braga et al ,[ 29 ] in their randomized study on short-term outcomes in laparoscopic and open surgery, while analyzing parameters such as wound infection rates, anastomotic leak rates, operative time, blood loss, lymphocyte proliferation in response to Candida albicans and phytohemagglutinin and return to full activity, found that the laparoscopic arm had a significant decrease in the 30-day postoperative morbidity rates compared with the open arm. They also concluded that laparoscopic-assisted colorectal surgeries are associated with better preservation of lymphocyte proliferation indices and gut oxygen tension. Tang et al ,[ 30 ] have shown that there is no difference in the systemic immune response of patients having laparoscopically assisted colectomy and those undergoing conventional open surgery for colorectal cancer. The Cochrane review[ 31 ] on the short-term benefits for laparoscopic colorectal resections which analyzed 22 trials and 2965 participants, concluded that whilst the results available favored laparoscopic colorectal resection, only seven of the trials had more than 100 patients. The reviewers believed that the final verdict could only be given after the multicenter trials viz, COLOR, MRC CLASICC and LAPKON II (Germany). The results of the COLOR study[ 23 ] have been summarized in Table 3 , while the MRC CLASICC[ 25 ] has concluded that laparoscopic-assisted surgery for colon cancer is as effective as open surgery and is likely to produce similar long-term outcomes. However, impaired short-term outcomes after laparoscopic-assisted anterior resection (including CRM positivity) for cancer of the rectum do not justify its routine use.
Long-term outcomes
Laparoscopic colorectal surgeries have constantly been under scrutiny with respect to the long-term outcomes - survival data and recurrence rates. The data available[ 32 – 35 ] has shown long-term survival comparable to conventional open surgery. Local recurrence rates vary from 0-6.6%.[ 18 36 37 ] Capusotti et al ,[ 33 ] have even found a better outcome for node - positive patients treated by laparoscopy. Jacob et al ,[ 34 ] have in fact shown better results in the patients undergoing laparoscopic resections. A recent systematic review[ 38 ] has shown no difference between laparoscopy and open surgery with regard to the long-term outcome. Other, less powered studies,[ 39 40 ] have shown a better outcome for the laparoscopy arm patients. However, multicenter randomized trials are needed to confirm or refute these results.
Port site recurrence
After the first reported port site metastasis in 1978,[ 41 ] numerous studies have been carried out to determine whether laparoscopy is actually associated with an increased incidence of port site recurrence/metastasis. Initial reports showed a high incidence of port site recurrence.[ 5 – 9 ] The possible mechanisms leading to port site metastasis are summarized in Table 4 .[ 42 ] In a review published in 1998, Neuhaus et al.[ 42 ] strongly suggested an increased incidence of port site metastases due to laparoscopic surgery, warning that in view of the findings, laparoscopic surgery for colorectal malignancies should occur only within the context of clinical trials. In a prospective randomized controlled trial, Lacy et al.[ 43 ] found no port site recurrences in the 91 patients studied, in either the laparoscopic or the open surgical arm. Many studies[ 17 44 – 46 ] found a lower prevalence of port site recurrence than previously reported and recognized that the incidence corresponds to the wound recurrence seen in open surgery. According to Melotti et al.,[ 14 ] the incidence of port site metastases varies from 0 in recent studies to 21.4% in other limited series.[ 7 ] Data in support of laparoscopic surgery have steadily increased, attributing the initial reports of poor outcome to poor surgical technique. Recommended precautions include avoiding contact between laparoscopic instruments and the tumor by bagging the specimen, use of the “no-touch” isolation technique suggested by RB Turnbull Jr,[ 14 ] meticulous lavage of all wounds with a cytocidal agent,[ 2 ] widening the port of extraction of the specimen and the use of wound protectors.[ 17 ]
Quality of life issues
While the operative time for laparoscopic surgery is longer than that for open surgery, several beneficial outcomes result directly from the use of laparoscopy. As there is no large abdominal incision, postoperative pain and the ensuing need for analgesia are reduced.[ 20 23 27 47 48 ] As the wound is smaller, the likelihood of wound infection is lower.[ 29 49 ] This becomes significant when the patient is a candidate for adjuvant chemotherapy, since a wound infection can delay its institution. The COST study showed better short-term quality of life, with recurrence and survival rates equivalent for both groups and for all tumor stages.[ 27 ] The median hospital stay and the need for parenteral antibiotics were also shown to be lower in the laparoscopy group. The validity of this shortened hospital stay, though, has been questioned, since the stay is also affected by the presence of an anastomosis and the age of the patient.
The incidence of postoperative small bowel related problems, including adhesive obstruction, and of postoperative ventral hernias has also been lower in the laparoscopically resected group of patients.[ 50 ]
Injury to the pelvic autonomic nerves during dissection for rectal cancer is associated with bladder and sexual dysfunction. An increased rate of such dysfunction has been noted in some studies,[ 51 – 53 ] possibly attributable to the higher proportion of complete total mesorectal excisions (TMEs) done by laparoscopy. More trials are required in this area.
Laparoscopy in early lower GI cancers
Laparoscopy has been shown to have an important role in early colonic carcinomas, viz. laparoscopic-assisted colonoscopic polypectomy, laparoscopic wedge resection and laparoscopic colostomies, with a 67-100% success rate in avoiding a formal bowel resection. This can be aided by preoperative colonoscopic tattooing for localization.[ 54 ]

CONCLUSION
According to the data available, laparoscopic surgery definitely appears to have a role in colonic malignancies. The short-term and long-term outcomes clearly favor this approach. It should be realized, though, that the benefits of laparoscopic surgery can be obtained only when performed by persons trained in the art of laparoscopy so as to avoid unnecessary morbidity and even the risk of mortality. As for rectal cancer, the present data on the role of laparoscopy is not mature enough, especially for anterior resections. At present, laparoscopic anterior resection should only be considered within the context of clinical trials. | Laparoscopy was introduced more than 15 years ago into clinical practice. However, its role in colorectal surgery was not well established for want of better skills and technology. This coupled with high incidences of port site recurrences, prevented laparoscopic surgery from being incorporated into mainstream colorectal cancer surgery. A recent increase in the number of reports, retrospective analyses, randomized trials and multicentric trials has now provided sufficient data to support the role of laparoscopy in colorectal cancer surgery. We, thus, present a review of the published data on the feasibility, safety, short - and long-term outcomes following laparoscopic surgery for colorectal cancers. While the data available strongly favors the use of laparoscopic surgery in colonic cancer, larger well powered studies are required to prove or disprove its role in rectal cancer. | The morbidity and mortality associated with surgical procedures performed on the colon and rectum have plagued surgeons since time immemorial. As surgeons began to perform open colorectal surgery with increasing confidence these very same problems assumed high significance until the principles of antisepsis and the refinements in surgical technique began to make a considerable dent in these negative outcomes. 
At this same time, surgeons had already explored the role of minimal invasiveness in treating colorectal problems, viz. the use of the sigmoidoscope to deflate a sigmoid volvulus, perineal procedures for prolapse in elderly persons, etc. Laparoscopic cholecystectomy, first performed by Philipe Mouret in France in 1987 during a laparoscopic gynecologic operation, rapidly established its role in place of open surgery.[ 1 ] However, laparoscopic large bowel surgery did not, for a long time, receive the same degree of acceptance from the surgical community, although a few of the initial series had shown promising results. We therefore review the problems that were, and still are, associated with laparoscopic surgery for colorectal cancer, and attempt an evidence-based review of the literature to suggest where laparoscopic surgery stands today in the field of colorectal cancers.
Problems associated with laparoscopic surgery
Laparoscopy for colorectal cancers has not gained universal acceptance for a number of reasons:[ 2 3 ]

- Questions on feasibility: laparoscopic large bowel surgery is much more complicated than laparoscopic cholecystectomy, with a steeper learning curve, and requires more advanced laparoscopic technique.
- Early reports of recurrence/implantation in laparoscopic port sites.[ 4 – 8 ]
- The presence of an anastomosis or stoma prevents rapid discharge from hospital.
- Doubts about the efficacy of large bowel laparoscopic surgery for cancer with respect to factors like tumor clearance, and the fear that laparoscopic surgery enhances tumor dissemination.
- Safety: laparoscopic surgery carries with it not only some of the general complications encountered with surgery on the bowel, but also a set of unique complications that occur more often, or sometimes exclusively, in laparoscopic surgery, e.g., pneumothorax, gas embolism, port site hernias, etc.
Historical discoveries in the advancement of laparoscopy for colorectal surgery[ 10 ]
Advances in laparoscopic technique have shifted the role of laparoscopic surgery in colorectal diseases from its initially intended role as an adjunct to open surgery to that of an important operative modality. The first was the realization of the benefit of such a procedure in elderly patients, in whom the morbidity of the large incision of open surgery can be overcome by the reduction in pain and overall bacterial contamination. Videolaparoscopic techniques in colorectal surgery were used for the first time in 1990 by Moises Jacobs in Miami, Florida, while performing a right hemicolectomy.[ 11 ] The development of a circular stapling device for colostomy closure permitted the first laparoscopic colostomy closure, performed in 1990 by Joseph Uddo. The development of a laparoscopic intestinal stapler meant that, for the first time, the bowel could be transected intraperitoneally; Dennis Fowler successfully demonstrated this in 1990 when he performed the first laparoscopic sigmoid resection. Subsequent years witnessed more technical innovations that have made laparoscopic surgeries on the colon and rectum feasible.
Contraindications to laparoscopic colorectal surgery [ Table 1 ][ 12 ]

J Minim Access Surg. 2006 Dec; 2(4):205-210
INTRODUCTION
The indications for the Hartmann procedure have been greatly reduced in recent years owing to the changing trend toward primary anastomosis in the management of acute diverticular disease, the main remaining indications being intra-abdominal sepsis, trauma, volvulus and malignancy. Although done as a temporary procedure, the reversal rate is not very high; nearly 40% of patients will not have the reversal.[ 1 2 ] Reversal of the Hartmann procedure is a major undertaking and, owing to its associated morbidity and mortality, many patients are left with a permanent colostomy and many others elect not to have the reversal. Advances in laparoscopy and stapler anastomosis have made the reversal simpler and easier.

MATERIALS AND METHODS
Between July 2002 and October 2005, reversal of the Hartmann procedure was performed in 15 patients. With our increasing experience in advanced laparoscopic surgery, laparoscopic reversal of the Hartmann procedure was attempted in 12 consecutive patients. The laparoscopy group comprised 8 male and 4 female patients aged 18-65 years (mean age 40). Ninety-one percent of the Omani population is below the age of 50 years and only 4.9% is over 60 years. Because the population is predominantly young, the number of patients seen with cancer of the colon and diverticular disease is low, but the incidence of road traffic accidents is high. In the 12 patients selected for laparoscopic closure, the Hartmann procedure had been performed for perforated diverticular disease with intra-abdominal sepsis in 6 patients, traumatic rupture of the left colon and rectum following road traffic accident in 4 patients, stricture of the left colon at the site of a previous anastomosis in 1 and obstructed cancer of the left colon in another. Eleven patients had their Hartmann procedure performed by conventional laparotomy and 1 patient by laparoscopy. The diseased bowel was excised at the time of initial surgery. A few long nonabsorbable monofilament sutures were left on the rectal stump for identification later, at the time of reversal. The time to reversal varied from 70 to 220 days (mean 130 days). A routine barium enema was done in all patients with diverticular disease and colon cancer, both through the colostomy and through the rectal stump, before the reversal to rule out residual pathology in the proximal colon and to assess the rectal segment. Barium studies showed diverticula in the distal segment of the colon, apparently from incomplete excision at the previous operation. Colonoscopy/flexible sigmoidoscopy are alternatives to barium studies but, until further prospective data are available, no firm recommendation can be made.
Barium studies and endoscopy are complementary to each other. The proximal bowel and the rectal stump were prepared by mechanical cleansing prior to surgery. All patients received a single dose of ceftriaxone and metronidazole at the induction of anesthesia. Further mobilization of the colon was done laparoscopically and through the colostomy wound. The adhesions varied from mild to severe, requiring adhesiolysis. Localization and mobilization of the rectal stump was straightforward in 5 patients and difficult in 5 patients. In 2 patients - one with obstructed left colon cancer and the other a morbidly obese lady with previous perforated diverticular disease - the adhesions were so severe that it was not possible to access the rectal stump, and the procedures were converted to open. The operation time varied from 65 to 180 min (mean 90 min).
Technique
A diagnostic laparoscopy was done first to assess the feasibility of the procedure. A 10 mm port in the right upper quadrant (optical port) and a 5 mm port in the right iliac fossa were inserted in the midclavicular line. Adhesiolysis was performed and the rectal stump was identified. A sigmoidoscope was inserted per rectum as a bougie to help localize the rectal stump [ Figure 1 ]. Generally, not much dissection and mobilization was done for the rectal stump; only adhesions were cleared anteriorly for the anastomosis. The long nonabsorbable monofilament sutures helped in localization of the rectal stump [ Figure 1 ]. A 5 mm port was sometimes required in the suprapubic area to facilitate the adhesiolysis, mobilization and anastomosis. The splenic flexure was mobilized in all the patients. The colostomy was fully mobilized extracorporeally, aided intracorporeally by adhesiolysis around it. The anvil was inserted into the proximal colon through the colostomy and the proximal colon was returned to the abdominal cavity [ Figure 2 ]. The colostomy wound was closed and this wound was utilized for the 10-mm port [Figures 3 and 4 ]. An end-to-end intracorporeal anastomosis was performed with a circular stapler (Proximate ILS size 29, Johnson and Johnson). The shaft of the circular stapler was inserted through the rectal stump and engaged into the anvil in the proximal colon [ Figure 5 ]. Finally, the pelvic cavity was filled with saline and an air leak test was done by insufflating air into the rectum while visualizing the anastomosis with the sigmoidoscope. A thorough lavage was done on completion of the procedure.

RESULTS
There were no anastomotic dehiscences, postoperative complications or mortality. The 2 patients in whom the procedure could not be completed laparoscopically were converted to open. No patient required blood transfusion during surgery. Patients were allowed clear fluids from the second postoperative day, and normal bowel activity was achieved within 3 to 5 days. No patient required a temporary colostomy or ileostomy. There was less postoperative pain, probably because of the minimal wound size. No complications were reported over 6 months to 3 years of follow-up. The hospital stay ranged from 4 to 11 days (mean 7 days).

DISCUSSION
The Hartmann procedure is usually done as a temporary emergency procedure for left-sided colonic pathology when conditions are not ideal for a primary anastomosis. It is a faster and safer operation in the setting of poor general status and adverse local abdominal conditions, and a large number of these patients have associated medical comorbidities. Although done mostly for perforated diverticular disease, it can serve as a permanent colostomy, especially in patients with obstructing carcinoma of the middle and upper third of the rectum. It has also been accepted as a curative emergency procedure for obstructive and perforated left colonic malignancy. It has the advantage of removing the diseased bowel at the first stage with no risk of primary anastomotic leak, thereby eliminating anastomotic, perineal and functional problems. The role of the Hartmann procedure in emergency surgery is controversial, as several therapeutic alternatives have emerged, relegating its use. On-table lavage with primary anastomosis, primary anastomosis with proximal diverting ileostomy and primary anastomosis with proximal diverting colostomy have given better results in terms of morbidity and mortality compared to the Hartmann procedure.[ 3 – 5 ] The morbidity and mortality of the Hartmann procedure are higher than those of these alternatives partly because it is done on poor-risk patients, often by junior duty surgeons, in the face of diffuse peritonitis. Primary anastomosis and the Hartmann procedure are not competing operations but are situation-dependent. There are no prospective randomized studies comparing both therapeutic options, which fit different groups of patients whose disease differs with respect to etiology, localization and severity.
Armbruster et al have analyzed the literature of the past few years and have defined some criteria for deciding when to exclude primary anastomosis: MPI >20, APACHE II score >15, preoperative organ insufficiency, Hinchey grade III or IV and ASA score IV.[ 6 ] Despite the documented morbidity and mortality associated with its reversal, the Hartmann procedure remains a favored procedure in emergency situations where primary anastomosis is considered unsafe.
The subsequent restoration of intestinal continuity is desirable to improve the quality of life but can be technically challenging. The optimal timing for the reversal is controversial, but the operative difficulties appear to be less after a delay of 15 weeks. Conventionally, the reversal was done by the open method, typically requiring a laparotomy, and was associated with considerable morbidity and mortality; one-third of the patients were left with a permanent colostomy. There is also reluctance on the part of surgeons to undertake this potentially difficult but avoidable operation. Advances in laparoscopy and stapler technology are changing the attitude of surgeons and have made this reversal safer and easier, with an increase in the reversal percentage.
Technical issues
There are several laparoscopic techniques for reversal of the Hartmann procedure. The principle common to all is a tension-free intracorporeal stapler anastomosis. This is ensured by mobilization of the splenic flexure and division of the left colonic vessels, as we have done in all of our patients. In case of colonic resection, the specimen can be brought out through a small left lower quadrant incision or a transverse suprapubic incision. We performed a diagnostic laparoscopy first, through lateral ports, to assess the severity of adhesions and the rectal stump; this helped us decide on the feasibility of laparoscopic reversal. Introduction of the circular stapler into the rectal stump helps in identification and mobilization of the stump. Others have mobilized the colostomy first and used the colostomy site as the first port, or used a standard umbilical port.[ 7 ] Lucarini et al have done the reversal by a laparoscopic-assisted method using the Dexterity Pneumo Sleeve device and a minilaparotomy for colostomy mobilization.[ 8 ] Jacob et al have demonstrated in a canine model that it is possible to reverse the colostomy 1 week postoperatively using a dual endoscopic-assisted, computer-mediated circular stapling device.[ 9 ] Others have used a circular anastomotic device in which the bowel lumen remains closed, to minimize contamination. Bossotti et al have described a novel technique of gasless laparoscopic-assisted colostomy closure, which is safer in elderly patients with cardiovascular disease and reduces trauma, postoperative pain and bacterial contamination; with this technique, it is possible to use traditional instruments, reducing the cost of the operation.[ 10 ]
Results
Laparoscopic reversal has shown lower morbidity and mortality. The published results and literature review have shown that, compared to open reversal, there is less intraoperative blood loss, a shorter hospital stay, a lower wound infection rate, less postoperative pain and a lower incidence of pelvic abscess, anastomotic leak and incisional hernia. Patient convalescence is faster, and first evacuation and oral feeding are achieved earlier.
Merits
With laparoscopy, the reversal rate is higher and the procedure is safer and easier. The introduction of the circular stapler and advances in technology have made reversal possible for older, high-risk patients. The incidence of anastomotic leak is lower than in the open procedure, and adhesiolysis and localization of the rectal stump are easier laparoscopically than by the open method.
Limitations
The procedure is technically challenging and requires an experienced laparoscopic surgeon, but offers clear advantages to patients. The main reasons reported for conversion to open were dense abdominopelvic adhesions secondary to diffuse peritonitis at the primary operation, a short delay before the reconstruction, difficulty in finding the rectal stump and rectal scarring. Leaving long nonabsorbable suture ends at the rectal stump, or suturing it to the anterior abdominal wall, helps in its localization. Other relative limiting factors include a large incisional hernia from the previous laparotomy and contraindications to general anesthesia and laparoscopy.

CONCLUSION
Reversal of the Hartmann procedure can be difficult because of the tendency of the Hartmann segment to become densely adherent deep in the pelvis. Laparoscopic reversal has made this major operation easier, safe and practical. As a majority of these patients are in the elderly age group, it has the advantages of early mobilization, less pain, a short hospital stay and an early return to normal life.

Background:
The Hartmann procedure is a standard life-saving operation for acute left colonic complications. It is usually performed as a temporary procedure with the intent to reverse it later on. This reversal is associated with considerable morbidity and mortality when done by the open method. Laparoscopic reestablishment of intestinal continuity after the Hartmann procedure has shown better results in terms of decreased morbidity and mortality.
Materials and Methods:
The laparoscopic technique was used consecutively in 12 patients for reversal of the Hartmann procedure over the last 3 years. Adhesiolysis and mobilization of the colon were done under laparoscopic guidance. The colostomy was mobilized and returned to the abdominal cavity after securing the anvil in the proximal end. An end-to-end intracorporeal anastomosis was performed between the proximal colon and the rectum using the circular stapler.
Results:
Mean age of the patients was 40 years and the mean time to restoration of intestinal continuity was 130 days. Two patients were converted to open. The mean operation time was 90 min. There were no postoperative complications or mortality. The mean hospital stay was 5 days.
Conclusion:
Laparoscopic reversal of Hartmann is technically safe and feasible.

J Minim Access Surg. 2006 Dec; 2(4):211-215
INTRODUCTION
Since its introduction more than two decades ago, minimal access surgery (MAS) has brought a sweeping revolution in surgical practice. Worldwide, the volume of MAS procedures has rapidly increased in recent years and, consequently, recent general surgery graduates are seeking MAS fellowships in record numbers. The field of pediatric surgery is no exception, and there are numerous patients who can benefit from this approach.[ 1 2 ]
MAS is a natural extension of traditional surgical treatment, but the techniques and dexterity required to master these procedures are a separate set of skills. Currently, MAS training is incorporated into adult general surgery residency programs, and MAS fellowship opportunities are available in well-established programs in many centers. In contrast, teaching of pediatric MAS techniques is rarely available, limited to international course lectures (didactic or with animal models) or broadcasts of specific case demonstrations used as teaching opportunities. Pediatric surgery fellowship programs with incorporated guidelines for MAS training have become available only recently, in select centers, mostly as “self”-established programs. In many other pediatric surgery centers, teaching the “glamour” of MAS is quite dependent on a program director’s vision.

MATERIALS AND METHODS
The global objective of this paper is to review, from the “Fellow” perspective, the current status of pediatric minimal access surgery in terms of teaching feasibility, safety and impact on the standard practice paradigms of a single institution. The Alberta Children’s Hospital is a pediatric tertiary care facility for patients from the neonatal period to late adolescence. This report is a retrospective review of all MAS procedures performed in the pediatric general surgery department from June 2004 to June 2005. The selection of patients and procedures in this series was highly individualized between staff surgeons, with consideration of patient/parent preference, surgeon’s experience, complexity of the procedure and the patient’s medical condition. All surgeries were performed by the single pediatric surgery fellow, with close supervision by the attending staff surgeon. Absolute contraindications included hemodynamic instability, severe cardiac disease, pulmonary insufficiency and malignancy.

RESULTS
A total of 314 MAS procedures were performed in 311 patients, of which 56 were thoracoscopic and 258 laparoscopic [ Table 1 ]. The range included 28 different procedures with an overall conversion rate of 3.5% (1.78% in thoracoscopic and 3.87% in laparoscopic procedures).
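The percentages quoted above follow from the raw counts. A quick arithmetic check (not part of the original paper) confirms them; note that the per-arm conversion counts of 1 thoracoscopic and 10 laparoscopic conversions are inferred from the quoted percentages rather than stated explicitly in the text:

```python
# Verify the conversion rates reported for the 314 MAS procedures.
# The per-arm conversion counts (1 and 10) are inferred from the quoted
# percentages (1.78% and 3.87%), not stated explicitly in the text.
thoracoscopic = {"n": 56, "conversions": 1}
laparoscopic = {"n": 258, "conversions": 10}

def rate(arm):
    """Conversion rate of one arm, as a percentage."""
    return 100.0 * arm["conversions"] / arm["n"]

overall_n = thoracoscopic["n"] + laparoscopic["n"]          # 314
overall_conversions = thoracoscopic["conversions"] + laparoscopic["conversions"]
overall_rate = 100.0 * overall_conversions / overall_n

print(f"thoracoscopic: {rate(thoracoscopic):.2f}%")  # 1.79%
print(f"laparoscopic:  {rate(laparoscopic):.2f}%")   # 3.88%
print(f"overall:       {overall_rate:.2f}%")         # 3.50%
```

Under these inferred counts the rounded rates are 1.79% and 3.88%, so the 1.78% and 3.87% quoted in the text appear to be truncated rather than rounded values; the overall 3.5% matches exactly.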
The great majority (90.32%) of relevant thoracic procedures were performed thoracoscopically. Lung biopsy was the most commonly performed procedure (18 cases), with excellent results and a zero conversion rate. Minimal access repair of pectus excavatum (Nuss operation) was performed in 14 children with good cosmetic results and without complications or conversions. Drainage of empyema with debridement was done in 11 cases; this modality is especially suitable for children, in whom general anesthesia is unavoidable. Thoracoscopic aortopexy was very challenging, but the low volume of these procedures prevents us from drawing valid conclusions about this approach. For solitary, simple, cystic lung lesions, the MAS approach seems to be a very comfortable and safe option.
For selected routine abdominal surgery, laparoscopic procedures replaced open surgery in a variable percentage (45% overall; in total, 258 of 573 procedures). All 24 cholecystectomies were performed laparoscopically (100%) and, of the 49 fundoplications, 42 were performed laparoscopically (85.71%). These two procedures have already become the “gold standard” in pediatric surgery practice. Insertion of a gastrostomy tube, alone or as part of an anti-reflux procedure for patients with feeding difficulties, was done in 31 cases. This technique provided very good visualization and was especially important for choosing “the right place” for a G-tube. Of the 5 splenectomy cases, 4 were done by laparoscopy (80.0%), with 1 open evacuation of a large spleen. During this period, 298 appendectomies were performed, of which 56 were done by laparoscopy (18.79%), with 4 conversions (7.14%) for perforated appendicitis with diffuse purulent peritonitis. Selection for laparoscopic appendectomy was focused on a certain group of patients (obesity, female adolescents and patients with uncertain diagnosis) considered especially suitable for this modality. Laparoscopic cecostomy for bowel management of incontinent or severely constipated patients (mostly the spina bifida group) was performed in 6 cases. This approach is very well established at this hospital, with long-term follow-up and documented improvement in quality of life. Of six laparoscopic bowel resections (4 for Crohn’s disease and 2 total colectomies for familial adenomatous polyposis syndrome) with extracorporeal anastomosis, we had conversion in 2 cases (33.3%) due to insufficient visualization and, probably, lack of experience. However, the general departmental approach is to continue performing these procedures in selected, suitable patients, to overcome pitfalls and maintain progress in MAS.
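The per-procedure laparoscopic shares quoted above can be cross-checked directly from the raw counts given in the text; a small sketch (not part of the original paper):

```python
# Cross-check the laparoscopic shares quoted in the text,
# using the raw counts stated for each procedure.
counts = {  # procedure: (laparoscopic cases, total cases)
    "selected abdominal surgery (overall)": (258, 573),
    "cholecystectomy": (24, 24),
    "fundoplication": (42, 49),
    "splenectomy": (4, 5),
    "appendectomy": (56, 298),
}

for name, (lap, total) in counts.items():
    share = 100.0 * lap / total
    print(f"{name}: {share:.2f}%")

# Of the 56 laparoscopic appendectomies, 4 were converted to open:
print(f"appendectomy conversion rate: {100.0 * 4 / 56:.2f}%")  # 7.14%
```

The computed shares (45.03%, 100%, 85.71%, 80.0%, 18.79%) and the 7.14% appendectomy conversion rate agree with the figures quoted in the paragraph above.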
Laparoscopic pull-through for imperforate anus was done in two cases, after a diverting colostomy had been performed in the postnatal period. One case was converted after a time-consuming attempt to release severe adhesions in a child who had previously had a V-P shunt for hydrocephalus. The other child had an uneventful surgery, and this result encouraged us to continue with this modality in suitable cases of high imperforate anus with fistula. Laparoscopic high ligation for varicocele (12 cases) and the Fowler-Stephens operation (18 cases) for undescended, intra-abdominal testicle were routine MAS procedures that replaced almost all “open” approaches for such pathology. All of the so-called “gynecological procedures” were done as “urgent” cases of acute abdominal pain with peritoneal signs and uncertain diagnosis: nine cases of ovarian cysts (ruptured or hemorrhagic) and one case of ovarian torsion were successfully treated in a timely manner. Laparoscopic hand-assisted bilateral nephrectomy, performed for persistent post-transplant polyuria in 2 patients with juvenile nephronophthisis, was safe, was tolerated very well and allowed out-patient follow-up of these patients, who were otherwise dependent on intravenous infusion.

DISCUSSION
Various workshops have been developed in adult MAS training programs. Computer-generated virtual reality systems allow sensory interaction and provide “hand-eye coordination” models which are especially useful for self-assessment in simulation-based surgical skills training.[ 3 4 ] The reliability and validity of this modality in teaching have been confirmed in numerous studies.[ 5 – 8 ] MAS in animal laboratories has also been recognized as a method of teaching, developing and refining surgical techniques that contribute to beneficial outcomes in patients.[ 9 ] However, pediatric MAS workshops are quite rare, and attendance at a 1- or 2-day workshop does not translate into expert practice and is not sufficient to be considered a credentialing activity. The only way to reach competency in pediatric MAS, therefore, is the acquisition and safe performance of a sufficient volume and range of relevant procedures. An operation can be reduced to its component steps, which can be learned and mastered over a number of separate operations on different patients.[ 10 ] Supervised operating with structured objective assessment and feedback remains an essential part of surgical training, because it includes all of the variables encountered in surgery.[ 11 12 ]
The intent of this discussion is to convey how important it is to teach trainees what can and should be done via MAS in the pediatric population. Today’s parents bring their children for surgical consultation after extensive “searching” on the internet for a specific problem. They are well informed about therapeutic options and demand the best possible medical care. An important issue that must be kept in mind is that just because a procedure can be done technically does not mean that it is better for the patient. The MAS technique must be, at the very least, as safe and effective as the procedure it purports to replace.
The surgeon must be fluent with conventional surgical methods in situations when the MAS approach is not feasible or results in problems that require rapid conversion to “open” operation. Nevertheless, there is a learning curve and the potential for a higher complication rate is one of the most uncomfortable facts that teaching staff have to face, especially when this is judged against the excellent results of many pediatric procedures achieved by open surgery. Some senior pediatric surgeons are reluctant to promote MAS skills when they are already highly proficient in open surgery.
Without open-minded and supportive staff/consultants as leaders, surgical residents/fellows are unable to gain skills and make progress unless every potential MAS procedure is evaluated. MAS should be appraised not merely on its feasibility, or out of the enthusiasm or euphoria of personal ego or achievement, but as a pragmatic clinical teaching process as it applies to the pediatric population.
The scene is fast moving, and only encouraging programs with established training in MAS techniques will “manufacture” competent, contemporary trainees who, under the close supervision of their teachers and through education and research, constantly challenge the criteria for MAS procedures in modern surgery. This is especially important in the pediatric general surgery field, where surgeons deal with a wide range of pathology that includes thoracic, abdominal, urological and gynecological procedures and where the learning curve is slower because of the relatively small volume of patients. Adequate training will pay dividends, and national pediatric surgery associations should be responsible for setting criteria that consider MAS for accreditation while maintaining international standards for teaching programs and courses.
Integration of MAS training into the secondary residency/fellowship curriculum of pediatric surgeons is the inevitable goal. Interaction through International Pediatric MAS groups is very welcome among trainees, and hopefully multi-institutional prospective studies will allow determination of standard guidelines for MAS teaching in the pediatric population. | The global objective of this paper is to review, from the “Fellow” perspective, the current status of pediatric minimal access surgery (MAS) in terms of teaching feasibility, safety and impact on standard practice paradigms. In the pediatric general surgery field, surgeons are dealing with a wide range of pathology that includes thoracic, abdominal, urological and gynecological procedures. The learning curve is slow because of a relatively small volume of patients. However, gradually but steadily, a significant proportion of the procedures traditionally performed through major open exposures are now preferentially performed by minimal access. Currently, minimal access surgery training is incorporated into adult general surgery residency/fellowship programs, while teaching techniques of pediatric MAS are available only through occasional international workshops. Pediatric surgery fellowship programs with incorporated guidelines for MAS training have only recently become feasible in select centers, mostly as “self” established programs. In many other pediatric surgery centers, teaching the “glamour” of MAS is quite dependent on a program director’s vision. Integration of MAS training into the secondary residency/fellowship curriculum of pediatric surgeons is the inevitable goal. MAS-minded education and research through adequate training will pay dividends and “manufacture” competent, contemporary trainees. National Pediatric Surgery Associations should be responsible for setting criteria that consider MAS for accreditation while maintaining the international standards of these teaching programs.
| CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2006 Dec; 2(4):216-219 | oa_package/6f/e2/PMC3016483.tar.gz |
|||
PMC3016484 | 21234150 | INTRODUCTION
Situs anomalies are rare congenital defects and may go unrecognized until incidentally detected during imaging for unrelated conditions or during emergency surgery. Laparoscopy is indicated in these patients, as the clinical and imaging findings can be confusing in conjunction with acquired diseases. There have been only two previous case reports of laparoscopic appendectomy in situs inversus.[ 1 ]
The incidence of situs anomalies reported in the literature varies from 0.001 to 0.01% in the general population. The overlapping features of some situs anomalies and the presence of acute acquired diseases may result in confusing imaging findings. Diagnosis of acute disease processes is challenging in these patients due to altered anatomy.[ 2 ] The correct diagnosis in situs inversus patients may not be readily apparent in the emergency setting and is often delayed as a result of the lack of uniformity in physical signs.[ 3 4 ] Although the viscera are transposed, it is thought that the central nervous system may not share the reverse transposition, leading to confusing symptoms and signs. Pain from left-sided appendicitis has been reported in the right iliac fossa in about 50% of patients with situs inversus. Pain and tenderness in the left iliac fossa can also be due to a right-sided, long, dilated appendix located in the left lower quadrant. Laparoscopy helps in identifying and treating acute surgical emergencies quickly and efficiently when the clinical and imaging studies are difficult to interpret in situs anomalies.
It is our routine practice to remove the appendix in all cases of diagnostic laparoscopy (so as to avoid future diagnostic confusion). Wherever appendectomy was performed, it was accomplished via an assisted two-port method in most cases.[ 5 ] | Situs inversus is often detected incidentally in adults during imaging for an acute surgical emergency. We present a case of acute appendicitis in an adult who was previously unaware of his situs anomaly. A laparoscopic approach is helpful in dealing with this condition. A 40-year-old man was admitted with a history of acute left lower abdominal pain, with uncontrolled diabetic ketoacidosis. Clinically, he was diagnosed as acute diverticulitis with localized peritonitis. Subsequent imaging studies and laparoscopy confirmed the diagnosis of situs inversus and acute left-sided appendicitis.
He successfully underwent laparoscopic appendectomy. His postoperative recovery was uneventful. Although technically more challenging because of the reversed laparoscopic view of the anatomy, laparoscopic diagnosis and management of acute appendicitis are indicated in situs inversus. | CASE REPORT
A 40-year-old man was admitted with a history of left lower abdominal pain, fever and vomiting for the past two days. On examination, he was febrile with a temperature of 38.3°C and moderately dehydrated. There was tenderness with rebound tenderness in the left iliac fossa. Clinically he was diagnosed as acute diverticulitis. His blood investigations were as follows: Urea 5.67 mmol/L (range 2.1-7.1), Creatinine 96.6 μmol/L (range 62-106), Glucose 22 mmol/L (range 3.6-5.5), Sodium 135, Potassium 5.1 and white cell count 16.1 × 10^3/μL (range 4.0-11.0). Blood gas analysis was suggestive of metabolic acidosis, with a pH of 7.2 and a decreased PCO2. Urine examination showed 3+ ketones. A routine ultrasound examination of the abdomen was requested, which revealed the liver on the left side, the spleen on the right side and an inflammatory mass in the left iliac fossa. This changed the diagnosis to situs inversus, and the inflammatory mass in the left iliac fossa was now thought to be appendicular in origin. The heart sounds were heard over the right chest, and a subsequent plain X-ray of the chest showed dextrocardia. The ECG findings were suggestive of dextrocardia, sinus rhythm and an intraventricular conduction defect. CT with contrast confirmed the diagnosis of complete situs inversus with dextrocardia, and the appendix was seen as a tubular structure in the left iliac fossa, in front of the left psoas muscle, with surrounding inflammation [ Figure 1 ]. The patient was unaware of his diabetes mellitus and his situs inversus anomaly. He was managed with intravenous fluids, insulin infusion and parenteral antibiotics prior to his surgery.
At laparoscopy, the situs inversus findings were confirmed. The cecum and ascending colon were on the left side. An inflammatory mass was seen in the left iliac fossa, covered with omentum. Another 10 mm port was inserted in the left iliac fossa and a 5 mm port in the suprapubic region. The omentum was separated from the mass; the appendix was lying partially retrocecal and was acutely inflamed, but not perforated. The appendicular pedicle was ligated intracorporeally. The mesentery of the appendix was thick and bulky; the intracorporeal ligation of the appendicular pedicle helped in mobilizing the appendix and its delivery through the left iliac port. The appendectomy was performed extracorporeally [ Figure 2 ]. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2006 Dec; 2(4):220-221 | oa_package/05/03/PMC3016484.tar.gz
||||
PMC3016485 | 21234151 | INTRODUCTION
Surgeons are aware of the adverse physiological effects of a prolonged retro- and pneumo-peritoneum for minimally invasive procedures. However, far less is known about the clinical implications of compromised mesenteric circulation, especially in patients with severe atheromatosis.
We report the case of an elderly man who died of massive intestinal infarction following an elective retroperitoneoscopic right sympathectomy. After a review of the literature, we conclude that this is the first such case to be described. | DISCUSSION
Experimental studies have shown that the elevation of intra-abdominal pressure by gas insufflation leads to hemodynamic alteration of the peritoneal viscera and may produce splanchnic ischemia. Although the adverse physiological effects of pneumo-peritoneum are well understood, the clinical implications of compromised mesenteric circulation through several mechanical and physiological mechanisms are not as well recognized. The effects of retro-pneumo-peritoneum are even less known. Indeed, it is well known that pneumo-peritoneum during laparoscopic surgery produces a significant decrease in hepatic microcirculation.[ 1 – 3 ] A study using nasogastric tonometry demonstrated that even a pneumo-peritoneum pressure of 12-15 mmHg can cause significant splanchnic mucosal ischemia.[ 4 5 ] The severe arteriopathy, together with the prolonged lateral and Trendelenburg’s position, may have played a determining role in the pathogenesis of the massive intestinal infarction in our patient.
There is a case report of “nonspecific ulcerated jejunitis” that developed a few days after laparoscopic cholecystectomy and was attributed to ischemia-reperfusion injury following an ischemic period caused by the pneumo-peritoneum,[ 1 ] but similar cases following retro-pneumoperitoneum have not been documented in the literature. It has been reported that certain preexisting conditions such as hypercoagulable states in a patient undergoing laparoscopic surgery, as well as multi-vessel splanchnic atheromatosis can increase the risk of splanchnic vessel thrombosis.[ 4 ] The risk is particularly high when the laparoscopic procedure is lengthy, as in our case. A Medline search revealed seven cases of small bowel ischemia following laparoscopic cholecystectomy.[ 2 3 ]
We believe that the presence of a multi-vessel visceral arteriopathy and the length of the retro-pneumoperitoneum led to the massive intestinal infarction. Postoperative abdominal pain, intense leucocytosis and elevated LDH, as a marker of tissue breakdown, may be relevant data even in the absence of a diagnostic imaging work up (i.e., abdominal computed tomography). We strongly suggest that prolonged retro-pneumoperitoneum is probably best avoided in ASA grade III patients and in those who have a diffuse arteriopathy. | The adverse physiological effects of pneumo- and retro-peritoneum are relatively well known. However, the clinical implications of compromised mesenteric circulation through several mechanical and physiological mechanisms are not as well recognized. We describe a fatal case of intestinal infarction following an elective retroperitoneoscopic right sympathectomy. The patient was an 88-year-old man who died 30 hours after uneventful anesthesia and right endoscopic lumbar sympathectomy. An emergency explorative laparotomy revealed a massive intestinal infarction due to thrombosis of the superior mesenteric artery. We reviewed the literature on laparoscopic procedures and mesenteric ischemia. To our knowledge, this is the first reported case of intestinal infarction following retro-pneumoperitoneum. We conclude that the presence of a severe multi-vessel arteriopathy may represent a major risk factor in retroperitoneoscopic procedures. | CASE REPORT
An 88-year-old man was admitted to the hospital for an elective right retroperitoneoscopic lumbar sympathectomy for a right limb stage III (Leriche-Fontaine) arteriopathy. Two years earlier, the patient had undergone bilateral carotid endarterectomy and an above-knee right arterial bypass.
He used to smoke 20 cigarettes a day. His ASA grade was III. Apart from mild hypertension controlled with an ACE inhibitor, he was medically fit and well. The anesthesia and surgery were uneventful. The pressure of the retro-pneumoperitoneum was 15 mmHg and the total duration of the surgery was 120 minutes. Three hours after the surgical procedure, he complained of right abdominal pain that was unresponsive to parenteral analgesics (morphine 30 mg). The abdomen was soft on palpation and there was no distension. Though his vital signs were stable, the abdominal pain persisted. Minimal tenderness was localized in the right lower quadrant of his abdomen. His hematology and biochemistry results were all within normal limits, except for leucocytosis (WBC: 20,000/mm³) and LDH (880 IU/L). His chest radiograph was normal and a supine abdominal film showed a gas-filled transverse colon without any unusual features. The worsening of the symptomatology prompted us to undertake an exploratory laparotomy on postoperative day 1.
Laparotomy revealed a massive intestinal infarction extending to the right and transverse colon and the small bowel; the atheromatous superior mesenteric artery lumen was completely empty. The patient died a few hours after the exploratory laparotomy. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2006 Dec; 2(4):222-223 | oa_package/d1/93/PMC3016485.tar.gz |
||||
PMC3016486 | 21234152 | INTRODUCTION
Heterotopic pancreas (HP) is defined as pancreatic tissue found outside its normal location, without any anatomic or vascular connection to the pancreas proper.[ 1 ] The gastric antrum and prepyloric region are common sites for HP. Gastric heterotopic pancreas is often asymptomatic. Rarely, it may cause recurrent epigastric pain or manifest with upper gastrointestinal bleeding. Treatment is required for symptomatic gastric HP and for lesions larger than 3 cm in size.[ 2 ] Laparoscopic excision forms an attractive treatment option in these patients, the extent of which is generally guided by an intraoperative histologic examination.[ 3 ] For asymptomatic small lesions less than 2 cm, follow-up may be considered after histopathological confirmation of their benign nature.
We report two patients with chronic recurrent abdominal pain and upper gastrointestinal bleeding caused by HP and their successful laparoscopic management. | DISCUSSION
The estimated incidence of HP is 1 per 500 upper abdominal operations and between 0.6 and 14.0% in autopsy cases.[ 1 2 ] More frequently, these lesions are detected accidentally during routine endoscopic study or exploratory laparotomy. Gastric lesions are located in the gastric antrum (85 to 95%), along the greater curvature of the stomach or in the prepyloric area.[ 3 ] HP is a congenital disorder that consists of the presence of normal pancreatic tissue located outside the pancreatic frame.[ 4 ] It originates from the submucosa in two-thirds of the cases and in the remainder from the muscularis mucosa or the subserosal layer.[ 3 ] Histologically, these lesions have been classified into three categories depending on the degree of resemblance to normal pancreatic tissue. The most common group (type I) is pancreatic tissue that is easily identified, with ducts, acini and endocrine islets [ Figure 4 ]. In the second group (type II) there are a few acini and many ducts, while in the least common category (type III) only ducts are seen. Smooth muscle is commonly found throughout these lesions.[ 5 ]
HP often remains asymptomatic throughout life but may sometimes cause symptoms including recurrent epigastric pain, upper gastrointestinal bleeding, gastric ulceration, gastric outlet obstruction or pancreatitis. In a few cases, complications such as pancreatic pseudocyst or cyst formation supervene. Association with insulinoma, adenoma and malignant transformation has been reported.[ 2 ] All benign or malignant pathologic processes described in the pancreas may be expected to occur in ectopic pancreas.[ 1 ]
The differential diagnosis includes a number of benign and malignant gastric wall tumors, intramural vessels and extrinsic compression from extramural structures.[ 2 ] The classic endoscopic appearance is that of a small well-circumscribed submucosal protrusion with normal overlying mucosa.[ 3 ] In less than half the cases, umbilication may be present; this may be the site of ductal drainage to the mucosal surface. Usually these lesions are firm, round sessile nodules bulging into the lumen and can bleed with endoscopic manipulation.[ 2 ] The characteristic radiographic appearance of HP in the stomach has been described as a small broad-based submucosal mass in the antrum, with a central umbilication that represents a rudimentary pancreatic duct; the mass resembles leiomyoma or other submucosal tumors such as carcinoid or intramural metastasis.[ 4 ] Occasionally, HP appears as a mass with an irregular surface, indistinguishable from an adenomatous polyp or a polypoid carcinoma.[ 5 ]
As can be seen from the case reports of both our patients, HP is eminently suitable for laparoscopic excision if situated in a favorable location. An intraoperative biopsy is recommended in order to prevent unnecessarily extensive surgery. The application of laparoscopic surgery to HP in the stomach appears to be a technically feasible, safe and effective treatment for symptomatic patients. It should be considered a viable alternative to open surgery. | Heterotopic pancreas is a rare condition and its diagnosis is often difficult. Traditionally, the condition is treated by open surgery. We report two young women with symptomatic heterotopic pancreas located in the prepyloric region. In the first patient, upper gastrointestinal endoscopy identified a round sessile lesion with a central umbilication of the mucosa, without bleeding; in the second, endoscopy showed a lesion with intraluminal protrusion. In both cases, a diagnostic laparoscopy identified masses amenable to laparoscopic excision. Intraoperative histology confirmed ectopic pancreatic tissue in both. In the treatment of heterotopic pancreas, laparoscopic excision provides a feasible, safe and effective treatment option. | CASE REPORTS
Case 1
A 28-year-old woman with a past history of recurrent attacks of epigastric pain and progressive dyspeptic symptoms was admitted with a bout of hematemesis. General and abdominal examinations were unremarkable. Blood count and liver function tests were normal. An upper gastrointestinal endoscopy revealed a nonbleeding, firm, round sessile nodular lesion with central umbilication in the gastric antrum [ Figure 1 ]. A surface biopsy revealed normal mucosa. Abdominal computerized tomography was normal. The patient was offered laparoscopic surgery, which was performed under general anesthesia with endotracheal intubation. A nasogastric tube and an indwelling urinary catheter were inserted. The patient was placed in a lithotomy position and the operation table tilted head-up by 30°. Pneumoperitoneum was established and a 10 mm port was inserted 5 cm above the umbilicus in the midline. Two additional ports were inserted under direct vision - a 10 mm port in the left and a 5 mm port in the right subcostal area. Laparoscopy revealed a 2 cm mass in the prepyloric area, 3 cm proximal to the pylorus. A laparoscopic excision was performed. An intraoperative histology study revealed ectopic pancreatic tissue. The postoperative course was uneventful and the patient was discharged 4 days after surgery. The patient remains asymptomatic 60 months after surgery and a follow-up endoscopy study was normal.
Case 2
A 32-year-old woman with a 2-year history of recurrent upper abdominal pain was admitted with progressively increasing epigastric pain radiating laterally. She was in good health and her vital signs, cardiovascular and respiratory systems were normal. Abdominal palpation revealed epigastric tenderness. Blood count, liver function tests, amylase and lipase tests were normal. Abdominal ultrasound and computerized tomography were unremarkable. An upper gastrointestinal endoscopy revealed an intraluminal protrusion in the prepyloric area [ Figure 2 ]; the mucosa over the mass was normal both visually and on biopsy. An exploratory laparoscopy revealed a solid 2.5 cm mass located in the anterior wall of the gastric antrum within 2 cm of the pyloric canal. A laparoscopic excision was performed. An intraoperative frozen section showed gastric glands and multiple ducts and islands of pancreatic tissue [ Figure 3 ]. The gastric defect was closed by intracorporeal interrupted nonabsorbable sutures reinforced by an omental patch. Her recovery was uneventful and a postoperative water-soluble contrast media study was normal. The patient was discharged 5 days after surgery and remains asymptomatic throughout the 48-month follow-up period. The last upper gastrointestinal endoscopy was normal. | CC BY | no | 2022-01-12 15:21:46 | J Minim Access Surg. 2006 Dec; 2(4):224-226 | oa_package/dd/15/PMC3016486.tar.gz
||||
PMC3016487 | 21234153 | Originally this was to be an article about a single surgeon’s experience of minimally invasive thoracic surgery; a balanced mixture of an essay and a short state-of-the-art review was planned. It was intended to have a ‘ quo vadis - where to go from here’ - red tag. Experience obtained from a series of endless postgraduate surgical training project planning meetings in different European countries was to be condensed. The main questions to be answered were as follows: 1) What are the established methods of video-assisted thoracic surgery (VATS) today? 2) How and when are these to be performed? This was thus to provide a perspective of our present practice with special emphasis on telesurgery - to make it short, I warned myself - a proper shopping list with an eye on the market and on the future.
I tried to resist the temptation of writing a hagiography on VATS. Let me cite the closing remarks of the final draft: ‘there is no such thing as a minimally invasive surgery. This mantra of ours is misleading. The level of surgical aggressiveness is not an independent factor. VATS has no right to be an aim of its own. What we should pursue, instead, is an optimally invasive operation, which of course should involve an adequate exposure commensurate with the requirements of the patho-anatomical situation. We should expect to pay a price for the proportional collateral damage in terms of pain, loss of function and scar - but at the lowest possible level. History of surgery teaches us that the patient and the disease are the Ka’aba stone around which everything else revolves in order to find their proper place. The biology of the pathological process and the rest of the body surrounding the diseased focus are the independent variables and by no means the surgical technicalities. It leads us nowhere if, in focusing on the sharpness of our knives, we discuss the minimal arch of the blade. Having said that, no responsible surgeon takes the technical details lightly. Any self-respecting doctor considers that his/her paramount duty lies in achieving maximum effectiveness without exposing the patient to undue harm or risk. Optimization of the aggressiveness of surgery is the enigma of the art of surgery.[ 1 ] Walking the fine line between the Scylla of unwanted damage and the Charybdis of unnecessary compromises in surgical (oncological) correctness, we should define the proper place for VATS. Can we neglect the socioeconomic surroundings of the patient in this process? Definitely not.’ Expanding the horizons of the discourse beyond the limits of the surgical profession, I was ready to complete the final version; or so I thought.
One week prior to the promised submission date, I was asked at short notice to join a rapid response catastrophe aid team (NGO) as a surgeon (nobody seemed to care too much about my credentials in chest surgery and, in particular, VATS methods). Having returned from the mission, I had no choice but to rethink and consequently rewrite the whole article.
In the refugee camps, even the very basic necessities were frequently lacking. A doctor’s duty was to provide relief to patients with possibly the simplest and commonest diseases, as a cure was out of reach in many a case. From that point of view, it seemed laughable, requiring tunnel vision, to accept the importance of and discuss the number and length of port incisions. As it happened, not too much of my surgical skill was needed in Beirut/Lebanon, where we wandered around makeshift refugee camps and converted schools. Rural medicine, county surgery, basic pediatrics, a smattering of psychiatry and endless hours of public health medicine - that was our job in caring for these displaced people. I realized that in the same manner as the refugees benefited from my surgical experience within the whole spectrum of delivered medical care, the individual patient undergoing chest surgery too benefits from properly planned and executed procedures like VATS.
The quest for the proper role of VATS is emerging. Before proceeding to answer the questions raised above, let me reflect briefly on the present situation.
Standards of VATS are well established (laparoscopic surgery has its roots in thoracic procedures).[ 2 ] Even the Veress needle comes from the pre-streptomycin tuberculosis era, when it was originally used for the creation of pneumothorax with therapeutic intent.[ 3 ] Exploration and biopsy of the mediastinum (collar, extended, Chamberlain mediastinoscopies), resection of solitary tumors and thymectomy, and manipulations within the pleural space (pleural biopsy,[ 4 ] resection, pleurodesis and debridement) via VATS are all accepted and standardized procedures.
More and more procedures on the adjacent structures (spine, diaphragm: resection, suture; pericardium: exploration, windows)[ 5 6 ] involve video assistance. The limitations are mainly biological, in that the patient should tolerate single-lung ventilation. The Nuss procedure and its modifications radically changed pectus surgery, leaving only the real challenges for those who prefer the open access.
Diseases of the pulmonary parenchyma are totally different in this respect. In benign conditions, VATS parenchymal resections for diagnostic purposes and with therapeutic intent are considered justifiable, provided the patient can tolerate one-lung ventilation and a free intrapleural space can be found (frequently missing in post-inflammatory cases).
But there are two fields where serious concerns are emerging and these are beyond the mere technical feasibility. In fact, a well-trained chest surgeon is able to perform procedures identical to their open counterparts via thoracic ports using endotools - provided limitless theater time, budget and anesthesia are available. The question however is, Does it make any sense?
In malignant lung diseases, the obligatory hilar and mediastinal lymphadenectomy (lung cancer surgery) is hardly performable using VATS methods as safely, completely and quickly as via open access. Until unselected VATS lobar/sublobar resection with effective neoadjuvant therapy provides the same outcomes as standard lobectomy/pneumonectomy using open thoracotomy, the limited access methods cannot expect full acceptance. More accurate, universally affordable and reliable imaging for proper preoperative staging is the other crucial obstacle[ 7 ] to overcome before the question of VATS as a standard in lung cancer surgery can be seriously considered.
Chest trauma is the other field where the controversies come to light. The problem is rooted in the taxonomy. Limiting the definition to the treatment of acute chest cases, the question that emerges is, Can we overcome the principal problems of trauma care - racing against time (the golden hour) and the inherent limitation of resources?
VATS is a time-consuming and multidisciplinary specialty - for example, a detailed preoperative checkup is performed by a pneumonologist (assessment of the functional reserve of the opposite lung) and the anesthetist has to be highly trained for single-lung ventilation via double-lumen intubation. In chest injury, we are often faced with a patient the quality of whose lung parenchyma is unknown and who has a narrow margin of safety as far as securing the airway is concerned. The least desirable outcome is a surprise collapse of ventilation on the table. It is hard to justify VATS in acute cases without violating the very norms of basic patient safety.
However, if we extend the definition of chest trauma surgery to include elective cases such as late pneumothorax and hemothorax, secondary empyema thoracis,[ 8 ] new vistas open up for the application of VATS techniques. But not respecting the borders or freely trespassing the limits just for the sake of using a given method on a new territory causes more harm than good.
This is where we are now. That is what we ought to teach. I cannot resist the temptation to compare ourselves (medical educators, teachers, doctors, professors) to generals of the military. Those generals are said to be excellent at brilliantly preparing and training their armies to fight and win the previous war. Training junior surgeons appears to be quite similar. How should we train our junior surgeons to perform procedures that are not yet fully discovered? How should we teach and practice unborn methods?
What we are able to transfer is the proper surgical attitude and the way of critical thinking. We can teach techniques, modern VATS methods included. But to show the residents when and when not to perform VATS is akin to guiding them in finding the proper path. Controlled mass experience - aka publication - is a pillar of wisdom full of bias. ‘Publication fever’ masks the fact that frequently only a few enthusiastic centers (concentrations of experts and resources) are genuinely able to present convincing and reproducible results. The main reasons are the extreme technology and resource dependencies of this field. Nobody seems to be keen to publish high failure rates or even complication rates. Neither tumor surgery nor trauma care is a field where double-blind randomized trials can be carried out as they presently exist.
The quest for the best procedure in minimal access thoracic surgery should involve the answer to the question ‘ cui prodest - whom do we serve?’ In this beautiful new world of consumer-satisfaction-guided medicine, offering the best procedure is the mantra . It sounds quite obvious until one dares to ask: what do we mean by ‘best’? Then we are back to square one - another problem with terminology.
Is surgery a service? The word derives from the Latin servus , meaning servant. Ego sum servus tuus - I am your servant. Sounds quite politically incorrect. But as surgeons, whom do we serve? Employees are serving their firm. The aim is to achieve the highest possible profit. The human beings they engage are called clients. Professionals are serving patients and their own profession. We should answer the recently presented question: Are we, as surgeons, employees or professionals?[ 9 ]
Where can we find surgeons - in the garden of consumer satisfaction - and who are the inhabitants of this supposed paradise? Are they the patients of the present? Are they the doctors? Or are they the invisible men sitting behind the till? The managers? Or are they those who pay their contribution to whatever funds or Health Insurance Company? One-day surgery is a typical case. There is a tremendous pressure to shorten the hospital stay.[ 10 11 ] The previously mentioned extra-medical factors are coming into the picture. Economy and politics are interfering with surgery in general and they have a heavy hand in minimal access surgery - a highly lucrative field for profit. Hospitals and doctors are expected to follow and copy the patterns of the industry - our art is frequently getting referred to as ‘health industry.’
We are frequently reminded of a constant race for excellence. But one must ask: is it really for excellence? What do we mean by excellence, and how do we measure it? How wide is the gap between patient satisfaction and the real value of care in terms of outcomes, and who is bearing the burden of cost? Our anesthetist colleagues are using more and more advanced high-technology anesthesia machines - with functions rarely fully understood and used; I was told that less than one-third of their capacity is utilized. But there is no need to go so far, making excursions to the other side of the isolation screen. On our side of the screen, what proportion of the functions of an expensive and sophisticated electrodiathermy device do we actually use?
Perpetually demanding further resources is a sin shared by all the participants. The ‘tool spiral’ is a more than worrying example: stapler prices and single-use tools are draining budgets. Who remembers the good old Russian staplers - thoracic equivalents of the Petz gastric resection machine[ 12 ] - with their reloadable magazines/cartridges? Are we really better with all our brand-new gadgets? Are all of them really necessary and, if so, at what cost? Is there a (golden) middle path avoiding the extremes? Did we resist the temptations, or did we just accept the push for ever more advanced, more sophisticated machines, created by a market with a huge suction force, without caring for the consequences? Did we do our share of the job by adjusting our needs to the economic capacities of our countries and societies?
Where do we go from here? No one can ignore the influence of the surrounding world, and nobody expects us to stand up and defend the trenches till the last endostapler is fired. But we must listen to the message of our past.[ 13 ] Never bowing to extra-professional pressures should remain one of our chief commandments. If morals, philosophy and theory always - if subliminally - form the basis of the practice of surgery, we should be able to remain responsible professionals in a globalized world which, at the same time, tends to de-professionalize more and more territories. It is only by committing strongly to our patients as individuals, and as surgeons concerned with optimizing the level of aggressiveness, that we can protect them and ourselves.

J Minim Access Surg. 2006 Dec; 2(4):227-230
PMC3016496 (PMID: 20339179)

MATERIALS AND METHODS
In this study the files of 336 patients who underwent LC at the King Hussein Medical Center (KHMC) over the four years between January 2002 and December 2006 were collected and analyzed. The cases were evenly distributed over the four years. The patients’ ages ranged from 20 to 60 years; 84 were males and 252 were females. Operating surgeons were divided into two groups: consultants and junior surgeons (those who worked under supervision and were in the residency program). All patients had chronic cholecystitis (CC) and had been symptomatic for at least six months. In all patients the presence of gall stones had been established by ultrasonography. Standard Ethicon® instruments were used for all laparoscopic procedures. Complications were divided into major [extrahepatic common bile duct (CBD) injury requiring hepatojejunostomy] and minor [wound infection, minor biliary leak, transient jaundice and ileus]. In 45 of these patients a decision was made intra-operatively to convert the operation from laparoscopic to open, for reasons that included previously undetected adhesions and difficult anatomy. In two of the 45 patients there were major iatrogenic extrahepatic biliary duct injuries which required immediate hepatojejunostomy: one was a 25-year-old male patient and the other a 30-year-old female. No long-term follow-up was available to assess late complications.

RESULTS
Of the 336 patients, 252 (75%) were women and 84 (25%) were men [ Table 1 ].
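The proportions reported in this results section can be cross-checked directly from the raw counts. The sketch below is purely illustrative: the counts are transcribed from Tables 1 and 2, and the variable names are our own, not part of the study data.

```python
# Recompute the percentages reported in this section from the raw counts.
# Counts are transcribed from Tables 1 and 2; variable names are ours.
total_patients = 336
women = 252
complications = 22    # all complications
major = 2             # extrahepatic CBD injuries requiring hepatojejunostomy
wound_infection = 12  # most common minor complication
converted = 45        # laparoscopic-to-open conversions

def pct(part, whole):
    """Percentage of `whole`, rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(women, total_patients))          # 75.0 - proportion of women
print(pct(complications, total_patients))  # 6.5  - overall complication rate
print(pct(major, complications))           # 9.1  - major, as a share of complications
print(pct(wound_infection, complications)) # 54.5 - wound infections among complications
print(pct(converted, total_patients))      # 13.4 - conversion rate
```

Denominators matter here: the major-complication and wound-infection figures are shares of the 22 complicated cases, while the overall complication and conversion rates are shares of all 336 patients.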
Complications occurred in 22 (6.5%) patients: two (9.1%) had major complications and 20 (90.9%) had minor complications [ Table 2 ]. The latter included twelve minor wound infections (54.5%; all female); biliary leaks in three patients (13.7%; one male and two females), all of whom were treated conservatively; transient jaundice in three patients (13.7%; all females); and ileus in four patients (18.2%) [ Table 2 ]. All major complications occurred in the age-group of 20–30 years, whereas minor complications occurred mainly in the older patients (>41 years) [ Table 3 ]. Minor complications occurred mainly in male patients (n = 16; 72.7%) and in only six female patients (27.3%). It was not possible to include patients' body mass index, as the data were unavailable. Planned laparoscopic procedures were converted intra-operatively to open procedures in 45 patients (13.4%); two of these (one male and one female) had open cholecystectomies with CBD repair [ Table 1 ]. All cases of conversion, all major complications and 72.7% (n = 16) of the minor complications occurred at the hands of consultants; only 27.3% (n = 6) of minor complications occurred at the hands of junior surgeons [ Table 2 ]. There were no mortalities in our study.

DISCUSSION
Carl Langenbuch performed the first open cholecystectomy in 1882.[ 5 ] As surgeons gained experience and open biliary operations became standardized, the incidence of bile duct injuries fell to approximately 0.125%.[ 6 7 ] Open cholecystectomy remained the gold standard for the treatment of cholelithiasis until the late 1980s, when LC was introduced.[ 8 ] It gained widespread acceptance and became the new gold standard for the management of gall stone disease. During the surgical learning curve for this new technique there was an initial rise in reports of bile duct injuries,[ 1 ] resulting mainly from surgeons’ inexperience and misinterpretation of the anatomy. Although the reported figures for operative bile duct injuries are much lower than the actual incidence, a recent audit of 1522 LCs performed in Thailand revealed a bile duct injury rate of 0.59%,[ 3 ] i.e., about four times the incidence reported for open cholecystectomy; this injury rate is similar to that found in our study (0.6%). In Jordan in 2001, of 791 patients with CC and 207 with acute cholecystitis (AC) who underwent LC, extrahepatic bile duct injuries were reported in only three cases.[ 6 ] After 1995, a median incidence rate of 0.3% was documented in data from both retrospective and prospective series.[ 6 7 ] The single most important factor responsible for bile duct injuries is misinterpretation of the patient's anatomy. Compared to the open operation, injuries sustained during LC are more often severe (e.g., excision of a segment of the CBD) and generally extend to higher levels. The majority (70–85%) of these injuries are not recognized during the operation; however, both cases of major complications in our series were recognized during surgery, and both were iatrogenic injuries resulting from anatomical anomalies.
Combined bile duct and hepatic arterial (right hepatic artery or common hepatic artery) injuries carry a particularly bad prognosis, with higher postoperative morbidity and mortality and poorer outcomes after remedial surgery.[ 9 ] Bile duct injuries substantially increase the economic burden on the patient, hospital, and community. Repair of a bile duct injury costs 4.5 to 26 times the cost of an uncomplicated LC and carries the risk of complications and even death.[ 9 ] Though the initial spike in the incidence of complications settled down as surgeons became more experienced, reports of major bile duct injuries, even in the hands of senior surgeons, continue to surface, suggesting that bile duct injuries following cholecystectomy will always remain a significant problem. However, early recognition (during the operation or in the early post-operative period) improves the outcome and reduces the costs.[ 10 ] In our study, all major complications occurred at the hands of consultants and were recognized intra-operatively; only 27.3% of minor complications occurred at the hands of junior surgeons. The two cases of major complications in our study were due to anatomical anomalies of the cystic duct only. The first (a male patient) underwent a hepatojejunostomy; he had a long, large-calibre cystic duct which was mistaken for the common hepatic duct (CHD). The second case (a female patient) had a spiral and very long cystic duct opening into the medial aspect of the CHD.
Other common problems responsible for bile duct injuries are anomalies of the right hepatic duct (RHD) (e.g., low insertion on to the CHD), of the right anterior and posterior sectional ducts, anomalies of the right hepatic artery, and aberrant vessels coursing along the CBD.[ 9 ] All major extrahepatic CBD injuries in our series were recognized intra-operatively, whereas literature reports indicate that only 29% of such injuries are generally recognized intra-operatively.[ 9 ] Injuries of the bile duct may include partial tear, laceration, transection and even excision of a portion of the duct. These injuries are seen irrespective of the type of cholecystectomy and result in biliary stricture, which is undoubtedly the most serious complication following cholecystectomy. The severity of the complication depends on the type of injury, the delay in presentation, and on whether the patient requires a revision of an initial attempt at repair. Injuries identified and repaired at the time of the first operation afford good results.[ 3 ]
In our study, 38 patients’ (84.4%) operations were converted due to anatomical difficulties encountered intra-operatively, five (11.2%) due to the discovery of adhesions, and two (4.4%) due to common bile duct injuries. It is clear from the literature that bile duct injuries occur even in the hands of experienced and competent surgeons but, obviously, inexperience increases the risk. A casual attitude towards a “simple” gallbladder resection may result in a catastrophe. A Swedish study[ 11 ] has shown that a surgeon is most likely to injure the bile duct when undertaking between his 25th and 100th operative cholangiogram. Acute inflammation around Calot's triangle makes the tissue friable and difficult to grasp. Dissection in such conditions leads to excessive bleeding; this, together with the distorted anatomy, increases the risk of bile duct injuries during LC.[ 10 ] On the other hand, extensive fibrosis around Calot's triangle in cases of chronically inflamed and fibrosed gallbladders may make them extremely difficult to dissect. The cystic duct and biliary tree may be injured if the surgeon moves from the gallbladder down into the region of the bile duct to try to separate it and in doing so causes a diathermy injury to the bile duct, which can result in a leak. Occasionally, the CBD is dissected out and divided in the belief that it is the cystic duct. In such cases, partial cholecystectomy is justified, as otherwise there remains a high risk of bile duct injury.
In one series of 1254 patients who underwent LC, the probability of complications was significantly higher in those diagnosed with complicated gallstone disease.[ 12 ] Overzealous use of electrocautery near Calot's triangle and extensive dissection around the CBD may damage its axial blood flow, leading to ischemic damage to the duct and late stricture formation.[ 13 ] Excessive traction leading to tenting of the CBD is another factor predisposing to clipping and ligation of the bile duct, especially when performing an open cholecystectomy. An unnecessary attempt to demonstrate the junction of the cystic duct and the CBD can be potentially dangerous. Obesity and excessive fat in the porta hepatis also pose technical difficulties and can predispose to bile duct injuries.[ 9 ]
Some authors have described the mechanism of “classic” laparoscopic injury in the presence of “normal” anatomy of the biliary tree; this pattern occurs when the gallbladder is retracted superiorly. Surgeons believe they can see where the cystic duct is and dissect directly on to it, rather than dissecting on to the gallbladder. What is believed to be the cystic duct is followed down, and the CBD is then dissected out, clipped and divided as though it were the cystic duct.[ 13 ] Dissection proceeds upwards along the medial aspect of the CBD and the CHD until the right hepatic artery is damaged.
Other authors[ 14 ] have described a variation of this sequence of events, where faulty anterior and medial traction on the Hartmann pouch fails to open up Calot's triangle, causing the cystic duct and the CHD to be mistaken for one another. The cystic duct-CHD junction is pulled up and then clipped and divided. This can result in distal obstruction of the CBD and a fistula through the open cystic duct remnant. Clearly, if the surgeon is aware of the existence of a short cystic duct, then particular care needs to be taken when clipping it.
The factor responsible for the occurrence of such complications (as in our study) is the difficulty of interpreting the two-dimensional images seen in laparoscopic surgery. This is more likely to occur when no operative cholangiogram (OC) is performed. The mistaken interpretation and identification can be so compelling that the surgeon does not recognize that an error has been made. Even when irregularities were identified intra-operatively, corrective action was rarely undertaken, given the difficulty theatre staff may have in challenging the firmly held assumptions of other staff.[ 15 ] Even when mistakes were identified post-operatively (e.g., in the presence of obvious jaundice), it has been found that appropriate feedback to the relevant surgeons rarely occurred.[ 15 ] A review of 74 patients referred with bile duct injuries sustained during LC, done at the Vanderbilt University Medical Center, Nashville, suggested that these injuries are frequently severe and are related to cautery and high clip ligation, and that the level of injury was almost evenly divided between Bismuth types 3, 4, and 5 vs Bismuth types 1 and 2.[ 16 17 ]
Among the minor complications in our study, wound infection was the most common, followed by ileus, transient jaundice, and minor biliary leak. Minor biliary leaks were relatively unusual in our study (three of 22); two were due to the presence of an accessory duct and we were able to treat these cases conservatively; one case was due to the clips applied to the cystic duct slipping and this case had to be managed by percutaneous drainage and ERCP.
According to the literature, the leak may be minor,[ 18 ] arising from a small, accessory bile duct[ 19 ] and clinically insignificant. Such cases should be treated with percutaneous drainage. Injuries to the accessory bile duct are the most common cause of postoperative complications.[ 19 ] On the other hand, a major leak[ 3 18 ] due to injury to a main duct or a retained stone in the CBD[ 20 ] may result in biliary fistula, peritonitis, or biloma. Biliary fistula following LC is a common outcome in many studies; however, we had only one case in our series. Mostly, this complication results from improper application of clips or the clips slipping.[ 20 ] Use of diathermy to divide the cystic duct may cause charring of the tissue and failure of the clip to hold. ERCP helps in diagnosis and removes any doubts regarding possible major ductal injuries. The condition resolves spontaneously[ 20 ] provided there is no distal obstruction; the process may be hastened by the placement of a stent endoscopically.
In bile duct excision, a portion of the bile duct is lost and simple repair, as may be done in transection and laceration, is not possible.[ 21 ] This is why both cases of major extrahepatic biliary duct injuries in this study underwent hepatojejunostomy. The chances of late stricture[ 21 ] are greater in bile duct transection than in bile duct laceration, as the axial vascular supply of the CBD is damaged in transection. Biliary reconstruction in the presence of peritonitis, combined vascular and bile duct injuries, or injuries at or above the level of the biliary bifurcation were significant independent predictors of poor outcome.[ 21 ] In our study, all patients had excellent recovery and were discharged in a good condition within 10 days of surgery; however, long-term follow-up was not available. Strictures may develop early (within days or weeks) or may take years to develop, and vary in both diameter and length.[ 22 ] Early strictures may develop due to intra-operative events such as clamping, ligation or clipping of the duct, or thermal injury. Local infection may also result in both early and delayed stricture formation. Thermal injury and occult malignancy are important causes of delayed stricture formation. A thorough knowledge of the anatomy of the region, including possible anomalies, is important in preventing iatrogenic bile duct injuries. Both open cholecystectomy and LC are based on similar operative principles. Proper exposure and visualization, careful dissection, adequate haemostasis, careful placement of ligatures and clips, and division of structures only after proper identification are essential for safe cholecystectomy. Fundus-first cholecystectomy is well recognized as a safe technique during both open cholecystectomy and LC, because it minimizes the risk of injuries to the biliary structures at Calot's triangle.[ 23 ]
Further specialised training to heighten awareness of the possible problems relating to the anatomy of Calot's triangle is essential, not only for trainees but also for consultants engaged in this field. It is known that errors of judgement can be made at the few points within LC where complication-causing errors can occur, for example mistaking the CBD for the cystic duct or dissecting too close to the CHD.[ 15 ] An increased awareness of the possibility of such injuries could lead to a reduction in their occurrence and, further, the early recognition of such an injury (with the advantages of more training) would also lead to fewer injuries going unrecognised. Since injuries occurring at LC are frequently more severe and extend to a higher level than those that occur during open cholecystectomy (Strasberg E3–E5 injuries occur in 31% of LC vs 12% of open cholecystectomy), prevention should always be the aim.[ 24 ] Other authors have stressed the many steps that can be taken to prevent iatrogenic bile duct injuries.[ 9 ] For example, maximum cephalic fundal traction should be applied for better visualization of Calot's triangle; lateral and inferior traction on the Hartmann pouch can open up the angle between the cystic duct and the CHD and avoid mistaken identification of the relevant anatomy; Calot's triangle must be freed of fatty and areolar tissue; dissection should start near the neck of the gallbladder (the cystic lymph node is an important landmark) and then proceed from lateral to medial, keeping close to the gallbladder; and excessive or unnecessary dissection and use of electrocautery near the CBD must be avoided. Cautery should be used at a very low power setting in Calot's triangle[ 25 ] because electrocautery on tissues close to metal clips concentrates thermal energy and desiccates the tissue, making the clips less secure and thus predisposing to bleeding and biliary fistula formation.
Any bleeding should be controlled only after accurate identification of its source and the neighbouring structures.

CONCLUSION
Bile duct injuries are a rare complication of both open cholecystectomy and LC, but they can have devastating effects, turning the individual into a “biliary cripple”. They mainly result from anatomical anomalies and errors of human judgment and are thus preventable to some extent. Costs are reduced and outcomes improved if these injuries are diagnosed early (during the operation or in the early postoperative period). Combining the experience gained from open cholecystectomy on the one hand with the advantages of LC, in terms of visualization and magnification, on the other will help in reducing the incidence of such complications.

Background/Aim:
This study aimed at assessing the outcome of laparoscopic cholecystectomy (LC) by determining the frequency of complications, especially of bile duct injuries.
Materials and Methods:
The case files of all patients undergoing laparoscopic cholecystectomy between 2002 and 2006 (inclusive) at King Hussein Medical Center (KHMC) were retrospectively analyzed. We evaluated the data according to outcome measures such as bile duct injury, morbidity, mortality, and the number of patients whose operations had to be converted from laparoscopic to open.
Results:
During the four years (January 2002 to December 2006), 336 patients underwent LC for chronic cholecystitis (CC), of whom 22 (6.5%) developed complications. Two patients had major bile duct injuries (0.6%); 43 other patients (12.8%) had planned laparoscopic operations converted to open cholecystectomy intra-operatively. None of the patients in this study died as a result of LC.
Conclusion:
Bile duct injury is a major complication of LC. Anatomical anomalies, local pathology, and poor surgical technique are the main factors responsible. The two patients who had severe common bile duct injury in this study had major anatomical anomalies that were only recognized during surgery.

INTRODUCTION

Operative bile duct injury is one of the serious complications of hepatobiliary surgery. With the introduction of laparoscopic cholecystectomy (LC) there has been an increase in the incidence of such injuries.[ 1 – 4 ] This study was designed to help surgeons in their attempts to prevent undesirable outcomes.

Saudi J Gastroenterol. 2010 Apr; 16(2):100-104
PMC3016497 (PMID: 20339180)

MATERIALS AND METHODS
We retrospectively analyzed sex, age, clinical presentation, duplication size and site, presence of ectopic tissue, complications, additional associated anomalies and prognosis in seven patients with ED managed in our hospital (2005–2008 inclusive). The radiological work-up of each patient was also analyzed. The aim was to determine the patterns of ED and the management challenges in these patients.

RESULTS
The patients’ ages ranged from 44 hours to 10 years at presentation. Five patients were male. There were eight EDs, four (50%) of which were ileal duplications [ Table 1 ]. Abdominal swelling and vomiting, characteristic features of intestinal obstruction, were the most common presentation. The duplication types, age at presentation, treatment and outcome are shown in Table 1 . Two cases were complicated by volvulus and bowel gangrene. These two patients presented earlier (within 10 days of life) and were the only patients who presented with bilious vomiting [ Table 1 ]. The first was a 44-hour-old female neonate with bilious vomiting and progressive abdominal distension. She was delivered by emergency caesarean section on account of preeclampsia in the multiparous mother; there was no history of polyhydramnios. She passed meconium within six hours of birth but had hyperkalemia (5.9 mmol/L) in spite of adequate urine output, and showed features of small intestinal obstruction on abdominal X-rays. Findings at surgery were turbid purulent fluid with meconium and a segment of gangrenous twisted ileum around a cystic duplication. About 64 cm of gangrenous bowel was resected and primary bowel anastomosis done. The baby had a turbulent postoperative period with sepsis and jaundice but was managed successfully and discharged on the 19th day postoperatively. The second case of small bowel duplication with gangrene was 10 days old at presentation. He had done well in the immediate postnatal period. A few hours after presentation, he developed sudden abdominal swelling and fever, became progressively irritable, and vomited several times. Clinical features were in keeping with intestinal obstruction, suspected to be due to malrotation. However, laparotomy confirmed a cystic duplication of the midgut twisted around itself. Resection and anastomosis were performed; recovery was uneventful.
Four duplications demonstrated ectopic gastric mucosa, while the rest showed the mucosal pattern of the most closely related adjacent gut. One parent declined surgery because of fear and lack of funds. Excision and anastomosis gave satisfactory results in all operated patients except one, who died of sepsis and malnutrition three weeks after surgery.
ED, an uncommon malformation of the gastrointestinal tract, may be asymptomatic or present with vague symptoms mimicking other more common pathologies such as intussusception, volvulus, appendicitis, pelvic abscess, diverticulitis, achalasia, and Hirschsprung's disease.[ 1 2 4 – 17 ] It is most commonly diagnosed when complications such as bleeding, intestinal obstruction or perforation have occurred.[ 5 – 10 ] Multiple theories have been proposed to account for ED, but no single theory adequately explains all the known duplications. The majority of EDs are single, cystic and located on the mesenteric side of the native alimentary tract; all the intraabdominal EDs in this study were located at the mesenteric edge of the bowel.[ 4 – 9 ] Symptoms are often related to the location of the duplication. Oral and esophageal lesions may cause respiratory difficulties, whereas lower gastrointestinal lesions may cause nausea, vomiting, bleeding, perforation, or obstruction.[ 4 – 9 13 14 ]
All EDs in this series presented with obstructive gastrointestinal symptoms, with the exception of one sublingual cystic duplication. This patient presented with drooling of saliva but had no dysphagia or respiratory difficulty despite the cyst measuring approximately 5 × 6 cm. According to Chen et al .,[ 14 ] who reported two cases of sublingual duplication cysts, an intraoral ED cyst is rare and, when it presents as a large sublingual mass, has the potential to cause airway obstruction and respiratory distress at delivery that may necessitate immediate tracheostomy. Unlike in the Chen et al . cases,[ 14 ] where antenatal ultrasound diagnosis allowed for the proper preparation of personnel and equipment for the management of those neonates during delivery, even before clamping of the umbilical cords, our patient with a sublingual cyst did not present until four years of age, but complete cyst excision was still possible [ Figure 1 ].
Approximately 75% of EDs are intraabdominal, and over half of these are ileal duplications. The finding of four ileal, three colonic, and one sublingual ED in our series is consistent with the pattern described previously. The frequency of cystic ED in our series (five patients) was similar to the 75% of cystic duplications reported by some authors.[ 1 – 5 ] The tubular form of duplication, which occurred in two of our patients, showed colonic involvement. Six of our patients (71.4%) were symptomatic in the first year of life, which is similar to the over 60% reported by several authors.[ 1 – 5 ]
Some EDs may not become symptomatic until school age or adulthood.[ 1 – 5 ] The diagnosis of a duplication cyst is therefore difficult to make clinically, or on conventional radiologic studies including barium examinations (swallow, meal, follow-through, and enema), because of the wide spectrum of symptoms and the nonspecific signs that may frequently simulate other diseases.[ 3 6 7 9 ] None of our cases was suspected clinically, and this created a challenge intraoperatively. Nevertheless, two patients had positive barium enema findings, one of whom had tubular duplication of both the ileum and colon. Two of our patients, clinically suspected to have Hirschsprung's disease, had their cystic duplications complicated by volvulus and gangrene. Only these two patients presented with bilious vomiting, and one of them died three weeks after surgery.
Heterotopic mucosa of gastric and pancreatic origin is a common finding in ED. Ectopic gastric mucosa was seen in four of our patients. Noteworthy complications such as gastrointestinal ulceration and hemorrhage from ectopic gastric mucosa, bowel perforation and peritonitis, malignant degeneration and intussusception have been documented.[ 8 – 12 15 – 17 ] There may be more than one type of heterotopic mucosa in the same duplication. When acid-secreting mucosa lines the lesion, hemorrhage or erosion may result; neither occurred in our patients with ectopic gastric mucosa. Although additional malformations (of the genitourinary tract or vertebrae) have been encountered in 16–26% of cases in some series,[ 4 ] there were none in ours.
Within this series, volvulus complicated by gangrene of long bowel segments (64 cm and 45 cm, respectively) occurred in two patients. Thus, the signs and symptoms leading to diagnosis varied with the age of the patient, the location of the duplication, the type of mucosal lining, the duration of disease, and the presence of complications.[ 4 ]
The diagnosis is rarely made before surgery because of the nonspecificity of symptoms and presentations. Radiological work-up with ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) has been useful. MRI is capable of defining a synchronous cyst and spinal cord anomaly but is relatively expensive and scarce.[ 1 3 4 ] Where the duplication is tubular, barium examination, if not contraindicated, may be diagnostic. Both our patients with tubular duplications were thought to have Hirschsprung's disease on clinical evaluation, but the barium enema examinations done on them were consistent with ED.
Plain thoracic and abdominal X-rays and ultrasonography are the most commonly used diagnostic radiological methods in our center when gastrointestinal obstructive symptoms are present. The plain radiographic features were nonspecific in all our patients but were capable of excluding vertebral anomaly. The diagnoses were missed on abdominal ultrasonography, probably due to a low index of suspicion and the limitation imposed by overlying bowel gas, because all our patients with intraabdominal ED presented with gastrointestinal obstructive symptoms.
The ultrasonographic and MRI feature suggestive of duplication is the identification of a three-layered image representing the duplication cyst, the common wall, and the outer bowel wall.[ 1 3 4 ] These imaging modalities have enabled prenatal diagnosis in some cases. In many developing countries this opportunity may remain elusive, as many mothers still deliver at home because of ignorance and poverty, quite apart from the scarcity of MRI and its cost. The role of multimodality imaging cannot be over-emphasized. Where ED is associated with bleeding, the availability of isotope scanning with a material such as technetium-99m pertechnetate could be useful in demonstrating the bleeding mucosa from the ectopic tissue, as demonstrated for Meckel's diverticulum.[ 10 ]
There have been some cases of mistaken diagnosis leading to wrong management of patients and attendant complication such as persistent perineal fistula, which occurred in two patients thought to have pelvic abscess and drainage instituted in a series reviewed by Iyer et al .[ 9 ]
Treatment of ED is surgical. The main considerations in the management of ED are the age and condition of the patient, the location of the lesion, whether it is cystic or tubular and communicating with the true intestinal lumen, and whether it involves one or more anatomic locations. Generally, total excision is preferred, but staged approaches are sometimes necessary.[ 8 14 18 ] Sacrificing a segment of normal intestinal tract is sometimes necessary during the resection of ED because of the often-intimate attachment, but long-segment resection, which may result in short bowel syndrome, should be avoided, though it may be inevitable in cases of gangrenous bowel. Tubular duplications rarely have an autonomous blood supply and, since they have an 80–100% incidence of gastric mucosa, simple drainage of the distal end may cause peptic ulceration of the normal mucosa with bleeding or perforation, and thus should be discouraged. The Wrenn method of mucosal stripping can be used even for a long tubular duplication, although multiple step-ladder incisions may be necessary.[ 19 ] A few cases have been treated by laparoscopic-assisted resection.[ 17 – 22 ] Overall, however, laparotomy is still the most commonly performed approach. The results of surgery are generally favorable, with no more than 20% mortality in any reported series to the best of our knowledge. However, some cases of ED have been reported to have developed malignancy.[ 11 12 ]

CONCLUSION
ED exists in our environment and demands a high index of suspicion and careful clinical management. Many cases of nonspecific abdominal pain should be properly reviewed and investigated before patients suffer avoidable complications. | Background/Aim:
Enteric duplication (ED) is a rare congenital anomaly that can occur anywhere along the alimentary tract from the mouth down to the anus and the nearby organs. This uncommon anomaly may be asymptomatic or present with vague symptoms mimicking other common pathologies. We aim to present our experience, management challenges, and the patterns of ED, with a review of the literature.
Settings and Design:
The study was carried out at a Nigerian Tertiary Hospital (2005–2008 inclusive).
Materials and Methods:
We retrospectively analyzed seven patients with ED managed in our hospital for sex, age, clinical presentation, duplication size and site, presence of ectopic tissue, complications, associated anomalies, radiological workup, and prognosis. Data were analyzed using SPSS 11.0 for Windows.
Results:
Seven children aged between 44 hours and 10 years had ED: one sublingual and six intraabdominal duplications. Midgut volvulus with long-segment bowel gangrene complicated two cases. The diagnosis was incidental in all. Three cases were diagnosed following conventional radiological contrast examination and the rest at surgery. Ultrasound was not helpful in making the diagnosis in any of the six intraabdominal duplications. Though surgery was recommended for all, one of the patients declined. Only one patient had unsuccessful surgery.
Conclusions:
ED requires a high index of clinical suspicion and careful management. Many cases of nonspecific abdominal pain should be properly evaluated before patients suffer avoidable complications. | Enteric duplication (ED) is a rare congenital anomaly that can occur anywhere along the length of the alimentary tract from the mouth down to the anus and the nearby organs.[ 1 – 7 ] Overall, the ileum is the most common site of occurrence.[ 1 – 7 ]
The pathogenesis is still vaguely understood. However, errors in normal embryologic canalization, or an embryologic connection between the developing gut and neural tube as part of the split notochord syndrome, have been postulated. The diagnosis of an ED cyst is difficult to make clinically because its wide spectrum of symptoms and nonspecific signs frequently simulates other diseases.[ 3 4 ] The clinical presentations may be vague, diverse, and varied depending on location.[ 5 ] These include nausea, vomiting, bleeding, pain, swelling, distention, dysphagia, dyspepsia, respiratory distress, and chronic constipation, with complications including bleeding, perforation, malignancy, and obstruction of the alimentary tract and vessels.[ 1 2 4 – 16 ] Hence, most cases of ED are diagnosed incidentally, especially at surgery.
Plain X-rays are of limited use in the diagnosis of ED. Conventional contrast radiographic examinations, such as swallow, meal, follow-through, and enema studies, can be of value if the ED is tubular. Ultrasound and magnetic resonance imaging (MRI) findings may be diagnostic. Computerized tomography (CT) is useful in delineating surrounding structures.[ 9 ] A nuclear scan may have a role in the detection of a bleeding ED, although it may be unable to distinguish it from Meckel's diverticulum except when other features are identified with concomitant investigations.[ 10 ] Once the diagnosis is established, surgical correction is the treatment of choice, preferably complete excision.[ 9 ] The outcome is generally good, and mortality has not exceeded 20% in any series.
We present our experiences with EDs in seven children between 2005 and 2008 and review the literature. | CC BY | no | 2022-01-12 15:21:46 | Saudi J Gastroenterol. 2010 Apr; 16(2):105-109 | oa_package/b3/23/PMC3016497.tar.gz |
||
PMC3016498 | 20339181 | PATIENTS AND METHODS
We conducted a case-control study on patients referred to the gastroenterology clinic of Imam Hossein Hospital in Tehran, Iran in 2004. All the patients chosen were being evaluated for H. pylori infection. Patients who had a history of PPI, H2 receptor antagonist, warfarin, fluoxetine, or steroid use within one week before endoscopy, or antibiotic use within four weeks before endoscopy, as well as those with severe medical illness, active gastrointestinal bleeding, or a history of gastric surgery or H. pylori eradication, were excluded from the study. Informed consent from all patients and approval from the research ethics committee of the faculty of medicine were obtained before proceeding.
Information on age, gender, chief complaint, and type and duration of NSAID use was gathered using a questionnaire. A gastroenterologist performed an upper gastrointestinal (GI) endoscopy using Olympus GIF-100 for all patients and obtained two biopsy specimens from the antrum and two biopsies from the corpus of the stomach. One biopsy from each region was sent for RUT and one for pathologic study using Giemsa staining. The RUT was performed using Chem Enzym Co. kit and read within two hours for all cases. Results of the pathologic studies were considered the gold standard for diagnosis of H. pylori .
Subsequently, patients were divided into case and control groups based on the RUT result and NSAID use. Patients with a negative RUT and a history of NSAID use were assigned to the case group, and those with a negative RUT and no history of NSAID use were assigned to the control group. Our study also included a third group comprising patients with a positive RUT result, regardless of history of NSAID use. Pertinent data were recorded in a data sheet and analyzed using SPSS (χ 2 test).
A total of 210 patients were studied, including 70 with a negative RUT and a history of consumption of a variety of NSAIDs (case group), 70 with a negative RUT and no history of NSAID use (control group), and 70 with a positive RUT. Of the 70 patients with a positive RUT, 38 had no history of NSAID use. The case group included 42 (60%) females and 28 (40%) males. The control group included 37 (52.8%) females and 33 (47.1%) males. The groups lacked any significant gender difference ( P =0.4). The group with a positive RUT included 41 females (58.7%) and 29 males (41.4%) and was not significantly different from the other two groups in this regard.
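The reported gender comparison can be checked against the group counts given above. A minimal Python sketch (pure standard library; Pearson chi-square on the 2×2 table without Yates continuity correction, which is an assumption about how the reported P value was obtained):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square (no continuity correction) for a 2x2 table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# rows = case group / control group, columns = female / male (counts from the text)
chi2, p = chi_square_2x2([[42, 28], [37, 33]])
print(round(chi2, 3), round(p, 2))  # chi2 ≈ 0.726, p ≈ 0.39, consistent with the reported P=0.4
```

With Yates continuity correction the P value would be somewhat larger (≈0.5), so the uncorrected statistic appears to match the figure reported in the text.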
The average age was 55.2±12.9 (range: 27 to 82 years) and 43.3±12.1 (20 to 68 years) years in the case and control groups, respectively. In the RUT positive group, the average age was 54.1±11.9 (range: 25 to 71) years.
Among our patients, the most commonly used NSAID was ASA 100 mg (30%) followed by ASA 80 mg (20%), diclofenac (19%), ibuprofen (16%), piroxicam (7%), indomethacin (6%), naproxen (1%), and ASA 500 mg (1%). Sixty per cent of our patients had used NSAIDs for more than four weeks (mainly ASA 80 and 100 mg) and 22.4% for less than one week. Using the pathology results as the gold standard, we calculated the sensitivity, specificity and accuracy rates of RUT. Only one false negative case was observed among 140 RUT results, and that single case belonged to the control group. Sensitivity, specificity and accuracy rates of RUT were 100% among NSAID users. In patients without any history of NSAID use, the sensitivity, specificity and accuracy rates of RUT were 97.4, 98.5 and 98.1%, respectively. The overall sensitivity, specificity and accuracy rates of RUT were 98.6, 99.29, and 99.04%, respectively. These findings are summarized in Table 1 .
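The reported rates follow directly from a 2×2 confusion matrix against the histology gold standard. In the sketch below, the non-NSAID counts (37 true positives, 1 false positive, 69 true negatives, 1 false negative) are inferred from the group sizes and rates given in the text, not stated explicitly there; the overall counts add the 102 NSAID users, in whom RUT was always correct:

```python
def diagnostic_rates(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy (in %) from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return tuple(round(100 * x, 2) for x in (sensitivity, specificity, accuracy))

# Non-NSAID users: 38 RUT-positive (37 true, 1 false), 70 RUT-negative (69 true, 1 false)
print(diagnostic_rates(tp=37, fp=1, tn=69, fn=1))   # (97.37, 98.57, 98.15)

# All 210 patients, adding the NSAID users (32 true positives, 70 true negatives)
print(diagnostic_rates(tp=69, fp=1, tn=139, fn=1))  # (98.57, 99.29, 99.05)
```

The computed accuracies (98.15% and 99.05%) differ from the text's 98.14% and 99.04% only in the last decimal, presumably a rounding artifact in the original report.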
The RUT is widely used in the diagnosis of H. pylori infection around the world due to a number of advantages, including lower cost and more rapid results compared with histology or culture. Further, RUT has been shown to have high sensitivity, specificity and clinical accuracy. Said et al. reported a sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of 98, 100, 100, 98 and 99%, respectively, for RUT (Pronto Dry).[ 9 ]
However, the test has been shown to be less sensitive in cases of concurrent use of proton pump inhibitors,[ 10 ] bismuth, and anti- Helicobacter antibiotics. The test is also influenced by the pH of the gastric mucosa.[ 11 ]
Several studies have focused on the effect of medications other than conventional antibiotics on the growth of H. pylori . Shirin et al. reported that certain NSAIDs, including sodium salicylate, ibuprofen, indomethacin, the selective cyclooxygenase-2 inhibitor NS-398 and two derivatives of sulindac sulfoxide, possess in vitro antibacterial activity against H. pylori at therapeutically achievable doses.[ 7 ] In another in vitro study, Wang et al. demonstrated that aspirin inhibited the growth of H. pylori , suppressed the mutagenic effect of metronidazole, and enhanced the susceptibility of H. pylori to antimicrobial agents.[ 12 ]
These studies have focused on the in vitro effect of NSAIDs on the growth of H. pylori and one cannot extrapolate the results to in vivo settings and test results since organisms are multiplied in the urea broth during the test.
This is supported by the work of Graham and colleagues, who reported that H. pylori was not susceptible to aspirin and other NSAIDs, such as indomethacin, ibuprofen, naproxen, or tolmetin in vivo .[ 13 ] Similarly, our results showed that RUT maintains its high sensitivity and specificity even in patients who have used NSAIDs for a moderate period of time. One of the few studies in this regard is that performed by Caselli et al. in 1989, who reported that Campylobacter-like organisms were detected in a significantly lower proportion ( P <0.001) of rheumatoid arthritis (RA) patients than outpatients (30.6 and 59.0%, respectively).[ 14 ] However, this study was greatly limited in that the authors used only one method for the diagnosis of H. pylori, namely biopsy, and it is reasonably possible that the organisms did not exist in the exact region where the biopsy was taken. The mentioned study is also limited by the selective nature of its sample, i.e. RA patients, who frequently use a wide range of other medications that could interfere with test results. Moreover, other studies have demonstrated that there is no meaningful difference in H. pylori prevalence between NSAID users and non-users.[ 15 16 ] The relatively small sample size and absence of long-term NSAID users in our study warrant further research on the subject.
Our study results show that NSAID use does not give false negative results in RUT; hence it is not necessary to stop NSAIDs before RUT. | Background/Aim:
This study aimed to determine the effect of nonsteroidal anti-inflammatory drugs (NSAIDs) on the results of rapid urease test (RUT).
Patients and Methods:
The study evaluated 210 consecutive patients for the diagnosis of Helicobacter pylori (H. pylori) infection. They were divided into case and control groups based on history of NSAID use (n=70 each). Two biopsy specimens were collected from antrum and corpus of stomach during endoscopy and sent for rapid urease testing and histopathology. Sensitivity, specificity, and accuracy rate of RUT test were compared against histology.
Results:
The average age was 55.2±12.9 and 43.3±12.1 years in the case and control groups, respectively. Among NSAID users, RUT sensitivity, specificity and accuracy rate were all 100%. The sensitivity, specificity and accuracy rate of RUT in patients without history of NSAID use were 97.37, 98.57 and 98.14%, respectively. The overall sensitivity, specificity and accuracy rate of RUT were 98.57, 99.29, and 99.04%, respectively.
Conclusion:
Our study shows that sensitivity, specificity and accuracy rate of RUT are not affected by NSAID use. Rapid urease test remains a reliable test for diagnosis of H. pylori in patients on NSAIDs. | Helicobacter pylori ( H. pylori ) is a spiral, gram negative, microaerophilic, urease producing organism that lies in the interface between the gastric epithelial cell surface and overlying mucus gel.
The prevalence of H. pylori varies greatly in different parts of the world. It ranges from 8.9 to 72.8% among children from developed and developing countries, respectively, with the re-infection rate also being significantly higher in the latter.[ 1 ]
In a recent study in the south of Iran, the prevalence rates were 82%, 98%, 88%, 89%, and 57% in the age groups of nine months and two, six, 10, and 15 years, respectively.[ 2 ] In another recent study, in the north of Iran, the prevalence of H. pylori infection in a similarly aged population was reported as 57.8% by histology, rapid urease test (RUT) and serology.[ 3 ]
Inadequate sanitation, low socioeconomic status and overcrowding seem to be related to a higher prevalence of H. pylori infection. No predominant route of transmission has been identified, and possibilities include the fecal-oral, oro-oral and gastro-oral routes.[ 4 ]
Essentially, all H. pylori infections lead to chronic gastric inflammation, but this condition is in itself asymptomatic. Symptoms are usually related to illnesses like peptic ulcer disease and gastric adenocarcinoma. The presence of H. pylori is also strongly related to gastric lymphoma. Low-grade B-cell mucosa-associated lymphoid tissue (MALT) lymphomas, which are antigen driven, often regress following eradication of H. pylori .[ 5 ]
Tests for H. pylori can be divided into two groups: invasive tests, which require upper gastrointestinal endoscopy and analysis of gastric biopsy specimens, and noninvasive tests. One of the rather widely used invasive tests is the RUT. It is based on the principle that abundant urease enzyme produced by H. pylori hydrolyses urea to ammonia. The consequent rise in pH of the medium is detected by phenol red indicator.[ 6 ] Several modifications of Christensen's original urea medium have been developed to obtain quick results and improve sensitivity and specificity. Although sensitivity and specificity of RUT are generally above 90%, the test results seem to be influenced by the consumption of several drugs including proton pump inhibitors (PPIs), antibiotics, H2 receptor antagonists, and bismuth.[ 7 ] Further, there have been reports of anti H. pylori effects of salicylates and sulindac; their effects on the result of RUT are, however, not yet clear.[ 8 ]
In the light of the prevalence of H. pylori infection, cost of treatment and need for correct diagnosis prior to initiation of eradication treatment, we conducted this study to determine if nonsteroidal anti-inflammatory drugs (NSAID) use adversely influenced RUT results. | The authors would like to thank Farzan Institute for Research and Technology for technical assistance. | CC BY | no | 2022-01-12 15:21:46 | Saudi J Gastroenterol. 2010 Apr; 16(2):110-112 | oa_package/72/1b/PMC3016498.tar.gz |
|
PMC3016499 | 20339182 | DISCUSSION
Mycobacterium fortuitum is a nontuberculous mycobacterium that is classified according to the Runyon grouping as rapidly growing mycobacteria (Runyon group IV). As the name implies, these types of organisms can be grown in culture and identified in less than seven days.[ 10 ] It was first described by Cruz who named it so because of its fortuitous isolation from the pus of a subcutaneous abscess.[ 11 ] It is an opportunist pathogen which ordinarily does not cause tuberculosis and occurs most often with abscess formation or indolent lung disease. Infection is often chronic, does not respond to conventional antituberculous treatment and occasionally remits spontaneously.
The organism is ubiquitous and is commonly found in air, tap water, distilled water for dialysis, and water for preparing surgical solutions. It was, however, not a clear source of infection in the previous hospital where the patient had the operation, and no other patients in the referring hospital were subsequently identified with a similar infection. The isolation of the organism in the gastric biopsy during endoscopy of our patient may suggest that the organism is present in the environment and might have contaminated the skin and/or the equipment during surgery. The onset of symptoms shortly after surgery supports this hypothesis. Another possible cause is dissemination of the mycobacterium into the peritoneal cavity from a gastrointestinal tract injury with leak during laparoscopy, although the gastrografin study did not support this possibility. Nosocomial outbreaks of infection by this organism have been reported in the surgical setting[ 12 ] and as respiratory tract colonization, the latter being traced to a water line supplying showers in a ward.[ 13 ] Kobayashi et al. were able to isolate mycobacterial species from colonic contents during colonoscopy,[ 14 ] confirming the normal colonization of the gastrointestinal tract by the organism.
According to Marks,[ 15 ] the organism is likely to be significant as an etiologic agent when isolated from the skin and subcutaneous tissue, and seldom so from other sources. Our ability to culture the organism in pure growth from both the ascitic fluid and the tissue around the gastric band, and the resolution on treatment, indicate that the organism is etiologically related to the clinical features of the patient.
Mycobacterium fortuitum is generally resistant to most antituberculous drugs. These drugs were, however, used widely for treatment until the 1990s, when aminoglycosides alone, and later in combination with macrolides (particularly clarithromycin), imipenem, and fluorinated quinolones, were shown to be effective.[ 16 17 ] The National Committee for Clinical Laboratory Standards (NCCLS) recommended the broth microdilution technique as the method for susceptibility testing of rapidly growing mycobacteria, with minimum inhibitory concentration (MIC) determination and resistance breakpoints similar to those used for other bacterial species.[ 18 ] The antimicrobials used are selected antibacterial agents, and the recommendation for testing the rapidly growing mycobacteria (RGM) includes clarithromycin (used as a class representative agent for the new macrolides), amikacin, cefoxitin, imipenem, doxycycline, ciprofloxacin, and sulfonamide. The current practice is to combine two or more of amikacin, imipenem, cefoxitin, fluorinated quinolones, and macrolides in treatment. The efficacy of this approach has been demonstrated in our case. No standard duration of therapy is reported, but treatment lasting 4-6 months is not unusual. In patients with valve replacement and endocarditis due to this complex in whom response was inadequate, removal of the infected prostheses led to resolution of infection.[ 19 ] Some patients in the same review, however, recovered without removal of infected valves. Another study has demonstrated a higher case-fatality ratio in patients with foreign material, which may make it difficult to eradicate the organism once it is acquired.[ 9 ] The decision to remove the source of infection should therefore be taken on an individual case basis. Our patient demonstrated no response to therapy until the band was removed.
Death from Mycobacterium fortuitum infection is infrequent partly because most infections involve nonvital organs such as skin and soft tissues, and that the disease is generally self-limiting. Where death occurs, it is usually primarily due to the underlying disease with the Mycobacterium fortuitum infection playing a secondary role. | CONCLUSION
There is a need to consider Mycobacterium fortuitum as a cause of postsurgical peritonitis and not just as a contaminant, do appropriate investigations and institute the right treatment. Current guidelines recommend susceptibility testing of all isolates, with use of empirical antibiotics suggested until drug sensitivities are known. | Mycobacterium fortuitum is a rapid growing nontuberculous organism that has rarely been associated with peritonitis in an otherwise healthy host. We describe a patient who developed peritonitis due to the organism after gastric banding operation, which resolved after removal of the gastric band and institution of appropriate antibiotic therapy. | Mycobacterium fortuitum is a ubiquitous, rapidly growing organism that is readily cultured from soil, tap water, dust, and hospital environment.[ 1 2 ] When isolated from human sources (especially from the upper respiratory tract) usually reflects transient colonization rather than infection,[ 3 ] but it can cause skin and soft tissue infections following penetrating trauma and or even punch biopsy.[ 4 5 ] It has, however, been implicated in a number of postsurgical infections including infections of the sternum, mediastinum, and endocarditis, following heart surgery,[ 6 ] keratitis, and mastitis.[ 7 ] Peritonitis has been reported previously in many patients undergoing chronic peritoneal dialysis,[ 8 ] but has only been reported once before following abdominal surgery.[ 9 ]
We describe the first case of peritonitis due to Mycobacterium fortuitum after gastric banding through laparoscopy for morbid obesity, in an immunocompetent woman.
CASE REPORT
The patient was a 36-year-old female who presented to our institution with a two-month history of fever, chills, upper abdominal pain, and anorexia. A few days prior to the onset of presenting symptoms, she underwent gastric banding for morbid obesity in another facility. She had no other symptoms referable to other systems and was not on any medications.
Examination revealed an ill-looking morbidly obese lady with a body mass index (BMI) of 45. Vital signs were as follows: temperature 39°C, respiratory rate 26/minute, pulse 115/minute, and blood pressure 130/92 mmHg. Examination of the abdomen revealed a scar for the prior gastric banding, tenderness with rebound phenomenon especially on the left upper quadrant with clinical signs of ascites. Bowel sounds were normal. Examination of other systems was unremarkable.
Initial investigations were as follows: white cell count 8.8×10 3 , hemoglobin 10.6 g/dl, platelets 412×10 3 , and erythrocyte sedimentation rate 112 mm/hr. Both chest radiograph and urinalysis were normal. Ultrasound scan and, later, computed tomography (CT) scan of the abdomen confirmed the presence of free fluid in the peritoneal cavity, a gastric band in situ, and an enlarged fatty liver [ Figure 1 ]. Ascitic tap revealed an exudative fluid with a total protein of 44 g/l and a cell count of 450 cells/μL, comprising 80% lymphocytes. Culture of ascitic fluid was performed by inoculating the specimen into a liquid culture medium, BACTEC MGIT 960 (MGIT: mycobacterial growth indicator tube); rapid growth was detected in six days. Fungal and other bacterial cultures revealed no growth.
Further identification was established and the diagnosis of Mycobacterium fortuitum complex was made. Mycobacterium fortuitum was differentiated from Mycobacterium chelonei by nitrate reduction and iron uptake, in addition to growth of the organism on MacConkey medium without crystal violet. Antimicrobial susceptibility testing of the isolate was done using broth microdilution, and the results showed that the organism was resistant to rifampicin, ciprofloxacin, and ceftriaxone, and sensitive to amikacin, gentamicin, clarithromycin, and tetracycline. Culture of a gastric biopsy taken at endoscopy also grew the same organism. A gastrografin study did not demonstrate any leak into the peritoneal cavity. Blood cultures were sterile and the tuberculin test was negative.
Hospital course: The patient was started on clarithromycin 500 mg orally twice daily, doxycycline 100 mg orally twice daily, and gentamicin 320 mg intravenously once daily. She remained febrile and symptomatic after a week of treatment. A decision to remove the gastric band was taken, followed by rapid defervescence. Histology of the tissue around the band revealed foreign body-type granuloma, and culture grew the same organism with an identical antibiogram. The patient was discharged after three weeks of triple antibiotics. She was subsequently maintained on clarithromycin and doxycycline for a period of four months. She made a complete and uneventful recovery.
||||
PMC3016500 | 20339183 | DISCUSSION
Cavernous hemangiomas are the most common benign neoplasm of the liver and are often incidentally detected during abdominal imaging done for unrelated clinical indications.[ 4 ] Most of them are small and asymptomatic and do not require any intervention. Giant hemangiomas are those larger than 4 cm in diameter.[ 5 ] Absolute surgical indications for hepatic hemangioma are spontaneous or traumatic rupture with hemoperitoneum, intratumoral bleeding, and consumptive coagulopathy (Kasabach-Merritt syndrome). Persistent abdominal pain, obstructive jaundice, portal hypertension, superficial location of tumors larger than 5 cm with a risk of trauma, and an uncertain diagnosis are relative surgical indications.[ 6 7 ] Rupture of hemangioma with hemoperitoneum, as in our case, is the most dreaded complication and is often fatal if not promptly managed.[ 1 – 3 ] The first case of spontaneous rupture of a hepatic hemangioma was described by Van Haefen in 1898 in an autopsy case.[ 8 ] In 1961, Sewell and Weiss[ 9 ] reviewed 12 cases of spontaneous rupture of hemangiomas from the literature and reported the mortality rate to be as high as 75%.
Recent studies have emphasized the role of TAE in the effective treatment of symptomatic hemangiomas, progressively growing hemangiomas and those at risk of bleeding.[ 10 11 ] However, the use of TAE as an alternative to surgery in the management of ruptured liver tumors is controversial because of the fear of causing ischemia, intracavitary bleeding or infection.[ 12 ] Successful use of TAE before surgical resection of ruptured hepatic hemangioma was first reported by Yamamoto et al in 1991.[ 13 ] Since then, three more such cases have been reported in the literature, with no patient mortality.[ 3 14 15 ] The present case adds to the list of such patients.
In our case, the lesion was subcapsular and located in segment VII of right lobe. The likelihood of spontaneous ruptures is unknown but large, subcapsular lesions are considered to be at greater risk.[ 12 ] Abdominal CECT not only established the diagnosis of ruptured hemangioma but also demonstrated the site of rupture and suggested the possibility of active intra-abdominal bleeding which was confirmed on angiography. Perihepatic packing done prior to referral may have contributed to reduction in the rate of bleeding, thus facilitating angiography. The choice of embolic material depends upon the initial angiographic appearance and the experience of the radiologist. Various materials like gelfoam, polyvinyl alcohol (PVA) particles, steel coils, as well as isobutyl cyanoacrylate have been used.[ 10 ] We used PVA particles for initial embolization of vascular interstices and the principal arteries were embolized using steel coils, without any significant complication.
Tumor resection after TAE resulted in minimal per-operative blood loss in our patient. In cases of rupture, TAE results in stanching or reducing the hemorrhage, thereby improving the general condition of patient, thus making subsequent hepatic resection a safer procedure.[ 5 ] Suzuki et al. observed a significant improvement in coagulative factors and a decrease in intraoperative blood loss, in patients with consumptive coagulopathy related to intravascular coagulation in hemangioma, that were treated with preoperative TAE.[ 16 ]
To conclude, in a patient presenting with acute abdominal pain and no hitherto known abdominal problem, spontaneous rupture of a hepatic tumor such as hemangioma should also be kept in mind as a rare differential diagnosis. In addition, our case substantiates the limited available literature regarding the validity of preoperative transcatheter arterial embolization and its usefulness in significantly improving the outcome in patients undergoing surgery for a ruptured hepatic hemangioma.
CASE REPORT
A 31-year-old man experienced sudden onset of abdominal pain and tenderness in the right hypochondrium. There was no history of trauma, and no history of significant disease in the past. He underwent laparotomy at a primary health care centre, owing to a misdiagnosis of ruptured liver abscess with peritonitis. On exploration, hemoperitoneum with active bleeding from the liver surface was found. Due to lack of diagnosis of the possible cause of bleeding, perihepatic packing was done with sponges, and the patient was referred to our tertiary care institute. The patient received three units of packed cells prior to shifting to our center. On arrival, patient was pale and in hemorrhagic shock. The blood pressure was 76/50 mmHg, pulse was 124 beats per min and feeble. Abdomen was distended and diffusely tender. Laboratory investigation revealed hemoglobin of 6.8 g/dl, hematocrit 26%, and high transaminase levels. After initial resuscitation with intravenous fluids and packed red blood cells (RBC), the patient underwent a contrast enhanced computerized tomography (CECT) scan on a spiral computerized tomography (CT) scanner. CT scan demonstrated a peripherally enhancing lesion in right lobe of liver, located in segments VI and VII [ Figure 1 ]. The largest dimension of the lesion was 11 cm in the craniocaudal direction, and transverse and antero-posterior measurements were 9.7 and 7.3 cm respectively. The site of rupture was clearly seen as a rent in the enhancing margin of the lesion at the postero-superior aspect [ Figure 1 – arrows]. On delayed scans, the lesion showed characteristic centripetal filling-in of contrast, typical of a hepatic hemangioma, with a small central non-enhancing fibrous scar [ Figure 2a ]. Delayed images also revealed hyperdensity tracking along the liver surface and settling in the dependent part [ Figure 2b ], indicating the presence of active contrast extravasation. There were associated hemoperitoneum and mild bilateral pleural effusions. 
Remnant liver parenchyma was normal with no other focal lesion. The sponges from previous laparotomy were also identified. Based on CT scan, a diagnosis of giant hepatic hemangioma with rupture and hemoperitoneum was made and patient was shifted to angiography suite within 12 hours of arrival at our center after adequate resuscitation. The patient was normotensive but tachycardic at the time of shifting to the angiography suite.
Digital Subtraction Angiography (DSA) revealed typical angiographic findings of a liver hemangioma with pooling and puddling of contrast in the right lobe lesion. Arterial contrast extravasation was noted on the celiac angiogram, after which a selective right hepatic arteriogram was done that confirmed active contrast extravasation from the hemangioma [ Figure 3 ].
The hemangioma was embolized in a single session, first using polyvinyl alcohol (PVA) particles for embolization of the tumor interstices, after which the principal arteries were occluded using two steel coils [ Figure 3 ]. PVA particles of 500–750 microns (Cook Inc., Bloomington, U.S.A.) were used for embolization and were delivered through a Cobra catheter. Deep catheterization obviated the need for a microcatheter. The patient's vitals stabilized immediately, and a check angiogram showed successful embolization with no contrast leak.
The patient received a total of six transfusions perioperatively. When his general condition stabilized, he underwent laparotomy for removal of the surgical packs. After pack removal, no active bleeding was noted, and a large hemangioma involving segments VI and VII, ruptured on the posterior surface, was seen [ Figure 4 ]. The hemangioma could be completely enucleated without significant blood loss. Histological examination revealed a cavernous hemangioma measuring 11.5×10 cm. The postoperative course was largely uneventful except for mild fever and a right pleural effusion that necessitated tube thoracostomy. The patient fully recovered and was discharged three weeks after surgery. He was asymptomatic at follow-up at two months.

Saudi J Gastroenterol. 2010 Apr; 16(2):116-119
PMC3016501 (PMID: 20339184)

DISCUSSION
Percutaneous endoscopic colostomy (PEC) has been used in other conditions[ 6 – 8 ] such as acute colonic pseudo-obstruction and fecal constipation. The few contraindications to PEC are similar to those of PEG and include patients who are unfit for endoscopy, inability to pass the colonoscope, failure to transilluminate, and ascites, owing to the concomitant risk of infection.[ 9 ]
Recurrent sigmoid volvulus in elderly patients who are not fit for definitive surgery is difficult to manage. The placement of two PEC tubes is a simple and relatively safe procedure. The major risk of the procedure is peritonitis, which carries almost 100% mortality since most of these patients are unfit for surgery.
Preoperative CT scan is useful to outline the anatomy of the sigmoid colon in relation to the small bowel, but is not mandatory. The two tubes should be left open to act as vents for the colon, preventing overdistention. In our opinion, this is the key to success: it keeps the sigmoid colon deflated until adhesions form between the colon and the abdominal wall. The tubes should be left in situ indefinitely.
Our case has the longest follow-up in the published literature, and we believe this technique can be offered as an alternative to open surgery in high-risk patients.

Sigmoid volvulus is a common cause of large bowel obstruction (LBO) in western countries and Africa. It accounts for 25% of the patients with LBO admitted to the hospital. The acute management of sigmoid volvulus is sigmoidoscopic decompression. However, the recurrence rate can be as high as 60% in some series.[ 1 ]
We describe a technique of percutaneous endoscopic colostomy (PEC) using two 20 Fr PEC tubes. There are two case series and three case reports describing the technique.[ 2 – 6 ] All describe the use of a single, small-caliber PEC tube; the success rate was low, and the technique therefore did not gain popularity.
CASE REPORT
A 93-year-old man with a background of atrial fibrillation (AF) on warfarin, ischemic heart disease (IHD) and asthma presented with recurrent sigmoid volvulus. Owing to the associated risk of general anesthesia (GA), non-surgical intervention was sought. He had nine documented, radiologically proven, recurrent sigmoid volvuli over a six-month period. A preoperative computerized tomography (CT) scan was performed to outline the relationship of the colon to the abdominal wall and to ensure there was no intervening small bowel. Informed consent was obtained.
The procedure was performed under sedation (2 mg of midazolam and 25 mcg of fentanyl) administered by anesthetists. The patient was placed in a modified Lloyd Davies position. A single dose of intravenous antibiotics was given (gentamicin and metronidazole). The abdomen was prepped with aqueous Betadine. The colonoscope was introduced to the splenic flexure and then withdrawn to the level of the proximal sigmoid. The lighted tip of the endoscope was seen pressing outward against the abdominal wall. Local anesthetic was infiltrated into the skin and the sheath. Using a PEG (percutaneous endoscopic gastrostomy) kit, a needle was used to enter the colon under direct vision. The wire was grasped by a snare (a polypectomy snare was used, as the snare provided in the PEG tube kit was short) [ Figure 1 ]. A 20F PEG tube was then trailed into the colon through the abdominal wall. The position was checked with the colonoscope [ Figure 2 ]. The same procedure was performed on the sigmoid 15 cm distally. This was slightly difficult, as transillumination was not easily obtained, but it was ultimately successful.
The two tubes were left open to vent the colon, and a colostomy bag was used to cover both tubes. A postoperative CT scan was done to check the position of the tubes and to ensure the absence of free gas beyond the tube insertion [ Figure 3 ]. The patient was discharged on the second day. A year and eight months later, the proximal tube eroded through the skin and required removal in the clinic. Six months later he developed a recurrence of his volvulus, and a proximal tube was replaced. At two years he had had no further recurrence and had a good quality of life.

Saudi J Gastroenterol. 2010 Apr; 16(2):120-121
PMC3016502 (PMID: 20339186)

Sir,
I read with great interest the article by Dr. Gamal Khairy[ 1 ] titled ‘Acute Appendicitis: Is Removal of a Normal Appendix Still Existing and Can We Reduce Its Rate?’ published in the Jul-Sep 2009 issue of the Saudi Journal of Gastroenterology. The author highlights the importance of clinically based diagnosis of acute appendicitis and comments on the futility of routine CT scanning in the diagnosis of acute appendicitis, based on the reports of 3 (5.5%) of 54 negative-appendectomy patients who had undergone preoperative CT scan. Is it justified to comment on the diagnostic value of CT scan based on the reports of only 3 patients? The literature contains abundant studies both supporting and opposing the utility of CT scan for appendicitis. I agree with the author that clinical judgment is of paramount importance in the diagnosis and management of acute appendicitis even today, and that liberal use of CT scan should be strongly discouraged. The pitfalls of CT scan need to be understood,[ 2 ] and the potential radiation burden to the patient cannot be overemphasized. However, I would like to mention that imaging has a definite and well-established role in the diagnosis and management of appendicitis. A clinician will always face the dilemma of balancing early appendectomy (to prevent perforation) against negative appendectomy. Laboratory investigations, though useful, are nonspecific. This is where definitive imaging studies come in. It is surprising that the article does not even mention whether ultrasonography was done in any of the patients. Despite its relatively low sensitivity, ultrasonography provides very high specificity, which possibly could have addressed the author's primary concern of how to reduce negative appendectomy rates.
Ultrasonography can be carried out with minimum delay and can yield pertinent, surgically relevant information, such as confirmation of the diagnosis of appendicitis, the status of the inflamed appendix, its location and orientation, and the presence or absence of associated complications, adhesions, free fluid, etc., besides suggesting alternative diagnoses, all of which helps in appropriate surgical planning. What we need today is a proper diagnostic algorithm and triage of patients with suspected appendicitis with regard to the choice of investigations. Dedicated appendiceal ultrasonography by an experienced sonologist should be the first imaging modality in suspected cases of acute appendicitis.[ 3 4 ] If the sonographic study is equivocal or negative and the clinical suspicion is very strong, the patient may undergo a selective CT scan as per protocol, or may even be taken up for surgery at the discretion of the treating surgeon.

Saudi J Gastroenterol. 2010 Apr; 16(2):122a
PMC3016503 (PMID: 20339185)

Sir,
Anatomical variation of the hepatic artery (HA) is seen in 20.4% of liver donors. The common variations include a replaced or an accessory right HA originating from the superior mesenteric artery (6.67%) and a replaced or an accessory left HA originating from the left gastric artery (6.41%).[ 1 ]
A 46-year-old male patient presented with painless, progressive jaundice of 1 month's duration, with severe generalized itching and white stools. Clinically, he was deeply jaundiced, with a serum bilirubin of 26 mg% and elevated alkaline phosphatase. The whole biliary tree was seen to be dilated on ultrasonography (USG). A side-viewing endoscopic biopsy from an ulcer at the ampulla was reported as adenocarcinoma. There was no metastasis on evaluation. During the classical Whipple procedure, the HA was dissected free in the normal location, in front of and to the left of the portal vein. While dissecting toward the gastroduodenal artery, a much bigger vessel was seen crossing the portal vein anteriorly. The main artery was seen to the right of the portal vein, crossing in front of the portal vein at the superior border of the pancreatic neck and dividing into the gastroduodenal artery and the common HA, which further divided into the right and left HA [Figures 1 and 2 ]. There was no HA arising from the celiac trunk. As the artery lay in front of the portal vein, along the line of pancreatic neck transection, the chance of injury was high. The main trunk and the artery beyond the gastroduodenal were protected, and an uneventful Whipple procedure was done.
The arterial system of the liver in humans presents wide variability, and knowledge of the different variations is important when operating in this region.[ 2 ] In a cadaveric dissection, a similar anatomical variation has been reported, with a gastrosplenic and a hepatomesenteric trunk, and with the HA having a course similar to that found in our case[ 3 ]; we, of course, could not dissect to the origin of the artery in our live patient. In another cadaveric dissection, two anomalous HAs were described; one of these was similar to the HA seen in the present case, which can be classified as type VI of Adachi's classification and type IV of Morita's classification.[ 4 ] Ours is the first report of such an anomaly in a live jaundiced patient. Such cases require careful dissection and ligature of the gastroduodenal artery, with care being taken to preserve the common hepatic trunk.

Saudi J Gastroenterol. 2010 Apr; 16(2):122b-123
PMC3016504 (PMID: 20339187)

CONCLUSION
The ideal test for liver histologic assessment should have high sensitivity and specificity, be relatively inexpensive, incur minimal risk for the patient, and be convenient to perform, with reproducible and easily interpreted results. LB entails significant complications and also suffers serious shortcomings in diagnostic accuracy: a large liver sample is required to achieve ideal diagnostic accuracy, which is clinically infeasible and even dangerous to pursue. On the other hand, a number of noninvasive biomarkers have evolved, each with an impressive range of diagnostic certainty approaching that achieved with LB. These pose no danger to the patient, are reproducible, and are easily interpretable. Invasive assessment of the liver can no longer be cited as a “gold standard,” and at best can only be considered an imperfect standard. Neither LB nor any single alternative option represents an absolute assessment of liver disease.

ABSTRACT

Percutaneous liver biopsy (LB) remains an important tool in the diagnosis and management of parenchymal liver diseases. In current practice, it is most frequently performed to assess the inflammatory grade and fibrotic stage of commonly encountered liver diseases, with the diagnostic role relegated to secondary importance. The role of LB remains a vastly controversial and debated subject, with an ever-increasing burden of evidence that questions its routine application in all patients with liver dysfunction. It remains, essentially, an invasive procedure with certain unavoidable risks and complications. It also suffers shortcomings in diagnostic accuracy, since a large liver sample is required for an ideal assessment, which in clinical practice is not feasible. LB is also open to observer interpretation and prone to sampling errors. In recent years, a number of noninvasive biomarkers have evolved, each with an impressive range of diagnostic certainty approaching that achieved with LB.
These noninvasive tests represent a lower-cost option, are easily reproducible, and serve as suitable alternatives to assess hepatic inflammation and fibrosis. This article aims to debate the shortcomings of LB while simultaneously demonstrating the diagnostic accuracy, reliability and usefulness of noninvasive markers of liver disease, thereby making the case for their utilization as suitable alternatives to LB in many, although not all, circumstances.

INTRODUCTION

Liver biopsy (LB) is an important diagnostic tool that assists in determining specific diagnoses and directs therapeutic decisions in patients with acute and chronic liver diseases. Over one hundred years ago, Paul Ehrlich introduced the procedure as a means of studying liver histology.[ 1 ] Since then, LB has gained widespread acceptance for the assessment of liver abnormalities associated with many types of liver disease. The popularity of LB was facilitated by the Menghini “one-second liver biopsy” technique,[ 2 ] which provides samples suitable for various morphological studies, including histochemical, immunohistochemical, ultrastructural and, more recently, molecular biology studies.
The examination of an LB specimen under the microscope is a direct way to identify changes in hepatic tissue and either make a specific diagnosis or determine the grade and stage of chronic liver disease. When it was initially developed, LB primarily served as a diagnostic aid to determine the etiology of liver dysfunction. However, with expansion of knowledge in relation to pathogenesis and natural history of various liver diseases, and the availability of more sensitive and accurate serologic, virologic, genetic and immunologic laboratory tests as well as radiographic techniques, the role of LB in clinical practice has undergone a major change. LB remains a key test to provide a diagnosis, especially in the presence of significant hepatic dysfunction and lack of diagnosis in spite of a comprehensive laboratory evaluation for viral, genetic and autoimmune diseases. In current practice, however, LB is most often performed to assess the degree of necroinflammatory and fibrotic changes, thereby providing essential prognostic information on which to base therapeutic decisions. LB has remained the “gold standard” mainly because of the absence of better alternatives.
However, at long last, substantial progress has been made to break the monopoly that LB has maintained on the evaluation of hepatic tissue. Alternatives to this invasive test have been proposed and are deemed to be as good as biopsy and less damaging to the patient, and include predictive tests for assessment of steatosis, inflammation and fibrosis.[ 3 ] Additionally, it has become apparent that LB, far from being a “gold standard,” is at best an imperfect standard that has attracted criticism over its general application. Increasing evidence challenges the notion of LB as the reference against which all other techniques must be measured.[ 4 – 10 ] Its role remains a controversial subject, and an ever-increasing number of authors have questioned the need for its routine application in all patients with liver dysfunction.[ 11 – 13 ]
Simultaneously, evidence has accumulated promoting the use of noninvasive means of assessing liver histology. While investigators initially focused on a combination of laboratory tests such as reversal of aspartate aminotransferase (AST)/alanine aminotransferase (ALT) ratio or AST/platelet ratio index (APRI), more recently there has been a concerted effort to identify novel markers of fibrosis, necroinflammation and steatosis.[ 14 – 18 ] A recent systematic review of noninvasive biomarkers by Poynard et al [ 19 ] identified a total of 2237 references between 1991 and 2008 to novel biomarkers of liver fibrosis, of which 14 have been validated. This clearly represents an escalating interest in the study of noninvasive markers of liver disease.
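As an illustration of how simple such composite indices can be, the sketch below computes the AST-to-platelet ratio index (APRI) mentioned above, following its commonly published definition: AST expressed as a multiple of its upper limit of normal, multiplied by 100 and divided by the platelet count in 10^9/L. The function name and the example values are illustrative assumptions, not taken from this article.

```python
def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_10e9_l: float) -> float:
    """AST-to-platelet ratio index (APRI).

    Commonly published formula:
        APRI = (AST / upper limit of normal AST) * 100 / platelets (10^9/L)
    """
    if ast_uln_iu_l <= 0 or platelets_10e9_l <= 0:
        raise ValueError("ULN and platelet count must be positive")
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_10e9_l

# Example (arbitrary values): AST 80 IU/L with ULN 40 IU/L and
# platelets 100 x 10^9/L gives (80/40) * 100 / 100 = 2.0
```

Because the index uses only two routine laboratory values, it can be recomputed at every clinic visit, which is part of why such markers lend themselves to serial monitoring in a way repeated biopsy does not.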
The initial international guidelines, consensus statements and expert panel opinions on the management of chronic viral hepatitis were unanimous in their recommendation of LB for pretreatment evaluation of the disease.[ 20 – 26 ] However, more recently, the European Association for the Study of the Liver guideline statement for the management of chronic hepatitis B (CHB) published in 2009 supports the use of noninvasive markers for disease stratification, providing credibility to their reliability and reproducibility.[ 27 ] Moreover, the use of such tests is rapidly evolving in practice. A recent survey of 546 hepatologists in France revealed that 81% used the noninvasive biomarker FibroTest–ActiTest (Biopredictive, Paris, France) and 32% used transient elastography, resulting in an impressive reduction in the use of LB by 50% for chronic hepatitis C (CHC) patients.[ 28 ]
In this article we will discuss the shortcomings of LB. In the same vein, we aim to demonstrate the diagnostic accuracy, reliability and usefulness of noninvasive markers of liver disease and make the case for their utilization as suitable alternatives to LB in the evaluation of chronic liver diseases.
LIVER BIOPSY
Complications of biopsy
Percutaneous LB is an invasive procedure and is associated with a significant risk of complications. These complications may vary from minor events, such as pain and transient hypotension, to major complications, including (i) hemorrhage (intraperitoneal, intrahepatic, hemothorax); (ii) puncture of viscus (gallbladder, colon, pleura); (iii) inadvertent biopsy of the kidney or the pancreas; and (iv) intrahepatic arteriovenous fistula formation.[ 29 30 ] Significant bleeding and bile peritonitis are serious complications and may lead to mortality [ Table 1 ]. The mortality rate from LB is reported to range from 0.01% to 0.1%.[ 30 31 ]
It is significant to note that while pain is dismissed as a trivial complication, it is experienced in 84% of individuals during LB,[ 32 ] is severe in 20% and may persist beyond the day of procedure.[ 32 ] The pain immediately following LB can be highly distressing and a major ordeal for patients, thereby serving to discourage future LB. A recent French survey of 1177 general practitioners showed that 59% of patients infected with hepatitis C virus refused LB, an opinion concurred by 22% of the general practitioners.[ 33 ]
Likewise, LB-induced bleeding is often asserted to be an extremely rare complication although major bleeding is reported in up to 4.5% of procedures.[ 34 ] In fact, the most common cause of death associated with LB is due to bleeding, which may occur in up to 1.6% of patients undergoing the procedure.[ 34 ] Various predictors of bleeding have been identified (coagulopathy, multiple passes, cirrhosis, tumor), and while extreme precautions are taken to avoid biopsy in the face of abnormal coagulation, most bleeding cases (>90%) occur with an international normalized ratio (INR) less than 1.3.[ 35 – 39 ]
LB is frequently cited as a simple procedure that may be performed safely at the bedside by relatively junior personnel. As an ever-increasing number of LBs are being performed for a widening spectrum of indications, LB may more often be performed by less skilled individuals. In a study demonstrating the relevance of the learning curve, major morbidity (4.7%) and mortality (2.2%) both arose in the setting of personnel inexperience.[ 34 ] Similar studies in the past have shown that complication rates are markedly higher when the procedure is performed by less-experienced individuals.[ 39 40 ] Thus, it is anticipated that an inordinate number of complications of LB may arise in the foreseeable future. Given these complications, there is an understandable reluctance on the part of patients to undergo repeated biopsies that may be required to monitor disease progression, especially in the context of antifibrotic therapy development.
Finally, much has been made of the transvenous approach to LB as a means of reducing serious complications. In a recent systematic review of 7649 transjugular LB, minor and major complications were reported in 6.5% and 0.6% of interventions, respectively, along with an accompanying mortality rate of 0.09%.[ 41 ] Similarly, ultrasound guidance is unlikely to reduce the complication rate of LB, since imaging fails to identify small intrahepatic arteries, which are the usual causes of serious bleeding.[ 42 ] In deference to this rationale, data from a retrospective study showed that in biopsies performed with ultrasound guidance, the risk of major hemorrhage was somewhat higher than nationally published figures.[ 43 ] This suggests that, as yet, there are no definitive means of avoiding the usual major complications of LB.
Inadequacy of biopsy specimens
A number of studies have shown that sampling errors occur when the samples obtained from a target population (or tissue) fail to be adequately representative. Considering that an adult biopsy sample corresponds to a fraction of just 1/50,000th of the entire liver, a biopsy specimen would seem to be insufficient in diseases such as viral hepatitis, where the liver changes may be unevenly distributed. At present, the most common indication for LB occurs in the setting of chronic viral hepatitis where biopsy is performed to grade and stage histological disease.[ 44 ] Therefore, the question that needs to be addressed is whether the sample size affects the histological assessment of chronic hepatitis in terms of grade and stage.
Studies have shown that LB performed with a single pass can miss the diagnosis of cirrhosis in 20%-50% of patients.[ 4 6 45 – 48 ] It has been previously suggested that even a biopsy length of 4 cm may not be the perfect “gold standard,” which is examination of the entire liver or at least a sample longer than 10 cm.[ 9 ] Various studies have evaluated the role of LB specimen size that would provide a representative sample for accurate disease estimation.[ 49 – 51 ] A specimen at least 1.5 cm long is needed for an acceptable accuracy in the diagnosis of chronic hepatitis, but larger biopsy samples are mandated when cirrhosis is suspected.[ 50 ] The role of biopsy size was further quantified when it was ascertained that diagnostic accuracy depended on the number of complete portal tracts within the biopsy samples. Nevertheless, the number of complete portal tracts required for adequacy of disease differentiation is controversial, with different investigators advocating varying number of portal tracts, ranging from 6 to 11.[ 44 50 51 ]
Colloredo et al [ 51 ] evaluated the effect of core length and diameter on the grading and staging of chronic viral hepatitis. Similar to previous studies,[ 49 50 ] the methodology consisted of progressively reducing the length and width of the original samples, which were all at least 2.5-3 cm long. These studies provided robust evidence that both the length and the diameter of the biopsy core affect the grading and staging, and that examining shorter and thinner samples leads to an underestimation of disease severity. Disease activity and fibrosis were underestimated in thin biopsies (i.e., 1 mm wide) regardless of the length of the biopsy, suggesting that the main problem lies in the lower number of complete portal tracts in the smaller samples. The same authors[ 51 ] further demonstrated that 11-15 complete portal tracts was the critical number below which disease grade and stage were significantly underestimated, and that a liver biopsy 2 cm long and 1.4 mm wide guaranteed this number of portal tracts in 94% of cases. One recent study using computer-generated modeling estimated that a 2.5-cm biopsy sample yielded an error rate of 25% and that optimal results were obtained with specimens measuring 4 cm.[ 9 ] Thus, it is now clear that the four to six portal tracts requirement frequently used by pathologists in clinical practice as well as research protocols, is not sufficient for grading and staging.
In addition, in clinical practice few LB specimens reach the desired length. This also seems to be true in clinical research. A prospective French study revealed that even when performed by an experienced practitioner, about 84% of biopsy samples are smaller than 2 cm.[ 52 ] A recent systematic review of 32 studies incorporating 10,027 LB specimens by Cholongitas et al [ 53 ] reported that the mean±SD length and number of portal tracts were 17.7±5.8 mm and 7.5±3.4, respectively. In this review, comprising all documented series of percutaneous LB in the literature, the biopsy specimens had an average length and number of portal tracts well below the published minimum sample size requirements[ 9 51 ] in more than half the cases. Since multiple passes would be required to obtain a minimum specimen length of 2 cm, this may potentially increase the complication rate, which in turn depends on needle size and number of passes.[ 54 – 57 ] Rocken et al [ 58 ] demonstrated that irrespective of the method used, LB resulted in an insufficient sample size in a significant proportion of patients. The study showed that only 42% of LB samples obtained with a large 17-gauge needle contained 10 or more portal tracts. Therefore, a minimum requirement of 2 cm length for a routine LB specimen could be unrealistic and hazardous for the patient on one hand; on the other hand, the realization that inadequate samples are unreliable would make LB histopathologic examination irrelevant at best and dangerous at worst.
Studies have also shown that differences in grading and staging arise in the setting of different sites of biopsy, suggesting that a random sample may not necessarily reflect damage to the liver as a whole. In a study by Regev et al [ 4 ] 124 patients with CHC underwent LB of the right and left hepatic lobes during laparoscopy. The comparison between right and left lobes showed a 2-point difference (Scheuer) in grade in 1.6% and a 1-point discordance in 24.2%. As for the stage, discordance in fibrosis scores was observed in 33% of cases. In 2003, Siddique et al [ 59 ] reported a high variability in the samples amounting to 69% and 62% for activity and fibrosis, respectively. This study analyzed 29 paired biopsies using the Knodell histological activity index, where 69% showed discordance in grade ≥2, and 34.5% revealed a discordance ≥4; the difference in fibrosis score was ≥1 in 38% of cases and ≥2 in 21%. Thus, these findings emphasize that histologic findings may vary according to the site of LB amounting to under-or over-representation of the underlying grade and stage of disease when biopsies are obtained from one lobe only, as is the common practice.
Variability of histopathologic interpretation
Grading and staging of liver disease are essentially subjective. Several studies have evaluated the interobserver and intraobserver variability in the histologic and pathologic diagnosis of liver fibrosis based on biopsy specimens.[ 4 60 – 65 ] Staging scores for fibrosis such as the METAVIR, Ishak and Scheuer systems were created to standardize the evaluation of liver biopsies to minimize observer variation.[ 66 – 68 ] Although not as great as the errors attributed to sampling variability, interpreter errors may account for 15%–33% of variability[ 4 9 62 ] in staging of fibrosis, and 10% of grading of necroinflammation.[ 4 60 ] A recent systematic review evaluating observer variation in pathologic scoring systems of LB showed that the widely used Knodell scoring system had a less-than-optimal agreement for grading of liver disease.[ 53 ] While the published literature evaluating observer variation in LB interpretation is limited, its scope as a potential confounder to disease stratification is huge. In clinical practice, we frequently encounter the problem of inter-and intraobserver variation and believe that the published literature only represents a small percentage of actual occurrences.
Furthermore, diagnostic errors made by nonspecialist pathologists were reported in more than 25% of patients undergoing LB at academic centers.[ 69 70 ] Another study evaluated the rate of concordance between academic hepatopathologists and community pathologists and found that there was 50% interobserver agreement between the pathologists, whereas the community pathologist understaged fibrosis by 73% in patients with chronic hepatitis C virus.[ 71 ] This suggests that potentially treatable patients may not receive proper treatment. And, since it is unrealistic to expect the availability of such specialist pathologists in every center performing LB, we can only suspect that the accuracy of the test would be vastly compromised, especially in nonacademic centers.
Lastly, categorization of the extent of inflammation and fibrosis is complicated by the complexity of liver histology scoring systems. Different scoring systems, although describing the same histologic parameters, allocate distinctly different numerical scores. It is also not uncommon for different pathologists within the same institution to be familiar with different scoring systems. However, since these scores are not wholly interconvertible or superimposable, a clinician would potentially have to be acquainted with all scoring systems in order to properly interpret histology reports. It must also be noted that the Knodell and Ishak scoring systems,[ 66 72 ] along with a similar scoring system for steatohepatitis,[ 73 ] are not highly reproducible, being appropriate only for statistical analysis of large cohorts of patients in clinical trials.
NONINVASIVE ASSESSMENT OF LIVER HISTOLOGY
Ease of performing noninvasive assessment
Noninvasive tests are relatively easy to perform and, by extension, easily reproducible. This makes them ideally suited for liver histologic assessment [ Table 1 ]. Moreover, since the clinical course of chronic liver diseases depends significantly on the rate of progression and the extent of fibrosis, monitoring this course with periodic liver histologic assessments is imperative in the overall assessment of the disease.
Simple numeric scores or values representing an underlying disease process are intuitively more appealing than the more complex descriptive or semiquantitative scoring methods inherent to liver histology assessment. The commonly used noninvasive markers of liver disease utilize a combination of simple biochemical, hematological and demographic parameters. These include laboratory-based tests such as alpha-2-macroglobulin, total bilirubin, gamma-glutamyl transpeptidase (GGT), apolipoprotein A1, haptoglobin, ALT, AST and platelets, together with age, sex and weight. A composite of various tests calculated according to a patented formula available online, or simple ratios between different parameters, offers easily readable mathematical scores that help distinguish between different levels of histologic disease.[ 74 ] Similarly, transient elastography renders simple numerical values to distinguish between different stages of fibrosis. For instance, recommended cut-off values for F2, F3 and F4 fibrosis in CHB are 7.2 kilopascals (kPa) (positive predictive value [PPV]=80%, negative predictive value [NPV]=73%), 8.1–8.4 kPa (PPV=65%–77%, NPV=84%–95%) and 9–11 kPa (PPV=38%–57%, NPV=98%–99%), respectively.[ 75 ] Values less than 7 kPa suggest absent or minimal fibrosis.[ 76 ]
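For illustration only, the CHB cut-offs quoted above can be turned into a simple staging helper. The function name, the category labels, and the choice of a single threshold within each quoted range are our own assumptions, not part of any validated tool; this is a sketch, not for clinical use.

```python
def suggest_fibrosis_stage(stiffness_kpa: float) -> str:
    """Map a transient elastography reading (kPa) to a suggested
    fibrosis category using the CHB cut-offs quoted in the text
    (F2: 7.2 kPa; F3: 8.1-8.4 kPa; F4: 9-11 kPa; <7 kPa minimal).
    Where the source quotes a range, the lower bound is used here
    purely for illustration."""
    if stiffness_kpa >= 9.0:    # lower bound of the quoted F4 range
        return ">=F4 (cirrhosis suggested)"
    if stiffness_kpa >= 8.1:    # lower bound of the quoted F3 range
        return ">=F3 (severe fibrosis suggested)"
    if stiffness_kpa >= 7.2:    # quoted F2 cut-off
        return ">=F2 (significant fibrosis suggested)"
    if stiffness_kpa < 7.0:     # quoted threshold for minimal fibrosis
        return "F0-F1 (absent or minimal fibrosis)"
    return "indeterminate (7.0-7.2 kPa)"
```

Note the 7.0–7.2 kPa gap left deliberately indeterminate: the source quotes 7.2 kPa as the F2 cut-off but states only that values below 7 kPa suggest minimal fibrosis.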
Training clinicians to use transient elastography (FibroScan) requires only a simple schedule extending over a few hours. The ultrasonography-based machine utilizes liver stiffness measurements (10 shots), each obtained over a duration of a few seconds. After rapid training, with a minimum experience of 50 shots recommended, FibroScan provides reasonable performance for the diagnosis of fibrosis that is not influenced substantially by any other feature.[ 77 ] These results emphasize that FibroScan may be used even in nonspecialized units.
Accuracy of noninvasive markers
Initially, simple noninvasive indexes, such as the AST/ALT ratio, platelet count, age-platelet index and APRI, were evaluated and found to have moderate diagnostic accuracy for the prediction of significant fibrosis or cirrhosis.[ 14 78 79 ] APRI, the most accurate of these simple indexes, was reported to provide a moderate to high degree of accuracy (55%-80% agreement with liver biopsy) in identifying the presence of significant fibrosis and cirrhosis in patients with chronic hepatitis C or B.[ 79 80 ]
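These simple indexes are direct arithmetic on routine laboratory values. As a sketch (the function names are illustrative), APRI is the AST level expressed as a multiple of its upper limit of normal, multiplied by 100 and divided by the platelet count in 10^9/L:

```python
def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_10e9_l: float) -> float:
    """AST-to-platelet ratio index:
    (AST / upper limit of normal) x 100 / platelet count (10^9/L)."""
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_10e9_l

def ast_alt_ratio(ast_iu_l: float, alt_iu_l: float) -> float:
    """Simple AST/ALT ratio, one of the earliest indexes mentioned above."""
    return ast_iu_l / alt_iu_l

# Illustrative values: AST 80 IU/L with an assumed ULN of 40 IU/L and
# platelets 100 x 10^9/L gives an APRI of 2.0.
print(apri(80, 40, 100))          # 2.0
print(ast_alt_ratio(60, 120))     # 0.5
```

The interpretation thresholds for these scores vary between studies, so none are hard-coded here.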
More recently, the next generation of noninvasive markers was developed from multivariate analysis models. These evolved from the basic premise that the markers had to be simple, practical and reasonably accurate in predicting liver fibrosis (85%-95% agreement with liver biopsy).[ 52 81 – 83 ] A list of the common noninvasive markers of liver histological assessment is shown in [ Table 2 ]. Among these, FibroTest is the most widely tested index, and has been validated in several groups of patients with CHB or CHC.[ 52 81 – 83 ] In addition, FibroTest has also been shown to predict the severity of necroinflammation (ActiTest) with the addition of aminotransferase levels.[ 52 82 83 ] Transient elastography, or FibroScan (Echosens, Paris, France), has shown 85%-90% agreement with liver biopsy for the prediction of significant fibrosis or cirrhosis.[ 84 ] In fact, the combined application of FibroScan and FibroTest was suggested to offer the best performance for the assessment of fibrosis in CHC patients, with areas under the receiver operating characteristic (ROC) curve of 0.88 for ≥F2, 0.95 for ≥F3 and 0.95 for F4.[ 84 ]
In a recent systematic review of eight CHC studies incorporating 1503 subjects assessing FibroTest, the sensitivity, specificity and area under the summary ROC curve were reported as 47%, 80% and 0.81, respectively, for significant fibrosis (F2–4).[ 85 ] The same review evaluated four studies (504 subjects) reporting the utility of FibroScan, and reported the sensitivity, specificity and area under the summary ROC curve as 64%, 87% and 0.83, respectively, for fibrosis (F2–4). Similarly, a recent study in CHB patients calculated the area under the ROC curve for three different fibrosis stage thresholds (in relation to F0-1).[ 86 ] The reported areas under the ROC curve for ≥F2, ≥F3 and F4 fibrosis were 0.81, 0.93 and 0.93, respectively. Halfon et al [ 87 ] showed that FibroTest–ActiTest can distinguish between little or no fibrosis (F0–1) and bridging fibrosis (≥F2) with a specificity of 72%. More significantly, in a prospective study, Poynard et al [ 88 ] estimated that 18% of discordances between FibroTest–ActiTest and histology were attributable to biopsy failure and just 2% to test failure. Thus, these studies demonstrate that FibroTest–ActiTest and FibroScan have excellent utility for the identification of CHC- and CHB-related minimal and advanced fibrosis.
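To make the quoted figures concrete: sensitivity and specificity are simple ratios from a two-by-two table. The patient counts below are hypothetical, chosen only to reproduce the FibroTest profile quoted above (47% sensitivity, 80% specificity for F2–4):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of patients with disease whom the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of patients without disease whom the test clears."""
    return tn / (tn + fp)

# Hypothetical cohort: 100 patients with significant fibrosis (F2-4)
# and 100 without, tested with a FibroTest-like marker.
tp, fn = 47, 53   # of the 100 with F2-4, 47 test positive
tn, fp = 80, 20   # of the 100 without F2-4, 80 test negative

print(sensitivity(tp, fn))   # 0.47
print(specificity(tn, fp))   # 0.8
```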
In addition to demonstrating accuracy in viral hepatitis, noninvasive markers have also been validated in patients with alcoholic and nonalcoholic fatty liver disease (NAFLD). Poynard et al [ 89 ] demonstrated the utility of SteatoTest/NashTest, a biomarker combining FibroTest–ActiTest with body mass index, cholesterol, triglycerides and glucose, in subjects with NAFLD, showing excellent diagnostic accuracy. Likewise, FibroScan has been validated for biliary fibrosis in patients with cholestatic liver diseases.[ 90 91 ] Thus, a wide variety of liver diseases have been assessed with noninvasive markers and adequately validated.
Cost-effectiveness of the procedure
In a French survey, general anesthesia is reported to be used in 11% of LB cases, benzodiazepine in 16% and atropine with benzodiazepine in 15%.[ 57 ] In the United States, 54% of gastroenterologists/hepatologists and 96% of radiologists use conscious sedation.[ 92 ] LB requires admission to the hospital, and the administration of conscious sedation requires a high level of hemodynamic monitoring and skilled nursing staff for safe post-biopsy care. Another survey of 260 randomly selected members of the American Association for the Study of Liver Diseases (AASLD) showed that 62% utilized an ultrasonographer to mark the biopsy site, while 18% had the biopsy performed by the radiologist with real-time ultrasound guidance.[ 93 ] Because of the monitoring, processing and interpretation required, the cost of percutaneous LB is significant. An LB at most hospitals in the United States costs approximately $2200,[ 94 ] while in Britain the average cost for an inpatient biopsy is $1000[ 95 ] and in Australia $1032.[ 96 ] This cost does not include the additional expenses of hospitalization and treatment for patients who develop complications of the procedure. The cost of the noninvasive markers FibroTest–ActiTest and FibroScan, although variable, is vastly lower than that of LB, amounting to an estimated $150-450 per test.[ 97 ]
(Saudi J Gastroenterol. 2010 Apr;16(2):124-132)
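Using the figures quoted above, a back-of-the-envelope comparison shows how many noninvasive tests the price of a single US biopsy would cover (hospitalization and complication-related costs excluded; the variable names are ours):

```python
BIOPSY_COST_USD = 2200            # typical US hospital figure quoted above
MARKER_COST_USD = (150, 450)      # quoted per-test range for FibroTest/FibroScan

# Number of noninvasive tests affordable for the price of one biopsy.
tests_per_biopsy = [BIOPSY_COST_USD / cost for cost in MARKER_COST_USD]

print(round(tests_per_biopsy[0], 1))   # 14.7 tests at $150 per test
print(round(tests_per_biopsy[1], 1))   # 4.9 tests at $450 per test
```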
Liver biopsy (LB) is the gold standard method for assessment of liver histology. It provides valuable, otherwise unobtainable, information regarding the degree of fibrosis, parenchymal integrity, degree and pattern of inflammation, bile duct status and deposition of materials and minerals in the liver. This information provides immense help in the diagnosis and prognostication of a variety of liver diseases. With careful selection of patients and appropriate performance of the procedure, complications are exceptionally rare in current clinical practice. Furthermore, the limitations of sampling error and inter-/intra-observer variability may be avoided by obtaining an adequate tissue specimen and having it reviewed by an experienced liver pathologist. Current noninvasive tools cannot replace LB in clinical practice, given the specific limitations of each tool, compounded by poorer performance in assessing the degree of liver fibrosis, particularly for intermediate stages.

Most liver diseases are usually silent except at the extremes of presentation, i.e. the clinical picture of acute hepatitis or liver failure; in between lies a large spectrum of relatively silent chronic hepatitis due to different etiologies and pathophysiological processes.
Laboratory and radiological investigations may help to distinguish these categories; however, in many situations performing liver biopsy (LB) is essential. The value of LB is not merely to determine the degree of fibrosis; rather, it draws a detailed map of many important histological findings, such as the degree of inflammation, the nature and distribution of inflammatory cells, the status of bile ducts and vasculature, the presence of steatosis, and the deposition and infiltration of the liver with different materials such as iron and copper.
Undoubtedly, this otherwise unobtainable information regarding the structural integrity of liver parenchyma, degree and type of injury and the host response, has a clear impact on the diagnosis, prognosis and response to treatment. Thus, LB has, for decades, been considered as the gold standard method for assessing liver histology.
Since the first LB performed by Paul Ehrlich in Germany in 1883,[ 1 ] this technique has proved to be a revolution in the field of hepatology. As a time-honored procedure, it has rendered landmark developments and a comprehensive understanding of various aspects of liver pathology.
LB is performed via a percutaneous approach in most clinical situations; however, transjugular or laparoscopic approaches are still used in specific situations. It is usually done in an outpatient setting, requiring a few hours of post-procedure observation, and on most occasions the patient can go back to work the next day.[ 2 ] Over the decades, the record of LB provides evidence of simplicity and safety; however, as with many clinical procedures, it is not without rare complications and limitations.
For these reasons, there has been a recent interest in developing alternative methods to study the liver histology. These methods fall broadly into two categories, either imaging techniques or serum markers. However, despite the huge resources invested in the active development and refinement of these techniques, these tests are only able to offer discrimination for the extremes of fibrosis range, with a negligible ability to provide information on other details of the pathological process.
The argument of the supporters of replacing LB with noninvasive fibrosis markers rests mainly on a few points: concerns regarding the safety of LB, and possible limitations such as sampling and interpreter variation. On the other hand, the evolving data on noninvasive markers has been accepted with an overly optimistic approach, largely lacking the rigorous criticism reserved for new techniques and methods. We address these issues herein in order to ascertain the validity of these arguments in the context of evolving evidence.
ROLE OF LIVER BIOPSY
Over many decades, LB played a crucial role as a diagnostic tool for various liver diseases. Subsequently, with a better understanding of the natural history of many liver diseases and the availability of more treatment options, this role has expanded whereby LB gives valuable information for utilization in treatment decisions, and prognostication of a wide variety of liver diseases.
Liver biopsy in diagnosis
When LB is used for diagnostic purposes, it is usually considered in conjunction with other clinical and laboratory data. Many liver diseases are diagnosed based on biochemical, serologic and sometimes genetic testing. However, some patients with conflicting or overlapping test results may still need LB for definitive diagnosis. Non-alcoholic fatty liver disease, autoimmune and cholestatic disorders, infiltrative or storage diseases, drug-induced liver injury, and some infectious, vascular and granulomatous diseases may have characteristic histological features that are helpful in diagnosis.[ 3 ] Diagnosis of overlap syndrome of primary biliary cirrhosis (PBC) and autoimmune hepatitis (AIH) requires examination of liver histology.[ 4 ] Furthermore, in daily clinical practice, the need for LB is clearer in situations where the possibility of co-existing disorders such as steatosis with hepatitis C (HCV) and hepatitis B virus (HBV) are present. The likelihood of such co-existent disorders cannot be underestimated with the current epidemic of obesity in many parts of the world.[ 5 – 7 ]
In a post liver transplant setting, an abnormal liver test is a frequent clinical scenario, and identifying the underlying cause is essential to the decision-making process where treatment options are being considered. Allograft rejection, drug-induced injury, bile duct or vascular injury and the recurrence of the original disease are some examples of such scenarios. Although the timing and pattern of liver test abnormalities in addition to modification of immunosuppressive regimen may help in managing some patients in this context, LB is frequently needed to resolve the ambiguity and guide further management.[ 8 – 11 ]
Unexplained abnormal liver enzymes
LB is a valuable diagnostic tool in patients with chronic (>6 months) unexplained abnormal liver tests in the absence of diagnostic serology. In a study of 354 patients with abnormal liver tests and absence of diagnostic serology, 6% had a normal LB, while 26% were found to have some degree of fibrosis and another 6% were cirrhotic. Thirty-four percent and 32% of biopsies suggested non-alcoholic steatohepatitis (NASH) or fatty liver, respectively. Other diagnoses included cryptogenic hepatitis, drug toxicity, primary and secondary biliary cirrhosis, AIH, alcohol-related liver disease, primary sclerosing cholangitis, haemochromatosis, amyloid and glycogen storage disease. Management was directly altered because of LB in 18% of patients, and three families were entered into screening programs for inheritable liver disease.[ 12 ]
In another study of 365 patients, 411 diagnoses were made before biopsy; 84.4% were confirmed by biopsy, while in 8.8%, 6.8% and 10.5% the diagnosis was specified, changed, or added, respectively. In this study the authors found that LB led to a change in management for 12.1% of patients.[ 13 ]
Finally, LB can help in understanding the etiology of cryptogenic cirrhosis. This category, i.e. cirrhosis of unknown etiology, is found in 3-30% of patients with cirrhosis.[ 14 15 ] NASH is considered the commonest cause of cryptogenic cirrhosis.[ 16 – 18 ] Other possible causes are silent or “burnt out” AIH, occult viral infection and covert alcoholism. The so-called residual histological findings such as foci of autoimmune-like inflammatory infiltrates versus NASH-like foci of steatosis, cellular ballooning, and glycogenated nuclei may help in defining the underlying cause of cryptogenic cirrhosis.[ 19 20 ]
Liver biopsy to assess severity of liver disease and when to initiate therapy
Hepatitis B: LB is not mandatory in typical HBV cases meeting treatment criteria; however, because of the absence of curative therapy and the possible commitment of the patient to long-term therapy, it is generally advisable to assess liver histology before starting treatment, both to support the treatment criteria by determining the degree of inflammation and fibrosis and to rule out the co-existence of other conditions contributing to high transaminases. More importantly, LB has a strong influence on treatment decisions in HBV cases with persistently borderline, normal or slightly elevated alanine aminotransferase (ALT) levels, particularly if the patient is above the age of 40 with raised or fluctuating HBV DNA.[ 21 22 ]
Hepatitis C: Treatment can be initiated for HCV patients without LB; however, in addition to the important information obtained by LB regarding the degree of inflammation and fibrosis, examination of liver histology gives valuable information about two common non-HCV conditions which in turn may affect disease progression and response to treatment, namely steatosis and excess hepatocellular iron deposition.[ 2 23 ]
The American Association for the Study of Liver Diseases (AASLD) guidelines state that “a liver biopsy should be considered in patients with chronic hepatitis C infection if the patient and health care provider wish information regarding fibrosis stage for prognostic purposes or to make a decision regarding treatment”.[ 24 ] Since the evidence is still evolving for noninvasive tests, these guidelines suggest that currently available noninvasive tests should not replace LB in routine clinical practice.
Safety of liver biopsy
Data on LB complications is heterogeneous with wide variation on reported rate of complications. However, experience gathered over decades from many centers, has shown a very low rate of complications [ Table 1 ].[ 25 – 34 ]
Pain is the commonest complication and rarely requires analgesia or readmission.[ 35 36 ] Bleeding is the most serious complication and rarely requires intervention or causes death. Most of the associated mortality cases are reported in patients with malignancy or advanced cirrhosis.[ 26 32 ]
Complications are related to the experience of the operator and the selection of patients; however, the type of needle used appears to have no effect on the complication rate.[ 25 37 ] Some evidence shows no difference in complication rates between biopsies performed in community practice and in academic institutes.[ 32 ] In addition, it is unclear whether the routine use of ultrasound, suggested by some investigators to reduce the rate of complications,[ 38 ] is cost-effective, since the added benefit must be weighed against the added cost of ultrasonographic guidance.[ 39 ]
Therefore, the current evidence indicates that LB is a safe outpatient procedure, provided that patients are selected carefully, the procedure is performed properly, and post-procedure monitoring is adequate.
Potential sampling errors and observer variations
Since LB involves a small part of the whole liver organ, there is a risk that this might not be representative of the whole liver.[ 40 ] This risk is partly theoretical, since the inflammation and fibrosis that occurs during the course of the disease is usually diffuse and homogeneous in most of the liver diseases. Furthermore, extensive literature accumulation has shown that increasing the length of LB decreases the risk of sampling error.[ 41 – 43 ]
Another potential limitation of LB is observer variation, i.e. discordance between pathologists in biopsy interpretation.[ 44 ] In addition to small biopsy size, several factors can contribute to variation in the interpretation of LB, although the level of experience (specialization, duration, and location of practice) appears to have more influence on agreement than the characteristics of the specimen (length, number of fibrosis classes). Hence, training and specialization of pathologists are of major importance for reducing observer variation.[ 45 ] Furthermore, the current use of histological scoring systems for the evaluation of fibrosis and necro-inflammation has limited this drawback.[ 46 47 ] Thus, although LB has its limitations, appropriate precautions may reduce the flaws inherent in this method.
LIMITATIONS OF AVAILABLE NONINVASIVE TESTS
Great efforts and strides are being made in the development of accurate noninvasive methods for the determination of fibrosis.[ 48 ] However, no single noninvasive test developed to date can provide information to match that obtained from actual histology (such as inflammation, fibrosis and steatosis).
Efforts to increase the yield of noninvasive models by combining two models of noninvasive markers have led to some increase in the accuracy of distinguishing minimal from significant fibrosis; however, accuracy in estimating the intermediate stages of fibrosis remains weak.[ 49 ]
To see whether these methods are ready to replace LB, we discuss herein two examples of noninvasive methods that have been extensively studied in different liver disease populations and have a generally better reputation and popularity than others, i.e. FibroTest and FibroScan.
FibroTest
FibroTest (FT) is a mathematical score derived from a group of serum markers, including alpha-2-macroglobulin, gamma-glutamyl-transpeptidase, haptoglobin, apolipoprotein-A1 and total bilirubin, in addition to patient age and gender, using a patented algorithm.[ 50 ] This score has been shown to have a higher diagnostic area under the receiver operating characteristic (AUROC) curve than other biochemical markers, including hyaluronic acid (HA), the Forn's index, and the AST/platelet ratio index (APRI).[ 51 ] A study by a French group showed AUROCs for significant fibrosis (F2-F4) and severe fibrosis (F3-F4) of 0.79 [0.75-0.82] and 0.80 [0.76-0.83], respectively.[ 52 ] The same conclusion was not reached by an Australian group. In their study, Rossi et al found that 33 of 125 patients had FT scores <0.1 and were therefore deemed unlikely to have fibrosis, but 6 (18%) of these had significant fibrosis on histology. Conversely, of the 24 patients with scores >0.6 who were likely to have significant fibrosis, 5 (21%) had mild fibrosis on histology.[ 53 ]
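The thresholds used in the Rossi et al study, and the quoted misclassification rates, can be expressed in a short sketch. The interpretation labels and function name are ours; the counts are those reported in the text:

```python
def interpret_fibrotest(score: float) -> str:
    """Thresholds as used in the Rossi et al study quoted above:
    <0.1 deemed unlikely to have fibrosis, >0.6 likely significant."""
    if score < 0.1:
        return "fibrosis unlikely"
    if score > 0.6:
        return "significant fibrosis likely"
    return "indeterminate"

# Misclassification rates reported by Rossi et al:
false_reassurance = 6 / 33   # significant fibrosis on histology despite FT < 0.1
false_alarm = 5 / 24         # mild fibrosis on histology despite FT > 0.6

print(round(false_reassurance * 100))   # 18 (%)
print(round(false_alarm * 100))         # 21 (%)
```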
FT has been assessed mainly in HCV, with recent reports in other liver diseases including HBV, NAFLD and ALD. In a meta-analysis of 38 diagnostic studies pooling 7985 subjects who had undergone both FT and biopsy (4600 HCV, 1580 HBV, 267 NAFLD, 524 ALD and 1014 mixed), the mean standardized AUROC for the diagnosis of bridging fibrosis (F2/F3/F4 vs. F0/F1) was 0.84 (95% CI, 0.83-0.86), with no differences between causes of liver disease: HCV 0.84 (0.82-0.87); HBV 0.81 (0.78-0.83); NAFLD 0.84 (0.76-0.92); ALD 0.87 (0.82-0.92); and mixed 0.85 (0.81-0.89).[ 54 ]
Inter-laboratory variations that lie within the quality-controlled, analytically acceptable range may still impact the FT result and can lead to significant discordance between histology and the FT score.[ 55 ] In addition, FT has other limitations in several medical conditions, such as Gilbert's syndrome, hemolysis, renal failure, inflammatory conditions and biliary obstruction.[ 56 ] More independent validation of FT in hepatitis C and other less-studied diseases is required, since few studies other than those of the pioneering French group have assessed FT. FT has no up-front cost but carries a recurring cost for each use, so the test may prove costly if utilized for follow-up and monitoring. Finally, there is a significant delay in reporting the results of the biochemical tests that form part of the mathematical model of FT.
Concerns regarding serum markers
It is worthwhile to consider some of the concerns pertaining to the adoption of FT and other serum markers in routine clinical practice for the assessment of liver fibrosis instead of LB.[ 57 58 ] Firstly, none of these serum markers is liver specific, and they reflect the inflammatory process rather than fibrosis itself.
Secondly, these markers predict the extremes of fibrosis stage relatively well, which may sometimes help in the treatment decision, but they perform poorly in correctly classifying the intermediate stages. This will limit their use in longitudinal studies of liver fibrosis progression, especially if their results are accepted for treatment decisions.
Additionally, these markers need to be validated in different clinical settings. The rationale is that the predictive value of any test is affected by the prevalence of the disease. The clinical utility of these markers is therefore critically affected by the prevalence of fibrosis in the population being investigated, and almost all published studies have been performed in a tertiary-care setting.
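This dependence on prevalence follows directly from Bayes' rule. A minimal sketch: the sensitivity and specificity are the figures quoted elsewhere in this review for FibroScan (64% and 87%), while the two prevalence values are hypothetical, chosen only to contrast a tertiary-care population with a screening population:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive) = TP rate / (TP rate + FP rate)."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# The same test performs very differently depending on how common
# significant fibrosis is in the population tested:
print(round(ppv(0.64, 0.87, 0.60), 2))   # 0.88 at a tertiary-care-like prevalence
print(round(ppv(0.64, 0.87, 0.10), 2))   # 0.35 at a screening-like prevalence
```

A positive result is therefore far less informative in a low-prevalence setting, which is why validation outside tertiary care matters.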
Moreover, it is likely that the result of these markers, whether used individually or in combination, will be affected by the difference in the assays used. At present, the recommendation is that the methods used to measure serum markers should be identical to those reported in the original publication. However, as these tests become routinely available and regularly performed in real-life practice, it is possible that these aspects will be brought into greater focus.
Finally, a major shortcoming of the existing literature is the failure to demonstrate the cost-effectiveness of these measures. This is vital since erroneous results may lead to unnecessary treatment of patients with mild disease or withholding of treatment from those with advanced disease who may go on to develop costly complications. Moreover, costs are likely to be inflated due to the tendency of physicians to perform repeated assessments using these noninvasive measures (e.g., up to every 6-12 months as recommended by the FT developers).
FibroScan
FibroScan (FS), or transient elastography, is a novel technique for the measurement of liver stiffness. This rapid, noninvasive technique utilizes low-frequency vibration and ultrasound to assess the stiffness of liver tissue. FS predicts cirrhosis well; however, performance is lower in mild to intermediate stages of fibrosis.[ 59 ]
A recent systematic review evaluated the accuracy of FT and FS in HCV patients. The AUROCs for the prediction of significant fibrosis (stages 2-4) for FT and FS were 0.81 (95% confidence interval [CI]: 0.78-0.84) and 0.83 (95% CI: 0.03-1.00), respectively. The sensitivity and specificity of FT at a threshold of 0.60 were 47% (35-59%) and 90% (87-92%). For FS (threshold approximately 8 kPa), the corresponding values were 64% (50-76%) and 87% (80-91%), respectively. Better results were obtained for the prediction of cirrhosis, with AUROCs for FT and FS of 0.90 (95% CI not calculable) and 0.95 (0.87-0.99), respectively.[ 60 ] Discordance of at least two stages between transient elastography and histological assessment was observed in 28 (11%) of 254 consecutive patients with a liver biopsy of at least 15 mm; in a multivariate analysis, fibrosis stage (F0-F2 versus F3-F4) and the ratio of interquartile range to median value of the liver stiffness measurement (IQR/M) were associated with discordances ( P <0.05).[ 61 ]
The cause-specific cut-off values need to be ascertained, since the best predicting cut-off values vary according to the etiology. For instance, the cut-off value for the diagnosis of HCV cirrhosis is 12.5 kPa, 19 to 21.5 kPa in alcoholic cirrhosis, 17.5 kPa in NASH cirrhosis and 17.3 kPa for cirrhosis secondary to primary sclerosing cholangitis or primary biliary cirrhosis.[ 62 ]
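The etiology-specific cut-offs above can be summarized as a simple lookup table. Where the source quotes a range (alcoholic cirrhosis, 19-21.5 kPa), the lower bound is used here purely for illustration; this is a sketch, not a validated decision rule:

```python
# FibroScan cirrhosis cut-offs (kPa) quoted in the text, by etiology.
CIRRHOSIS_CUTOFF_KPA = {
    "HCV": 12.5,
    "alcoholic": 19.0,   # lower bound of the quoted 19-21.5 kPa range
    "NASH": 17.5,
    "PSC/PBC": 17.3,
}

def suggests_cirrhosis(etiology: str, stiffness_kpa: float) -> bool:
    """Return True if the reading reaches the quoted cut-off for the etiology."""
    return stiffness_kpa >= CIRRHOSIS_CUTOFF_KPA[etiology]

# The same reading means different things depending on etiology:
print(suggests_cirrhosis("HCV", 13.0))        # True
print(suggests_cirrhosis("alcoholic", 13.0))  # False
```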
FS has some limitations in addition to its poorer performance in patients with mild to moderate fibrosis. The technique cannot be used in patients with ascites because of the poor propagation of elastic shear waves through liquid. It also performs poorly in morbidly obese patients, since adipose tissue attenuates both shear waves and ultrasound waves. Other conditions such as steatosis, cholestasis and liver congestion due to heart failure may affect the accuracy of the result.[ 59 63 ]
Failure rates range between 2.4% and 9.4%,[ 64 65 ] mainly in obese patients and in those with narrow intercostal spaces. In multivariate analysis, the only factor associated with failure of FS was a body mass index >28 kg/m 2 (odds ratio 10.0; 95% CI: 5.7-17.9, P =0.001).[ 66 ]
FS frequently yields pathologically high values in patients with acute liver damage, or at the time of an ALT flare, and is unsuitable for detecting cirrhosis/fibrosis in this scenario. In 15 of 20 patients with acute liver damage mostly due to HBV and drugs, with serum ALT activities ranging from 151 to 5382 U/L (mean: 1355±1217 U/L), initial liver stiffness values measured by FS during the acute phase of the liver damage were suggestive of liver cirrhosis. However, none of these 15 patients showed any signs of liver cirrhosis in the physical examination, ultrasound examination, or liver histology (performed in 11 of 15 [73%] patients). Six patients with initially high liver stiffness were followed up to abatement of the acute hepatitic phase; in all of them, liver stiffness values decreased to values below the cut-off level for liver cirrhosis.[ 67 ]
Furthermore, the position of the probe may affect the accuracy and reproducibility of this technique. Among 268 patients measured at both the anterior and the reference positions, the FS measurement at the reference position (9.0 kPa [0.5]) was significantly higher than at the anterior position (8.5 kPa [0.5]; P <0.0001).[ 68 ]
Reproducibility of FS is generally good; however, interobserver agreement is significantly reduced in some situations, such as lower degrees of hepatic fibrosis (intraclass correlation coefficient [ICC] for F0-F1 0.60 versus 0.99 for F≥2), hepatic steatosis (ICC for steatosis ≥25% of hepatocytes 0.90 versus 0.98 for <25%) and increased BMI (ICC for BMI ≥25 kg/m 2 0.94 versus 0.98 for <25 kg/m 2 ).[ 64 ] Although there is no recurring cost for the FS machine, the upfront cost is relatively high, which limits its widespread use across secondary and tertiary care centers.
In conclusion, considering the value and safety of LB, and the current limitations of noninvasive tests, LB will remain for the foreseeable future the cornerstone and gold standard test in the assessment of liver fibrosis and histology. While great efforts and advances have been made to improve noninvasive markers, major steps remain before these tests can replace LB in both the information obtained and diagnostic accuracy.
The patient, a 20-year-old obese male, complained of epigastric pain of two years duration with no associated nausea or vomiting. He underwent upper gastrointestinal endoscopy in a local hospital and received treatment for H. pylori infection, with no significant improvement. He was referred to King Faisal Specialist Hospital and Research Centre (KFSHRC) for further evaluation. Laboratory investigations showed: WBCs: 7.3×10 9 /L. Hb: 133 g/L. MCV: 69.6 fL. Platelets: 500×10 9 /L. ESR: 45 mm/hour. CRP: 10.8 mg/L. Urea: 4.0 mmol/L. Creatinine: 72 umol/L. K + : 3.9 mmol/L. Na + : 140 mmol/L. Cl - : 103 mmol/L. CO 2 : 22 mmol/L. Albumin: 44 g/L. Bilirubin: 8 umol/L. LD: 177 U/L. ALT: 108 U/L. AST: 61 U/L. ALP: 64 U/L. GGT: 161 IU/L. Negative antinuclear antibody screen as well as celiac disease serology.
Follow-up endoscopy after six months showed normal esophageal mucosa, nodular congested gastric mucosa and nodular duodenal mucosa with variable-sized polypoid lesions [Figures 1a and b ], which were biopsied and sent for pathological examination [Figure 2 a – c ].
QUESTIONS
Q1. What is the diagnosis?
Q2. What are the histopathological findings of duodenal polypoidal lesions?
Q3. What other sites can be affected by this pathology?
Q4. What is the clinical significance of this abnormality?
ANSWERS
A1. The diagnosis: Reactive follicular lymphoid hyperplasia (FLH).
A2. Histopathological examination of biopsies from the polypoid lesions in the first part of the duodenum showed focal villous atrophy with prominent reactive FLH in the lamina propria with flattened overlying mucosa, and no evidence of intraepithelial lymphocytes, dysplasia, or neoplasia [Figures 2a – c ]. Gastric antral biopsy showed moderate chronic active gastritis, and Giemsa stain showed the presence of H. pylori organisms.
A3. FLH can develop wherever lymphoid tissue is present. The most commonly reported sites of FLH include the hard palate and oral cavity, the entire gastrointestinal tract, nasopharynx, larynx, bronchi, parotid gland, breasts, skin, spleen, peripheral nerves, and the thymus gland.
A4. FLH is an uncommon benign proliferation of lymphoid follicles, a poorly understood entity, which may be confused clinically and histologically with malignant lymphoma. It has been alternatively named benign lymphoid hyperplasia, reactive lymphoid hyperplasia, and pseudolymphoma. The cause of FLH is unknown, but it may be associated with common variable hypogammaglobulinemia, primary immunodeficiency states and Epstein-Barr virus infection. The course of FLH in children is benign, but the outcome in adults is controversial. Histopathologic examination, immunohistochemical analysis and molecular studies are essential to achieve an accurate diagnosis and to implement appropriate management. The patient under study was given H. pylori eradication therapy, and reassured.

Saudi J Gastroenterol. 2010 Apr;16(2):140-141. License: CC BY.
PMC3016507 (PMID 20339172)

Pylorus preserving pancreaticoduodenectomy (PPPD) was first popularized by Traverso and Longmire in 1978.[ 1 ] Proponents of this technique believe that it decreases the incidence of post-gastrectomy complications with overall improvements in long-term gastrointestinal function, when compared to classic pancreaticoduodenectomy (PD). Pylorus preserving pancreaticoduodenectomy was initially advocated as an alternative to PD in the setting of chronic pancreatitis, and later utilized for treatment of peri-ampullary malignancy. Although the technique is widely adopted, pylorus preservation in the setting of cancer remains controversial.
Several studies and reviews have examined outcomes of PPPD compared to classic PD, particularly relating to delayed gastric emptying, oncologic safety, and morbidity and mortality.[ 2 ] The major conclusions are that PPPD reduces operating time, results in lower blood loss and reduces the need for blood transfusions, with no overall differences in perioperative morbidity or mortality, tumor recurrence, or long-term survival. There is, however, no overall consensus that PPPD is a better technique than classic PD.
Some argue that PPPD increases postoperative morbidity. Warshaw et al. were the first to associate delayed gastric emptying with PPPD.[ 3 ] One randomized trial of 33 patients found 43% delayed gastric emptying after PPPD compared to zero cases after classic PD resection ( P < 0.05).[ 4 ] However, the reverse has also been shown in a randomized trial of PPPD versus radical PD (including antrectomy and extended lymph node dissection), with a 6% delayed gastric emptying rate compared to 16% ( P = 0.006).[ 5 ] Many series indicate no difference in delayed gastric emptying between PPPD and standard PD.[ 6 7 ] Based on reviews and meta-analyses, there does appear to be at least a trend toward increased delayed gastric emptying associated with PPPD.
Although the oncologic adequacy of PPPD has been a topic of concern, there has been no study that shows reduced mortality or early tumor recurrence following PPPD compared to PD.[ 2 8 ] Classic PD must however be considered the operation of choice in cases where there is tumor involvement of the first part of the duodenum or distal stomach. Some also advocate complete removal of the duodenum in cases of periampullary malignancy associated with hereditary syndromes such as familial polyposis coli. In such cases there is increased risk of malignant transformation within any duodenal remnant, due to a genetic field change throughout the duodenum.
The results of reviews and meta-analyses of studies on PPPD need to be interpreted with caution. A clear superiority of one technique over the other has not been demonstrated. Trials so far suffer from relevant clinical heterogeneity, small sample sizes, and a lack of clearly defined outcome definitions. Reviews such as the one presented in this edition of the journal[ 9 ] reinforce the need for well-designed, multicenter, international trials to clearly determine whether there is a difference in mortality, morbidity and long-term survival between PPPD and classic PD. What can be determined from studies so far is that there are no obvious clinically relevant differences between the techniques, but PPPD appears somewhat faster and causes less blood loss compared to classic PD.

Saudi J Gastroenterol. 2010 Apr;16(2):65. License: CC BY.
PMC3016508 (PMID 20339173)

CONCLUSION

PIDs have a variety of manifestations, some of which involve the GI and hepatobiliary systems. Infectious and noninfectious GI and hepatic complications pose a high risk of morbidity in patients with PIDs. Health care professionals specialized in gastroenterology are not usually involved in the presentation and diagnosis of patients with PIDs. Untrained clinicians may treat these complications only at the level of their presentation, leaving the PIDs dangerously undiagnosed. Early diagnosis of PIDs and the accompanying GI and hepatic complications clearly improves the quality of life for affected patients and allows for appropriate treatments. Multicenter large clinical studies are needed to evaluate the nature of GI and hepatic manifestations in these rare PIDs. Likewise, further studies are needed to evaluate the preventive and therapeutic modalities of PID-associated GI manifestations.

ABSTRACT

Primary immune deficiency diseases (PIDs) are a heterogeneous group of inherited diseases characterized by variable genetic immune defects, conferring susceptibility to recurrent infections. They have a vast array of manifestations, some of which involve the gastrointestinal and hepatobiliary systems. These complications can be the consequence of five different factors, namely, infection, autoimmune processes, unregulated inflammation, malignancies and complications of therapeutic intervention. They may precede the PID diagnosis and, once developed, they pose a high risk of morbidity. Untrained clinicians may treat these manifestations only at the level of their presentation, leaving the PIDs dangerously undiagnosed. In fact, early diagnosis of PIDs and the accompanying gastrointestinal and hepatic complications clearly requires appropriate treatment, and in turn leads to an improved quality of life for the patient. To improve the awareness of gastroenterologists and related health care providers about these diseases, we have reviewed herein the complications of different PIDs, focusing on gastrointestinal and hepatic manifestations.

INTRODUCTION

Primary immune deficiency diseases (PIDs) are a heterogeneous group of inherited disorders with unique genetic defects in the immune system. They confer susceptibility to recurrent infections with virulent and non-virulent organisms. In addition to infections, they have a wide spectrum of clinical manifestations including autoimmune diseases, unregulated inflammation and predisposition to malignancies.[ 1 – 4 ] PIDs are generally considered uncommon diseases; however, recent data from the Middle East, including Saudi Arabia, indicate that PIDs are 10-20 times more common than worldwide reported figures,[ 5 – 8 ] probably secondary to the high incidence of consanguinity, reaching up to 50% in some countries.[ 9 – 13 ]
Currently, more than 140 distinct genes have been identified, the abnormalities of which account for more than 200 different clinical phenotypes of PID.[ 2 ] Since this field is continuously revolutionized by unforeseen discoveries of novel PIDs and the characterization of their molecular defects, the International Union of Immunological Societies (IUIS) has recently updated the classification of PIDs [ Table 1 ].
It is important for clinicians to be aware of the various manifestations of PIDs. Early recognition and diagnosis is vital in improving the quality of life and wellbeing of these patients.
GENERAL CONCEPTS
The most common manifestations of PIDs are usually respiratory in nature; however, with advances in diagnostic tools, appropriate antimicrobial therapy and intravenous immunoglobulin replacement therapy, respiratory infections have been fairly well controlled. The next most commonly affected system in PIDs is the gastrointestinal (GI) system, which serves as a primary barrier to infections and is considered the largest immune organ of the body.[ 14 ]
The GI complications of PIDs can present in five different forms: (1) infection of the GI tract or hepatobiliary system, such as giardiasis in humoral immune dysfunction, cytomegalovirus colitis and hepatitis in severe T cell dysfunction, and hepatic abscess in phagocytic defects; (2) autoimmune phenomena, as seen in autoimmune hepatitis and enteropathy associated with some PIDs; (3) unregulated inflammatory conditions, such as granulomatous colitis in CGD; (4) malignancies involving the GI tract and hepatobiliary system; and (5) GI and hepatic complications secondary to therapeutic intervention, for example, liver or gut graft-versus-host disease and veno-occlusive disease after hematopoietic stem cell transplantation in certain PIDs. Nevertheless, addressing all the variable manifestations of PIDs is beyond the scope of this review. We will therefore focus on a few examples of PIDs that commonly present with GI and hepatic manifestations, which might be the initial presentation of the disease rather than part of the constellation of symptoms PIDs are known for.[ 15 ] Further examples of PIDs presenting with GI and hepatic manifestations are summarized in Table 1 .
CHRONIC GRANULOMATOUS DISEASE
Chronic granulomatous disease (CGD) is a primary immunodeficiency caused by a genetic defect in one of the components of NADPH oxidase of the phagocytic cells. This important complex is responsible for the generation of superoxide and is involved in combating catalase-producing organisms such as many bacteria and fungi.[ 16 – 19 ] Five genetic mutations involving the phagocytic oxidase system have been identified so far. The most common is an X-linked recessive defect in gp91phox, while three other autosomal recessive (AR) defects were reported in the p22phox, p47phox and p67phox components of the NADPH oxidase system.[ 20 – 25 ] A novel mutation in NCF4 , the gene encoding p40phox, has also recently been reported in a boy who presented with granulomatous colitis, delineating the fourth AR form of CGD.[ 26 ] In addition to susceptibility to infections, CGD patients are prone to develop noninfectious complications characterized by unregulated inflammation, such as granulomatous colitis, chorioretinal lesions and lupus-like disease.[ 27 – 29 ]
Gastrointestinal manifestations in CGD
GI manifestations are commonly encountered among CGD patients and might even precede the CGD diagnosis.[ 30 ] They usually arise from an abnormal inflammatory response leading to exuberant granuloma formation. In a study of 140 patients, GI manifestations were recorded in 46 patients (32.8%).[ 30 ] Abdominal pain, vomiting, diarrhea and weight loss, although nonspecific, were common GI symptoms among these CGD patients.[ 30 – 32 ] Moreover, we have observed that 8 (15%) of 55 CGD patients had colitis and/or GI obstruction (unpublished data).[ 33 ]
Proximal gastrointestinal tract manifestations
Granulomatous stomatitis, oral ulcers and dental abscesses are often found in these children.[ 30 32 ] Oral candidiasis has also been documented.[ 34 ] Affected individuals may present with granulomatous inflammation causing obstruction and stricture formation throughout the entire GI tract. Therefore, it is not unusual for these patients to present with obstructive upper GI symptoms such as dysphagia, dysmotility, delayed emptying and vomiting.[ 31 – 32 ] Other gastric complications, such as eosinophilic gastritis and abscesses, have rarely been reported.[ 35 ]
Intestine and colon manifestations
Granulomatous colitis is prevalent among CGD patients, especially in the X-linked variant of CGD.[ 29 ] The endoscopic findings are similar to those of inflammatory bowel disease (IBD), particularly Crohn's disease (CD), characterized by transmural patchy inflammatory “skip lesions” with intact segments between the diseased ones [ Figure 1 ]. These inflammatory lesions might cause obstruction, stenosis and fistula.[ 36 ] The difference between CD and CGD colitis lies in the histopathology of the granuloma formation. CGD colitis is characterized by sharply defined aggregates of epithelioid histiocytes surrounded by a cuff of dense lymphocytic inflammation [ Figure 2 ]. In CD, granulomata are poorly defined.[ 30 ] One group claims that the main difference lies in the presence of pigment-laden macrophages within the lamina propria.[ 36 ] Of note, the inflammatory infiltrates in this form of colitis were mainly eosinophils and macrophages. There is also an increased expression of HLA-DR in the epithelium and vascular endothelium.[ 37 ] Nevertheless, the presentation is very similar to that of IBD, more specifically CD, to the extent that it fulfills the Lennard–Jones criteria for CD. Furthermore, a subset of CGD patients may have IBD symptoms as their initial presentation.[ 30 32 34 ]
Distal gastrointestinal tract manifestations
Fistula-in-ano is common, probably due to the high bacterial load in the perianal area resulting from ineffective clearance of bacteria in CGD patients. It usually develops following infection of the perianal glands or granulomatous inflammation arising from the rectum. Steroids and interferon-γ have been shown to be effective in controlling GI granulomatous inflammation among CGD patients, and may reduce the thickening and relieve the obstruction.[ 30 32 ] However, large-scale prospective studies are required to validate their safety and efficacy.
Growth delay
Another consequence of chronic GI inflammation among children with CGD is growth delay.[ 30 ] In a large CGD cohort with GI complications, 32% had at least one height measurement below the fifth percentile and 22% had at least one weight measurement below the fifth percentile during the study period. Furthermore, CGD patients with GI involvement are more prone to growth delay than those who are unaffected.[ 30 ]
Hepatobiliary manifestations in CGD
Transient elevation of liver enzymes is a common incidental finding in CGD.[ 38 ] One of the contributing factors is recurrent liver abscesses, which are frequently encountered among CGD patients.[ 29 ] Liver abscesses were reported in 25% to 45% of CGD patients, and were correlated with a high mortality rate, reaching up to 27% in some studies despite appropriate antimicrobial therapies.[ 29 31 ] According to the national US registry, Staphylococcus aureus , Pseudomonas aeruginosa, and Burkholderia cepacia were the most common pathogens isolated from these abscesses. Other infections with Serratia, Aspergillus, Candida and Mycobacteria were also encountered.[ 29 38 – 39 ] Typically, these abscesses relapse frequently, but at a new location in the liver.[ 38 ] They are dense, fibrotic, caseous, difficult to drain and almost always require surgery.
The elevation in liver enzymes cannot be attributed only to infections. Hussain et al. reported 29 CGD patients who developed drug-related hepatotoxicity.[ 38 ] Further studies are needed to explore the substantial risk of drug hepatotoxicity in CGD patients, as the majority of CGD patients will need long-term antibiotic and antifungal prophylactic therapies.[ 39 – 42 ]
In summary, CGD has variable GI manifestations. In addition to infections, exuberant granulomatous inflammation may occur anywhere along the GI tract, which is very similar to CD in presentation. In certain circumstances this might be complicated by obstruction. These patients will be burdened with abdominal pain, nausea, vomiting, diarrhea and constipation. Eventually, GI complications lead to growth delay and failure to thrive. Chronic liver abscess is a significant cause of morbidity in CGD that warrants long-term antibiotic therapy and, in some instances, surgical intervention. Another complication that should be dealt with in these patients is liver injury due to drug-induced hepatitis.
CGD was formerly associated with high mortality, but the current practice of antimicrobial and IFN-γ prophylaxis, aggressive surgery and early hematopoietic stem cell transplantation or gene therapy has improved the outcome substantially.[ 39 – 47 ]
COMMON VARIABLE IMMUNODEFICIENCY
Common variable immunodeficiency (CVID) is the second most prevalent PID. It is characterized by dysfunction of humoral immunity with low B lymphocytes, hypogammaglobulinemia and/or impaired antibody response to infection and vaccination. Affected individuals are therefore vulnerable to recurrent respiratory infections, chronic diarrhea and autoimmune manifestations. In fact, the immune defect is not limited to humoral immunity; T cell defects also play a clear and crucial role in CVID. Moreover, most GI manifestations are probably related to T cell dysfunction, explaining the poor response to intravenous immunoglobulin (IVIG) alone.[ 48 – 49 ] Despite its prevalence and the plethora of literature on CVID, it is not fully elucidated at the molecular level. The underlying genetic defects have been explored in only a small subset of CVID. Mutations in three genes, TACI (transmembrane activator and calcium-modulator [CAML] and cyclophilin ligand interactor),[ 50 ] ICOS (an inducible costimulator on activated T cells)[ 51 ] and CD19 , were found to account for only 10% of CVID cases.[ 52 ]
Gastrointestinal manifestations
Several studies have shown that the prevalence of GI manifestations in CVID is quite significant, ranging from 20% to 50%.[ 48 53 – 54 ] Of note, these GI complications pose high morbidity, second only to respiratory complications.
Proximal gastrointestinal tract manifestations
Although the oral cavity and esophagus are not commonly affected in CVID, prolonged antibiotic administration, with accompanying neutropenia, predisposes to fungal infections, especially esophageal candidiasis.[ 49 55 ]
Atrophic gastritis and achlorhydria are common ailments, reported in about 50% of CVID patients.[ 48 53 ] Pathologic studies revealed mild to moderate infiltration of lymphomononuclear cells in the lamina propria, lack of plasma cells and increased apoptosis.[ 48 – 49 55 ] Reduced serum gastrin levels and a pernicious anemia-like syndrome might develop eventually.[ 48 53 ] CVID significantly increases the risk of gastric carcinoma, which might reach up to 50-fold. Concomitant Helicobacter pylori infection increases this risk substantially.[ 48 56 ]
Small intestine manifestations
Chronic diarrhea is the most common GI manifestation, and may be the sole presentation of CVID. Several factors contribute to chronic diarrhea, including celiac sprue-like disorder, Giardia infection, bacterial overgrowth and, rarely, small bowel lymphoma.[ 48 ] Histopathologic studies consistently report villous blunting; however, plasma cell infiltration is absent.
Giardia lamblia is a prominent pathogen in small bowel disease among CVID patients, resulting in abdominal cramps, bloating and watery diarrhea.[ 48 ] Despite empiric therapy with metronidazole, diarrhea resolves only minimally owing to the high recurrence of infection.
Autoimmune enteropathy is another distinct CVID manifestation, with auto-antibodies directed against the enterocyte, thus further contributing to chronic diarrhea.[ 48 – 49 ] Moreover, nodular lymphoid hyperplasia (NLH), resulting from failure of B cell follicle formation, occurs commonly in CVID. It manifests as multiple polyps or nodules in the small intestine, but can also be seen in the stomach and colon.[ 49 56 – 57 ]
Large intestine manifestations
IBD is frequently encountered in CVID. Colitis due to CVID is distinct from other forms of colitis. Nevertheless, it mimics ulcerative crypt destructive colitis with regard to certain features. On examining histopathological biopsies, one can usually observe an increase in lymphocyte and macrophage inflammatory infiltration as well as overexpression of apoptosis; however, granulomas and giant cells are usually absent [Figure 3a and b ].[ 48 55 ] The underlying mechanisms may involve T cell defect and autoimmune phenomenon; therefore, IVIG alone does not control the symptoms. Steroids and immunosuppressive therapies are often used with reasonable response.
Malignancy
CVID carries a substantial risk of GI malignancies. There is a 50-fold higher risk of gastric adenocarcinoma among CVID patients.[ 58 ] Contributing factors include achlorhydria, intestinal metaplasia and pernicious anemia. Early monitoring with endoscopies is therefore essential, especially for patients manifesting constitutional symptoms such as weight loss or anorexia. Moreover, the risk of B cell immunophenotype lymphoma triggered by Epstein-Barr virus is 30 times higher in CVID patients.[ 48 58 – 59 ]
HYPER IgM SYNDROME
Hyper IgM (HIGM) syndrome is a rare primary immunodeficiency disease caused by impaired immunoglobulin class switch recombination (CSR) and characterized by normal or high IgM along with low or undetectable immunoglobulin subtypes including IgG, IgA and IgE.[ 60 ] Different molecular causes have been described that result in different HIGM phenotypes, including defects of CD40 ligand (CD40L), CD40, nuclear factor-ĸB essential modulator (NEMO), activation-induced cytidine deaminase and uracil-DNA glycosylase.[ 61 – 67 ] Addressing the different genotypes and phenotypes of HIGM is beyond the scope of this review; they have been well reviewed by Erdos et al .[ 60 ]
Infections dominate this type of immunodeficiency; they involve the upper and lower respiratory tract and are commonly caused by Pneumocystis jiroveci pneumonia (PJP).[ 68 ] The other most commonly documented clinical manifestations are chronic diarrhea, liver involvement and neutropenia. Lymphoid hyperplasia is also a noteworthy finding in HIGM.[ 69 ]
Gastrointestinal manifestations
HIGM patients frequently present with oral ulcers, gingivitis and rectal ulcers, which are probably attributable to the neutropenia that commonly complicates HIGM. Fifty percent of HIGM patients experience GI problems, mainly protracted diarrhea, mostly due to infections with Cryptosporidium parvum, Giardia lamblia, Salmonella and Entamoeba histolytica. [ 68 – 69 ] Other noninfectious causes of diarrhea, such as IBD and intestinal nodular bowel disease, have been reported.[ 68 ]
Liver involvement
Hepatic involvement is a very prominent complication of HIGM. Although sclerosing cholangitis (SC) occurs rarely among children, approximately 10% of these cases are associated with primary immunodeficiencies.[ 70 ] It is suspected that in sclerosing cholangitis the bile ducts undergo inflammation, which progresses to fibrosis, leading to cirrhosis and liver failure. C. parvum is the most common pathogen involved in SC.[ 68 ]
Malignancy
The exact underlying mechanism for the susceptibility of HIGM patients to biliary tract carcinoma remains elusive. It is hypothesized that long-standing cryptosporidium infection may cause bile duct epithelial dysplasia.[ 71 ] Hepatitis B, hepatitis C and CMV infections, along with autoimmune hepatitis, have also been documented and may progress to hepatocellular carcinoma. SC and malignant tumors of the liver, biliary tree and pancreas are predictors of poor outcome in HIGM.[ 71 ]
In conclusion, diarrhea and sclerosing cholangitis are the most prominent findings among children with hyper IgM. Careful monitoring is warranted, especially with cryptosporidium infection, since this pathogen may result in grave consequences.
IPEX SYNDROME
IPEX comprises immunodeficiency, polyendocrinopathy and enteropathy, and occurs as an X-linked Mendelian trait.[ 72 ] It is a unique syndrome caused by mutation in FOXP3 , a gene encoding a putative deoxyribonucleic acid (DNA)-binding protein of the forkhead family, which acts as a transcriptional repressor and key modulator of regulatory T cell function.[ 72 – 75 ] Skin manifestations and other autoimmune phenomena are associated with this syndrome.[ 74 – 75 ] This rare immunodeficiency confers predisposition to infections, commonly with Enterococcus and Staphylococcus .[ 75 ]
Gastrointestinal manifestations
The single most common manifestation of IPEX is intractable diarrhea, which has been reported in almost all patients.[ 75 ] The diarrhea starts very early, may even precede the initiation of feeding in infants, and worsens with feeding. Gluten restriction and parenteral nutrition have been found to improve the prognosis only minimally.[ 76 ] Histopathologic studies of the small intestine reveal severe villous atrophy and mucosal erosion with lymphocytic infiltrates of the submucosa or lamina propria.[ 77 ] Failure to thrive usually follows as a result of the enteropathy and malabsorption.[ 75 ] Growth retardation is suspected to start prenatally, which would explain cachexia as a striking feature of this disease.[ 76 ] Currently, there is limited information on the disease profile of IPEX, and most patients die within the first year of life. Most treatment strategies are directed toward immunosuppression and/or bone marrow transplantation.[ 78 ]
INFLAMMATORY BOWEL DISEASE AS A PRIMARY IMMUNODEFICIENCY
The examples discussed above represent a group of PIDs that may present with GI manifestations. Colitis resembling IBD is an evident example. On the other hand, there is a new concept postulating that IBD, particularly CD, is not solely an inflammatory disease but is also a bona fide genetic trait, resulting in an immune defect in the macrophage. It may result from impaired recruitment of granulocytes to the GI wall, causing impaired clearance of microorganisms and hence contributing to granuloma formation.[ 79 – 81 ] Over the last 2 decades, the genetic cause of CD has remained elusive despite the identification of up to 32 candidate loci in genome-wide association studies, among which there is a robust association with the NOD2/CARD15 gene, encoding the cytosolic receptor that recognizes bacterial peptidoglycans, particularly mycobacterial N -glycolyl muramyl dipeptide.[ 82 – 83 ] The new hypothesis focuses on impaired macrophage function caused by a single-gene Mendelian defect, rather than a polygenic disease. If this is proven to be correct, it will cause a paradigm shift in our understanding of IBD and shall pave the way for major discoveries to delineate the genetic causes of CD.[ 79 ]
ABBREVIATIONS
PIDs, primary immune deficiency diseases; GI, gastrointestinal; CMV, cytomegalovirus; GvHD, graft versus host disease; CGD, chronic granulomatous disease; AR, autosomal recessive; CD, Crohn's disease; CVID, common variable immunodeficiency; IVIG, intravenous immunoglobulin; HIGM, hyper IgM; IPEX, immunodeficiency, polyendocrinopathy, enteropathy, X-linked; IBD, inflammatory bowel disease.

ACKNOWLEDGMENTS

The author thanks Dr. Mosa Fagih, from the Department of Pathology at King Faisal Specialist Hospital, for his valuable comments on the figures of the histopathological findings, and also Dr. Rabih Halwani, from King Saud University, for his valuable editorial comments.

Saudi J Gastroenterol. 2010 Apr;16(2):66-74. License: CC BY.
PMC3016509 (PMID 20339174)

CONCLUSION

PPPD, in comparison to CPD for peri-ampullary carcinoma, is at least as effective in terms of peri-operative morbidity and mortality, tumor recurrence, and long-term survival. It may have some advantages in terms of a shorter operating time, less blood loss, fewer blood transfusions, and a better quality of life.

Therefore, PPPD (a well-established procedure) remains a good option for any patient with peri-ampullary carcinoma, except when the first part of the duodenum or pylorus is grossly involved with tumor.

ABSTRACT

Pancreaticoduodenectomy (PD) is the standard surgical treatment for resectable peri-ampullary tumors. It can be performed with or without pylorus preservation. Many surgeons have a negative opinion of pylorus preserving PD (PPPD) and consider it an inferior operation, especially from an oncological point of view. This article reviews the various aspects of PD in the context of operative factors like blood loss and operation time, complications such as delayed gastric emptying and anastomotic leaks, and the impact on quality of life and survival. We aim to show that PPPD is at least as good as classic PD, if not better in some aspects.

INTRODUCTION

Pancreaticoduodenectomy (PD) remains the standard surgical treatment for resectable peri-ampullary tumors. The first PD operation was reported by Codivilla in 1898 in a patient with an epithelioma of the pancreas, but the patient died from cachexia on the 21st post-operative day.[ 1 ] In 1946, Whipple described a one-stage PD in which the pylorus was resected.[ 2 ] The first report of pylorus preserving PD (PPPD) was by Watson in 1944 for ampullary carcinoma,[ 3 ] but it did not gain popularity at that time.
In both the classic PD and PPPD, the head of pancreas, duodenum, and distal bile duct are resected. The main difference is that in classic PD, the gastric antrum and pylorus are resected with the creation of a gastro-jejunostomy, while in PPPD, the gastric antrum and pylorus are preserved and the line of resection is through the first part of duodenum and a duodeno-jejunostomy is performed [Figure 1 a and b ].
Traverso and Longmire reintroduced the concept of PPPD for benign peri-ampullary lesions in 1978, as they thought it would decrease the incidence of post-gastrectomy complications.[ 4 ] In 1980, they published their experience with PPPD for malignant lesions, which included 18 patients with peri-ampullary, duodenal, and pancreatic carcinomas, with encouraging results of normal gastric emptying and acidity.[ 5 ] Since then, PPPD has been applied widely to patients with benign or malignant peri-ampullary lesions.
In spite of the reported good outcomes of PPPD, many surgeons still question the benefit of this procedure, citing especially the reported high incidence of delayed gastric emptying and, more importantly, the negative impact that pylorus preservation might have on tumor clearance, recurrence, and long-term survival.
We will try in this review article to answer the question of the safety of PPPD as compared to classic PD (CPD) in terms of operative factors, peri-operative complications, tumor recurrence, survival, and long-term quality of life.
OPERATIVE FACTORS
Operating time
Sugiyama in 2000 compared 10 patients undergoing PPPD to 14 patients undergoing CPD.[ 6 ] Although there was a trend toward a shorter operative time in the PPPD group, it did not reach statistical significance, likely because of the small sample size. A large, multicenter, randomized, controlled trial of 170 patients comparing PPPD with PD also found no significant difference in operating time.[ 7 ] In a meta-analysis, Traverso confirmed this observation: there was a trend toward a shorter operating time with PPPD, but again it was not statistically significant.[ 8 ]
Two large-volume retrospective studies have compared operating times between PPPD and CPD, and both showed that the PPPD operating time was significantly shorter than that of CPD.[ 9 10 ] This observation was further supported by a meta-analysis by Karanicolas in 2006, which found that PPPD was 72 min shorter than PD.[ 11 ] A more recent meta-analysis[ 12 ] also showed that PPPD was 41.3 min shorter.
Blood loss and a need for blood transfusion
Several reports have indicated no significant difference in intra-operative blood loss or blood transfusion between PPPD and PD.[ 6 – 8 ] In one meta-analysis, however, although there was no significant difference in blood loss, more patients in the PD group required blood transfusions.[ 12 ]
Other studies with larger patient volumes, on the other hand, have shown significantly less blood loss and fewer blood transfusions in the PPPD group,[ 9 – 11 ] which could be partly explained by the more limited dissection in PPPD. This observation is important, as blood transfusion in pancreatic cancer has been associated with a decreased survival rate;[ 13 ] an operative procedure that results in less blood loss could therefore translate into longer survival.
Operative mortality
In retrospective analyses, peri-operative mortality has been similar in the PPPD and PD groups.[ 9 10 14 ] Two meta-analyses have shown a trend toward lower peri-operative mortality in the PPPD group.[ 11 12 ]
A randomized controlled trial comparing 13 patients undergoing CPD with 14 undergoing PPPD showed no significant difference in mortality (15.4% and 28.6%, respectively; P -value 0.65), although these are very high mortality rates for any pancreaticoduodenectomy in comparison with the widely reported 3% in most studies.[ 15 ] In a multicentre, randomized, controlled trial involving 170 patients, mortality was 7% in the CPD group vs. 3% in the PPPD group ( P -value 0.27).[ 7 ]
POST-OPERATIVE COMPLICATIONS
Delayed gastric emptying
DGE is probably the most studied complication following any type of pancreaticoduodenectomy. It has long been thought that pylorus preservation would increase the risk of DGE. In a large series from Japan including 1066 patients who underwent PPPD, the incidence of DGE was 46%,[ 16 ] which supported the idea of higher DGE with PPPD. A small randomized controlled trial showed DGE rates of 15% with PD vs. 64% with PPPD ( P -value 0.2).[ 15 ]
On the other hand, several other studies have not shown the same finding. A retrospective analysis of 113 patients showed no significant difference in DGE, but half of the PPPD patients with DGE had co-existing intra-abdominal complications, which could have contributed to the DGE.[ 14 ]
Two retrospective studies have shown no significant difference in DGE between the two groups.[ 6 9 ] This was also confirmed in a multicentre, randomized, controlled trial.[ 13 ] A retrospective analysis of 239 patients showed that DGE in the CPD group was double that in the PPPD group (13% vs. 6%), but there was a higher percentage of T4 tumors and more extensive resections in the CPD group.[ 10 ]
Several meta-analyses have also shown that DGE is not higher in the PPPD group.[ 8 11 12 17 ]
It seems that DGE is increased not by preservation of the pylorus but rather by other factors, including postoperative complications, especially intra-abdominal collections. The presence of portal venous hypertension and preoperative cholangitis also increases the risk of post-operative DGE.[ 18 19 ]
Shan[ 22 ] made a distinction between subjective DGE and objective DGE as measured by cholescintigraphy and concluded that although subjective DGE was higher in the PPPD group, objective DGE was similar between the CPD and PPPD groups. He proposed that loss of the distal stomach mechanoreceptors in the CPD group reduces the patient's sensation of subjective DGE.
Additionally, Kim[ 23 ] proposed that pylorospasm could be a cause of DGE after PPPD and showed a decreased incidence of DGE with the addition of pyloromyotomy. On the other hand, other studies have shown that abnormal gastric motility after surgery is the main cause of DGE regardless of the type of reconstruction.[ 24 25 ]
Several methods have been tried to further decrease the incidence of DGE after PPPD. Erythromycin has been shown to increase gastric contractility and decrease the incidence of DGE.[ 26 27 ] On the other hand, somatostatin, which is sometimes used to decrease the severity of pancreatic anastomotic leak, increases the risk of DGE by more than 3-fold.[ 28 ]
An interesting observation is that the use of an ante-colic duodeno-jejunostomy, as opposed to a retro-colic reconstruction, in PPPD decreased the incidence of DGE.[ 19 – 21 ]
Anastomotic leak
Anastomotic leak, especially from the pancreatico-jejunostomy (PJ), is the main factor in morbidity after PD. A review of 1066 PPPDs in Japan revealed a leak rate of 16%.[ 16 ] In a randomized, controlled trial and two meta-analyses, there was no difference between CPD and PPPD in PJ leak rate.[ 11 12 15 ] Tani[ 29 ] showed that the Traverso-type reconstruction (duodeno-jejunostomy (DJ) distal to the PJ) has a lower leak rate than the Billroth I-type reconstruction (DJ proximal to the PJ).
Intestinal acidity and anastomotic ulceration
Not performing an antrectomy could, in theory, result in higher intra-gastric and intestinal pH in PPPD patients compared with CPD patients. Geenen et al. [ 30 ] found that intra-gastric pH was not reduced in PPPD patients and that intestinal pH was in fact increased.
Marginal ulceration after PPPD was increased with the use of a Roux-en-Y jejunal loop, which is not exposed to the diluting effect of pancreatic juice.[ 31 ]
Hospital stay
Usually, the reason for a prolonged hospital stay is either anastomotic leak or DGE. As indicated above, there is no significant difference between CPD and PPPD in terms of leak rate or DGE, so hospital stay should be no different.
Several retrospective, prospective, and meta-analysis studies have failed to show that PPPD increases hospital stay.[ 8 12 14 15 ] In fact, one meta-analysis and one retrospective study showed a trend toward a shorter hospital stay with PPPD.[ 9 11 ]
TUMOR RECURRENCE AND LONG-TERM SURVIVAL
The adequacy of PPPD as a cancer operation has long been questioned, especially after Sharp and colleagues reported three cases of PPPD in which the duodenal resection margin was positive for carcinoma.[ 32 ] That observation, however, was not supported by other studies. In a 1993 retrospective study, pathological examination of all positive margins in PPPD for peri-ampullary carcinoma showed that the most common site of a positive margin was the peri-pancreatic soft tissue, followed by the pancreatic resection line and then the bile duct resection line; no positive duodenal margin was identified.[ 33 ]
The adequacy of resection was further supported by a Japanese group, who found that the number of lymph nodes retrieved was equal in PPPD and total pancreatectomy, which is even more radical than CPD. The number of positive lymph nodes was also no different.[ 8 ]
One paper showed that diffuse peritoneal seeding recurrence was higher in the PPPD group, while liver metastasis and retroperitoneal recurrence were higher in the CPD group.[ 34 ] Of nine patients in the PPPD group who developed retroperitoneal recurrence, six developed obstruction of the first jejunal loop requiring bypass surgery to relieve the obstruction; retroperitoneal passage of the jejunal loop should therefore be avoided. In another study, however, there was no difference in the type of recurrence.[ 35 ] Both studies revealed no difference in the overall rate of recurrence between PPPD and CPD.
Long-term survival has been studied extensively, as it is the main measure of the efficacy of cancer surgery. Several retrospective studies have shown that the type of resection does not influence survival.[ 6 10 14 34 35 ] Two randomized, controlled trials have also shown no difference in survival.[ 7 15 ] A recent meta-analysis found a higher 5-year survival rate in the PPPD group when all tumors were included ( P -value 0.002), but in the peri-ampullary tumor group there was no difference in survival.[ 12 ]
QUALITY OF LIFE
The main reason for adopting PPPD was to preserve the whole stomach and thereby improve digestive function. Post-operative weight gain was comparable between the PPPD and CPD groups,[ 12 13 ] but patients in the PPPD group reported better gastrointestinal quality of life in terms of appetite, nausea, and diarrhea, and an earlier return to work.[ 12 ] Although weight gain was comparable, pre-operative weight was regained faster in PPPD patients, and they exhibited better mixing of food with bile.[ 10 36 ]
Hyperalimentation for malnutrition was required less often, and serum albumin was higher 6 months after surgery, in the PPPD group.[ 6 ]
In patients receiving post-operative chemotherapy, a significant increase in body weight was seen with the preservation of the pylorus.[ 14 ] | CC BY | no | 2022-01-12 15:21:47 | Saudi J Gastroenterol. 2010 Apr; 16(2):75-78 | oa_package/60/49/PMC3016509.tar.gz |
PMC3016510 | 20339175

PATIENTS AND METHODS
Subjects
The study consisted of a total of 148 subjects: 95 patients with liver diseases and 53 healthy controls. Based on the cause of liver disease, patients were divided into six groups [ Table 1 ]. Liver disease was diagnosed based on clinical evidence, radiography, and laboratory investigations. The healthy controls were not on any prescribed medication or dietary restrictions. The demographic and other biochemical data are depicted in Tables 2 and 3 . Informed consent was obtained from all subjects involved in the study, and the study was approved by the institutional review board, Manipal University, India.
Samples and reagents
Blood samples (5 mL) were drawn into plain vacutainers from the antecubital veins of healthy controls and patients. The blood was allowed to clot for 30 minutes and centrifuged at 2000g for 15 minutes for clear separation of serum. All the assays were performed immediately after the clear separation of serum. Paraoxon was obtained from Sigma Chemicals Company (St Louis, MO, USA). All other reagents were of analytical grade, obtained from Merck India.
Biochemical determinations
Paraoxonase assay
PON1 was estimated spectrophotometrically by a previously described method, with modifications.[ 19 ] Briefly, the assay mixture consisted of 500 μl of 2.2 mmol/l paraoxon substrate in 0.1 mol/l Tris-HCl buffer, pH 8.0, containing 2 mmol/l CaCl 2 , and 50 μl of fresh serum. After mixing the contents, kinetic measurements were taken immediately, once every minute for five minutes, at 405 nm and 25°C. The first absorbance reading was taken as the 0-minute reading and subsequent readings as the one-minute to four-minute readings. Corrected absorbance readings were obtained by subtracting each reading from the one that followed it (the 0-minute reading from the 1-minute reading, and so on), and the mean of these corrected readings was calculated. This mean absorbance change was used to determine PON1 activity from a standard graph plotted using 1 mM p-nitrophenol. PON1 activity was expressed in international units (IU); one IU was defined as 1 μmol of p-nitrophenol formed/min/L at 25°C.
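The corrected-absorbance arithmetic above reduces to successive differences and a calibration step, sketched below in Python. The absorbance readings and the p-nitrophenol calibration slope are hypothetical illustrations, not data from the study; only the 500 μl + 50 μl assay volumes follow the text.

```python
# Sketch of the PON1 activity calculation described above.
# The readings and the calibration slope are hypothetical example values.
readings = [0.120, 0.155, 0.189, 0.224, 0.258]  # A405 at minutes 0-4

# Corrected readings: each minute's reading minus the previous one.
deltas = [b - a for a, b in zip(readings, readings[1:])]
mean_delta = sum(deltas) / len(deltas)  # mean absorbance change per minute

slope_per_mM = 0.90          # assumed slope of the 1 mM p-nitrophenol standard graph
dilution = (500 + 50) / 50   # 50 uL serum in a 550 uL assay mixture (per the text)

# mM/min in the cuvette -> mmol/L/min referred to serum -> umol/min/L = IU/L
activity_IU_per_L = mean_delta / slope_per_mM * dilution * 1000
```

With these example numbers the mean change is 0.0345 absorbance units per minute, giving an activity on the order of a few hundred IU/L.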
Standard liver function and fasting lipid profile tests
Serum total and direct bilirubin, aspartate transaminase (AST), alanine transaminase (ALT), alkaline phosphatase (ALP), total protein (TP), albumin, globulin, total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and triglyceride (TAG) levels were determined using a clinical chemistry analyzer (Hitachi 912). All reagent kits were obtained from Roche Diagnostics, India. LDL-cholesterol (LDL-C) values were derived from the Friedewald formula.[ 20 ] The TC/HDL-C ratio was calculated.
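For reference, the Friedewald estimate used for LDL-C is simple arithmetic (values in mg/dL; the formula is not valid when triglycerides exceed roughly 400 mg/dL). The numbers in the example are hypothetical, not study data:

```python
def ldl_friedewald(tc, hdl_c, tag):
    """Friedewald formula: LDL-C = TC - HDL-C - TAG/5 (all in mg/dL)."""
    return tc - hdl_c - tag / 5.0

def tc_hdl_ratio(tc, hdl_c):
    """TC/HDL-C ratio reported alongside the lipid profile."""
    return tc / hdl_c

ldl = ldl_friedewald(200, 50, 150)  # -> 120.0 mg/dL
ratio = tc_hdl_ratio(200, 50)       # -> 4.0
```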
Statistical analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS-16, Chicago, USA). Results were expressed as mean±standard deviation (SD). A P -value <0.05 was considered statistically significant. One-way analysis of variance (ANOVA) was used to compare mean values across groups, followed by multiple-comparison post hoc tests. Pearson's correlation was applied to assess relationships between parameters.
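As an illustration of the correlation step, Pearson's r can be computed without SPSS. This pure-Python sketch uses hypothetical paired values, not study data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear hypothetical pairs give r close to 1 (positive correlation,
# as reported here between PON1 activity and HDL-C in some groups).
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```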
As shown in Table 2 , levels of total and direct bilirubin increased significantly in patients with viral hepatitis ( P <0.001) and cirrhosis with portal hypertension ( P <0.001), compared to normal controls. AST activity significantly increased in patients with viral hepatitis ( P <0.001), cirrhosis with portal hypertension ( P <0.001), sepsis ( P <0.001) and falciparum malaria ( P <0.001). Activity of ALT significantly increased in viral hepatitis ( P <0.001) and sepsis patients ( P <0.001). ALP activity significantly increased in patients with viral hepatitis ( P <0.001), cirrhosis with portal hypertension ( P <0.001), leptospirosis ( P <0.001), sepsis ( P <0.001) and falciparum malaria ( P <0.001). Levels of TP decreased significantly in sepsis ( P <0.001), falciparum malaria ( P <0.001), and moderately in left ventricular failure patients ( P <0.05). Albumin levels decreased significantly in falciparum malaria ( P <0.001), and moderately in sepsis ( P <0.05) and left ventricular failure patients ( P <0.05). Levels of globulins significantly increased in cirrhosis with portal hypertension ( P <0.001).
As shown in Table 3 , TC levels significantly decreased in patients with cirrhosis with portal hypertension ( P <0.001) and moderately in left ventricular failure patients ( P <0.05). Levels of HDL-C decreased significantly in patients with cirrhosis with portal hypertension ( P <0.001), leptospirosis ( P <0.001), left ventricular failure ( P <0.001) and falciparum malaria ( P <0.001) and moderately in sepsis ( P <0.05). TAG levels increased significantly in leptospirosis ( P <0.001) and falciparum malaria ( P <0.001) patients. The ratio TC/HDL-C increased significantly in patients with cirrhosis with portal hypertension ( P <0.001), leptospirosis ( P <0.001), sepsis ( P <0.001) and left ventricular failure ( P <0.001). PON1 activity significantly decreased in viral hepatitis ( P <0.001) and sepsis ( P <0.001) and moderately in falciparum malaria patients ( P <0.05). On applying Pearson correlation, serum PON1 activity correlated positively with HDL-C in patients with sepsis (r=0.633, P <0.05), left ventricular failure patients (r=0.814, P <0.05) and negatively with acute viral hepatitis patients (r=– 0.528, P <0.05).
Serum PON1 has been studied extensively in the context of cardiovascular disease.[ 21 – 25 ] To date, expression of the PON1 gene has been found to be confined to the liver.[ 5 6 ] Thus, measurement of serum PON1 reflects hepatic synthetic function. As shown in Table 2 and discussed in the results section, a wide spectrum of changes in standard liver function tests indicates hepatocellular damage. In the present study, serum PON1 activity decreased significantly in patients with acute viral hepatitis, sepsis with multi-organ failure, and falciparum malaria. Previous authors have demonstrated a decrease in PON1 activity in chronic viral hepatitis.[ 1 15 ] In this study, we measured PON1 activity in patients with various liver disorders, including acute viral hepatitis, and, in line with previous authors, found a decrease in PON1 activity in acute viral hepatitis patients.
Previous authors have proposed two mechanisms to explain the decrease in PON1 activity in patients with liver disorders. First, hepatic dysfunction impairs gene expression, which contributes to the decreased PON1 in these patients;[ 1 ] a significant decrease in PON1 activity has been reported in CCl 4 -induced liver cirrhosis, secondary to increased free radical generation.[ 26 ] Second, PON1 may decrease as a consequence of altered synthesis and/or secretion of HDL-C, possibly due to impaired lecithin:cholesterol acyltransferase (LCAT) activity.[ 1 ] We observed a positive correlation between PON1 activity and HDL-C levels in patients with falciparum malaria and sepsis. In contrast, in patients with acute viral hepatitis, the decrease in PON1 activity was not associated with a proportional decrease in HDL-C levels, as the correlation between the two was negative. Several workers have proposed that viral hepatitis is associated with oxidative stress.[ 27 – 31 ] Further, PON1 activity associated with HDL-C in plasma is thought to protect LDL-C from oxidation.[ 15 ]
A previous study stated that the decrease in PON1 activity in patients with chronic liver diseases such as chronic hepatitis and cirrhosis was related to the degree of liver damage.[ 1 ] Recently, Keskin et al. also reported reduced baseline and stimulated PON1 and arylesterase (ARE) activities in patients with chronic liver disease.[ 32 ] In contrast to those studies, we observed lowered PON1 activity in acute viral hepatitis but normal PON1 activity in patients with cirrhosis. Furthermore, whereas Ferré et al. demonstrated a significant change in PON1 activity in patients with cirrhosis, our results are contradictory, as we found no significant change in its activity.[ 1 ] In our study, there was no significant decrease in PON1 activity in patients with cirrhosis, leptospirosis, or left ventricular failure, although HDL-C levels in these patients were significantly decreased. We speculate that the extent of hepatocyte damage in these conditions may not be severe enough to significantly decrease PON1 activity. Although PON1 gene expression has previously been reported to be confined to the liver,[ 5 6 ] the substantial PON1 activity in these patients, despite markedly decreased HDL-C, raises the possibility of a second source of PON1 in the body, other than the liver. However, further research is necessary to substantiate or refute this possibility.
In conclusion, PON1 activity decreased significantly in patients with acute viral hepatitis, sepsis with multi-organ failure, and falciparum malaria. There was no significant change in PON1 activity in cirrhosis, leptospirosis, or left ventricular failure patients, but HDL-C levels in these patients were significantly decreased.
Paraoxonase 1 (PON1) is an esterase synthesized exclusively by the liver. The present study had two objectives: to determine PON1 activity status in various disorders associated with hepatocellular damage, and to correlate the changes in PON1 activity with standard liver function and fasting lipid profile tests in these disorders.
Patients and Methods:
The study groups consisted of 95 patients with liver diseases, including acute viral hepatitis (14), cirrhosis with portal hypertension (33), leptospirosis (14), sepsis and multi-organ failure (15), left ventricular failure (9), and falciparum malaria (10), and 53 healthy controls. Serum PON1 activity was measured manually using a spectrophotometer. Liver function test parameters and fasting lipid profile were measured on a clinical chemistry autoanalyzer (Hitachi 912).
Results:
The serum PON1 activity in patients with acute viral hepatitis and sepsis decreased significantly ( P <0.001), and moderately in falciparum malaria ( P <0.05). However, in patients with cirrhosis, leptospirosis, and left ventricular failure, its activity did not change significantly. On applying Pearson correlation, serum PON1 activity correlated positively with high-density lipoprotein-cholesterol (HDL-C) in patients with sepsis (r=0.633, P <0.05) and left ventricular failure (r=0.814, P <0.05), and negatively in acute viral hepatitis patients (r=–0.528, P <0.05).
Conclusion:
PON1 activity has decreased significantly in acute viral hepatitis, sepsis with multi organ failure and falciparum malaria patients. Determination of PON1 activity may serve as a useful additional test in assessing these conditions. | Paraoxonase (aryldialkylphosphatase, EC 3.1.8.1) is a serum esterase. It is a xenobiotic enzyme which hydrolyzes organophosphorous compounds such as paraoxon, unsaturated aliphatic esters, aromatic carboxylic esters etc.[ 1 ] PON1 , PON2 , and PON3 are the three members of paraoxonase family, located on chromosome 7q21.3-22.1.[ 2 ] Along with apolipoprotein A1 (apoA-I) and clusterin (apolipoprotein J), PON1 is associated with high-density lipoprotein (HDL).[ 3 ] This association contributes to the protection against low-density lipoprotein (LDL) oxidation.[ 1 ] Assessment of hepatocellular injury and biliary tract disorders include conventional markers such as, alanine and aspartate aminotransferases (AST, ALT), alkaline phosphatase (ALP). Further, hepatic synthetic function can be relatively measured by determining albumin levels in serum.[ 4 ] Due to longer plasma half-lives of aminotransferases, changes in hepatocellular damage are not associated with proportional changes in enzyme levels in plasma during acute liver diseases.[ 4 ] Hence in these cases more specific tests such as liver biopsy of the affected tissue have to be performed to confirm the suspected cause of the disease.[ 1 ]
The liver plays a key role in the synthesis of serum PON1, and expression of the gene has been found to be confined to the liver.[ 5 6 ] Also, the properties shared by hepatic and serum PON1 are identical, as elucidated by in vitro biochemical tests.[ 7 ] These observations raise the question of whether measurement of serum PON1 activity could serve as an index of liver function status. Preliminary studies showed that serum arylesterase activity decreased significantly in patients with liver cirrhosis.[ 8 – 10 ] Further, Sorenson et al . concluded that arylesterase and PON1 activities are functions of a single enzyme.[ 11 ] Previous investigators have measured PON1 activity in the serum of patients with chronic liver diseases such as alcoholic liver disease, hepatitis, and cirrhosis, and found that its activity decreased significantly.[ 1 12 – 15 ] Kilic et al. demonstrated that serum arylesterase activity decreased significantly in chronic hepatitis patients.[ 15 ] Xu et al . showed that PON1 activity, decreased in chronic liver disease, increased significantly after successful liver transplantation.[ 16 ] PON1 has been shown to protect against liver damage by alleviating CCl 4 -induced oxidative stress.[ 17 ] Further, Marsillach et al. have reported that PON1 protects hepatocytes against inflammation, fibrosis, and liver disease.[ 18 ]
The current study was undertaken with two objectives. Firstly, to determine the PON1 activity status in various disorders associated with hepatocellular damage. Secondly, to correlate the changes of PON1 activity with the standard liver function and fasting lipid profile tests in these disorders.

We thank our Dean Dr. Sripathi Rao, and Dr. S Sudhakar Nayak, Professor and Head, Department of Biochemistry, for financial support.

CC BY | no | 2022-01-12 15:21:47 | Saudi J Gastroenterol. 2010 Apr; 16(2):79-83 | oa_package/a6/9f/PMC3016510.tar.gz
PMC3016511 | 20339176

PATIENTS AND METHODS
The study included 515 patients presenting to the gastroenterology clinics of King Fahad Hospital Hofuf (a 500-bed, major tertiary-care center in the eastern region of Saudi Arabia) from November 2004 to November 2008.
Patient inclusion criteria were chronic heartburn as the presenting symptom and suspected GERD not treated at the time of the evaluation. Exclusion criteria were smoking, pulmonary malignancies, pregnancy, or laryngeal stenosis. There were 123 (23.9%) male patients and 392 (76.1%) female patients, with an age range of 24-58 years, and mean age of 41.6±7.4 years.
Modified four-grade Likert scale
A patient self-report symptom questionnaire was used for the assessment of GERD. A modified Likert scale[ 8 ] with defined individual response options and structured patient self-report, rather than physician assessment, was considered the best approach.[ 9 ]
Respiratory symptoms
The participants were asked about various respiratory symptoms experienced during the past six months: (1) whether daily cough was experienced and, if so, its duration and whether it was productive; (2) breathlessness, with three alternative answers: no symptoms, minor degree, or major degree; (3) attacks of heavy breathing or wheezing during the past six months; (4) occurrence of asthma; and (5) use of asthma medication, hemoptysis, and hoarseness of voice.
Pulmonary function tests
Lung function studies were performed for all 515 cases before upper GIT endoscopy, using a precalibrated Spirolab II spirometer with a black/white LCD display (MIR, Medical International Research, Rome, Italy).
The following respiratory function indices were considered and recorded: forced expiratory volume in the first second (FEV1), forced vital capacity (FVC), FEV1/FVC, peak expiratory flow (PEF), and FEF 25-75 (forced expiratory flow between 25% and 75% of the vital capacity). All of these indices are reported as percent of predicted values.
Furthermore, spirometry results had to be characteristic: forced expiratory volume in one second/forced vital capacity ratio (FEV1/FVC) <70% of predicted, indicating airflow obstruction. Spirometric measurements were performed three consecutive times and the highest value was recorded. These tests were performed in accordance with the Pulmonary Function Test Guidelines established by the European Thoracic Society.[ 10 ]
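The obstruction criterion and the best-of-three rule above reduce to simple checks. The sketch below follows the text's 70% threshold applied to the FEV1/FVC ratio; the patient values are hypothetical:

```python
def fev1_fvc_ratio(fev1, fvc):
    """FEV1/FVC expressed as a percentage (volumes in litres)."""
    return 100.0 * fev1 / fvc

def airflow_obstruction(fev1, fvc, threshold=70.0):
    """Apply the <70% FEV1/FVC criterion for airflow obstruction."""
    return fev1_fvc_ratio(fev1, fvc) < threshold

def best_of_three(measurements):
    """Spirometry was performed three times; the highest value is kept."""
    return max(measurements)

# Hypothetical patient: FEV1 2.1 L, FVC 3.5 L -> ratio 60%, obstructed.
obstructed = airflow_obstruction(2.1, 3.5)
```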
Endoscopic assessment
Upper GIT endoscopy using a Pentax EG-2940 endoscope was performed in all 515 patients. Based on the endoscopic Los Angeles classification[ 11 ] of esophagitis, GERD was diagnosed endoscopically in 397 patients (77.1%), of whom 79 (19.9%) were male and 318 (80.1%) were female.
All patients underwent the following: medical history taking, measurement of body mass index (BMI), pulmonary physical examination, chest X-ray, respiratory function tests, and sleep studies for suspected cases of sleep apnea. The patients also reported on the use of theophylline, oral corticosteroids, and acid-suppressive drugs such as proton-pump inhibitors and histamine-2-receptor antagonists. The 515 patients were categorized into two groups according to endoscopic findings: (group 1) subjects with heartburn and normal endoscopic findings (NERD), 118 patients (22.9%); and (group 2) subjects with heartburn and abnormal endoscopic findings (ERD), 397 patients (77.1%). The demographics of the study population are shown in Table 1 .
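BMI, used for the overweight cut-off reported in the results, is weight divided by height squared. The values in the example are hypothetical:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

def overweight(weight_kg, height_m, cutoff=25.0):
    """BMI > 25 was the overweight criterion used in the study."""
    return bmi(weight_kg, height_m) > cutoff

value = bmi(80, 1.75)  # ~26.1, i.e. overweight by the BMI > 25 criterion
```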
Statistical analyses
Statistical analysis was performed using SPSS version 10.1. Continuous variables were described as mean±SD. Group comparisons were made using Student's t test. A P value <0.05 was considered statistically significant. The correlation between the degree of GERD and pulmonary manifestations was assessed by chi-square analysis and Mann-Whitney U tests.
All 515 patients answered the questions regarding reflux symptoms using the modified Likert scale: 73 had mild reflux symptoms, 305 moderate, and 137 severe; females constituted 76.1% of the patients. All patients underwent upper endoscopy and were assessed based on the Los Angeles classification of esophagitis.
According to the endoscopic findings, patients were categorized into (group 1) cases with heartburn and normal endoscopic findings (NERD), 118 patients (22.9%), and (group 2) cases with heartburn and abnormal endoscopic findings (ERD), 397 patients (77.1%). Table 1 outlines some demographic characteristics of the study participants. The mean ages of the NERD and ERD groups were 43.4 years and 40.3 years, respectively. The proportion of females was significantly higher in the ERD group (80.1%) than in the NERD group (62.7%; P <0.02). The duration of reflux symptoms was significantly longer in the ERD group ( P <0.03), and the ERD group was more likely to be overweight (BMI>25; P <0.02). Otherwise, no statistically significant differences were noted between the two groups with regard to history of diabetes or use of proton pump inhibitors.
Relations between respiratory disorders and reflux symptoms
Seventy-seven patients (15%; 4.9% male and 10.1% female) reported a history of pulmonary symptoms that preceded GERD symptoms. There were 294 patients (57.1%) with various pulmonary manifestations. These manifestations were significantly more frequent among females ( P <0.01), and among obese patients and those above 40 years of age ( P <0.001 and 0.05, respectively). The prevalence of different respiratory symptoms in the NERD and ERD groups is shown in Figure 1 .
Strong, statistically significant associations between a number of respiratory symptoms and the grade of reflux symptoms were found in both the NERD and ERD groups [ Table 2 ]. All respiratory symptoms were about twice as common among ERD patients as among NERD patients, especially daily cough. With regard to disease duration, patients with wheezy breathing, daily cough, daily productive cough, or chronic cough showed a statistically significant two-fold to three-fold increase in the risk of reflux symptoms.
Among all patients with respiratory manifestations, the commonest disorders diagnosed were chronic pharyngitis (50.3%), chronic bronchitis (15.8%), bronchial asthma (12.6%) and recurrent pneumonia (3.3%). Also, obstructive sleep apnea was diagnosed in 2.7%; 1.5% of the patients presented with recurrent hemoptysis. Further, there were three cases of chronic lung abscess.
There was a significant difference between the ERD and NERD groups in their association with respiratory disorders ( P <0.001). There was a significant positive correlation between endoscopic grade according to the Los Angeles classification and respiratory symptoms; grades C and D correlated significantly with symptoms of asthma and chronic bronchitis ( P <0.001 and 0.003, respectively) [ Table 2 ].
Pulmonary function tests in NERD and ERD groups
Pulmonary function was measured in both the NERD and ERD groups. There were statistically significant differences in FEV1, FVC, and FEV1/FVC ( P <0.02, P <0.05, and P <0.05, respectively) in the ERD group compared with the NERD group, while there were no significant differences between the groups in PEF or FEF 25-75% [ Table 3 ].
In the ERD group, large airway narrowing was seen more commonly than small airway obstruction, consistent with the more frequent bronchitis and bronchial asthma in the ERD group compared with the NERD group.
Gastro-esophageal reflux disease (GERD) is a common disorder caused by the reflux of gastric contents into the esophagus. According to a recent global definition,[ 12 ] GERD can cause esophageal and extra-esophageal syndromes, which can co-exist, or not, in the same individual. Respiratory manifestations of GERD represent one of the most prevalent and challenging of these extra-oesophageal syndromes. However, the relationship between reflux and respiratory symptoms is frequently difficult to establish with a high degree of certainty.
GERD should be defined by the presence of reflux oesophagitis (Los Angeles grades A–D) and/or when it causes reflux symptoms sufficient to impair quality of life and/or when it is associated with a risk of long-term complications.[ 13 ] The subjects of the present study were selected on the basis of chronic heartburn and not respiratory pathology. For the measurement of symptom severity, self-assessed by the patient, we used the Likert scale, which is considered an optimal objective end point.[ 13 ] "Likert scale" is commonly used to describe symptom scales such as "none, mild, moderate, severe". All patients also underwent upper endoscopy and were categorized based on the Los Angeles classification.
Recent decades have witnessed a dramatic revision of the GERD landscape including manifestations of the disorder that could be seen with the naked eye through an endoscope (erosive reflux disease, ERD) or not seen (non-erosive reflux disease, NERD).[ 14 ] NERD is common and may comprise more than 60% of all chronic heartburn sufferers in the community.[ 15 16 ]
NERD constituted 22.9% of our study population. This is in marked disagreement with two other studies, which reported prevalences of 75% and 80%.[ 17 18 ] The discrepancy may be due to differences in dietary habits, and to the fact that we relied only on endoscopic findings, without pH monitoring. We studied the prevalence of respiratory disorders in relation to both ERD and NERD.
Epidemiologic studies show a moderate association between GERD and a range of pulmonary symptoms. A cross-sectional study of heartburn prevalence in 2,200 participants showed that the incidence of pulmonary symptoms was slightly elevated among those with frequent GERD compared to those without GERD.[ 19 ]
Among all patients studied, 294 (57.1%) had pulmonary manifestations of various kinds. These manifestations were significantly more frequent among females (P<0.01) and among obese patients and those above 40 years of age (P<0.001 and P<0.05, respectively). This prevalence is much higher than that reported in a previous study, in which chronic respiratory symptoms or diseases were present in only 18% of patients with GERD.[ 20 ]
In spite of the higher prevalence found in obese patients, this association was independent of BMI, which is consistent with a previous report.[ 21 ] Confounding could never be completely ruled out. Obesity, for instance, is a well-known risk factor for reflux that has recently been recognized as a risk factor for asthma as well.[ 22 23 ] To reduce the risk of confounding, we adjusted statistically for all plausible confounding variables, including obesity represented by BMI.
Asthma is a highly prevalent disease whose incidence has increased in recent decades, affecting 5% to 10% of the global population.[ 24 ] There is mounting epidemiological evidence of an association between GERD and asthma, as well as a strong correlation between reflux episodes and respiratory symptoms. This association has been intensively studied; patients with esophagitis are more likely to have asthma than patients without esophagitis.[ 25 – 27 ]
Our results showed a statistically significant difference in the prevalence of respiratory symptoms between the ERD and NERD groups (63% vs. 37.2%, respectively). Moreover, we found a significant positive correlation between asthma and chronic bronchitis and endoscopic grades C and D (P<0.001 and P<0.003, respectively). Microaspiration of gastric acid and increases in airway hyper-responsiveness due to esophageal acid are considered potential triggers for asthma.[ 28 ] A number of reviews have reported beneficial effects of medical and surgical antireflux therapy on asthma outcome.[ 29 30 ]
GERD is currently considered the third leading cause of chronic cough, affecting an estimated 20% of patients.[ 31 32 ] Our results showed that 27.2% of our study population complained of chronic cough, with the frequency being higher in the ERD group than in the NERD group (29% vs. 21.2%, respectively).
The cause-and-effect relationship between GERD and chronic cough is controversial for some authors, as is GERD-induced asthma. However, the principal evidence that GERD is the cause of chronic cough is the resolution of symptoms after efficacious antireflux treatment.[ 33 ]
Accordingly, some authors[ 34 ] contend that neither endoscopy of the upper digestive tract, the gold standard for the diagnosis of GERD complications, nor contrast-enhanced radiological examination of the esophagus, stomach and duodenum is capable of establishing a causal relationship between GERD and chronic cough.[ 35 ] However, our study reflects a significant prevalence of chronic cough in patients with erosive esophagitis.
To our knowledge, pulmonary function tests have previously been studied only in patients with chronic respiratory diseases complaining of reflux symptoms. In our study, we performed pulmonary function tests in all patients provisionally diagnosed with GERD and examined the relation between pulmonary function and the ERD and NERD subgroups.
In this study, there were statistically significant differences in FEV1, FVC and FEV1/FVC (P<0.02, P<0.05 and P<0.05, respectively) in the ERD group as compared with the NERD group. On the other hand, there were no significant differences between the two groups in PEF and FEF 25-75%. These results indicate that large airway obstruction is more involved in the ERD group than small airway narrowing, which is supported by the more frequent bronchitis and bronchial asthma in the ERD group as compared with the NERD group.
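The FEV1/FVC ratio on which this comparison turns is a simple quotient. As a rough illustration, the sketch below computes it and applies the conventional fixed-ratio cutoff of 0.70 for an obstructive pattern; the cutoff is an assumption for illustration, not a value taken from this study:

```python
def fev1_fvc_ratio(fev1_l: float, fvc_l: float) -> float:
    """FEV1/FVC as a fraction; both volumes in litres."""
    if fvc_l <= 0:
        raise ValueError("FVC must be positive")
    return fev1_l / fvc_l

def is_obstructive(fev1_l: float, fvc_l: float, cutoff: float = 0.70) -> bool:
    # Conventional fixed-ratio criterion (assumed here, not taken from
    # the study): a ratio below ~0.70 suggests airflow obstruction.
    return fev1_fvc_ratio(fev1_l, fvc_l) < cutoff

print(round(fev1_fvc_ratio(2.1, 3.5), 2))  # 0.6
```

In the study itself, obstruction was inferred from group-level statistical comparisons rather than a per-patient cutoff.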
Vraney and Pokorny[ 36 ] measured pulmonary function in patients with gastroesophageal reflux. Their results were grouped according to smoking history and demonstrated reflux, and the difference noted between the smoker and nonsmoker groups was slightly greater than that between the reflux and non-reflux groups. One of the strengths of the present study is the exclusion of smokers.
Atalay et al .[ 37 ] evaluated respiratory function tests (RFTs) in acid reflux-positive and -negative patients diagnosed by 24-hour pH monitoring. RFTs of reflux-positive patients were not significantly different from those of reflux-negative patients. They concluded that both lung disease and GER have a high prevalence worldwide, and that these conditions are frequently coexistent.
Our study differs from the two previous studies in, firstly, the higher percentage of our patients with pulmonary symptoms and, secondly, our reliance mainly on endoscopic parameters for diagnosing GERD rather than pH monitoring. These differences largely explain why the respiratory function values were significant in our study as compared with the other studies.
The strength of this study lies, firstly, in sub-grouping the patients into ERD and NERD and finding a correlation between respiratory symptoms and these groups and, secondly, in evaluating respiratory function in non-smokers.
In conclusion, this prospective study has revealed a strong link between gastroesophageal reflux symptoms and various respiratory disorders. Endoscopy of the upper digestive tract remains an important element in the evaluation of GERD. Respiratory symptoms are more prevalent among erosive esophagitis patients, with a positive correlation with the degree of severity. There is a direct relationship between the severity of airways obstruction, as detected by FEV1 and FEV1/FVC, and GER symptoms.

Background/Aim:
The relation between respiratory disorders and reflux symptoms has been debated since the beginning of the last century, and interest in this question has increased during the last few decades. This study aims to investigate the relation between specified respiratory disorders and reflux symptoms, and to examine the correlations between respiratory disorders and endoscopic findings in patients with gastroesophageal reflux disease.
Patients and Methods:
This study included 515 patients evaluated for gastroesophageal reflux disease (GERD) by a patient self-report symptom questionnaire (a modified four-grade Likert scale) and endoscopic assessment using the Los Angeles classification. All participants were asked about various respiratory symptoms experienced during the past six months and underwent measurement of body mass index (BMI), medical history taking, pulmonary physical examination, chest X-ray, respiratory function tests and, where available, sleep studies.
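The overweight criterion used in the results (BMI > 25) follows the usual body-mass-index formula, weight in kilograms divided by the square of height in metres; a minimal sketch:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

def is_overweight(weight_kg: float, height_m: float) -> bool:
    # Overweight threshold as used in this study: BMI > 25.
    return bmi(weight_kg, height_m) > 25

print(round(bmi(80, 1.75), 1))  # 26.1
```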
Results:
A total of 515 patients were categorized according to endoscopic findings into two groups: subjects with normal endoscopic studies (NERD; group 1), 118 (22.9%) patients, and subjects with abnormal endoscopic studies (ERD; group 2), 397 (77.1%) patients. The proportion of females was significantly higher in the ERD group (80.1%) as compared with the NERD group (62.7%) (P<0.02). The duration of reflux symptoms was significantly longer in the ERD group (P<0.03). Cases in the ERD group were more likely to be overweight (BMI > 25; P<0.02). A history of pulmonary symptoms preceding GERD symptoms was found in 15% of patients. There were 294 patients (57.1%) with different pulmonary manifestations. These manifestations were significantly more frequent among females (P<0.01) and among obese patients and those above 40 years old (P<0.001 and P<0.05, respectively). Among all patients with respiratory manifestations, the commonest disorders diagnosed were chronic pharyngitis (50.3%), chronic bronchitis (15.8%), bronchial asthma (12.6%) and recurrent pneumonia (3.3%). Obstructive sleep apnea and recurrent hemoptysis were present in 2.7% and 1.5% of the studied patients, respectively. There were three cases of chronic lung abscess. There was a significant difference between the ERD and NERD groups in their relation to respiratory disorders (P<0.001). There were statistically significant differences in FEV1, FVC and FEV1/FVC (P<0.02, P<0.05 and P<0.05, respectively) in the ERD group as compared with the NERD group.
Conclusion:
The study confirms the strong link between gastroesophageal reflux symptoms and various respiratory disorders. Endoscopy of the upper digestive tract remains an important examination in the evaluation of GERD. Respiratory symptoms are more prevalent among erosive esophagitis patients, with a positive correlation with the degree of severity. There is a direct relationship between the severity of airways obstruction, as detected by FEV1 and FEV1/FVC, and GER symptoms.

Gastroesophageal reflux disease (GERD), defined as the presence of symptoms or lesions that can be attributed to the reflux of gastric contents into the esophagus, is one of the most common disorders affecting the gastrointestinal tract. Patients with GERD commonly have symptoms, with approximately 20% experiencing heartburn, acid reflux or both at least once a week and approximately 40% reporting that such symptoms occur at least once a month. If extra-esophageal manifestations are taken into consideration, it is believed that the real prevalence of pathological reflux might be underestimated.[ 1 ]
Unlike the distal esophagus, the airways are not protected by antireflux clearance mechanisms and intrinsic mucosal properties. It is therefore conceivable that even a single reflux episode extending beyond the esophagus may be sufficient to cause pharyngeal, laryngeal, and respiratory symptoms and signs. A second mechanism by which GERD provokes respiratory symptoms is activation of reflexes involving the airways by reflux of gastric contents into the esophagus.[ 2 ]
The endoscopic esophageal changes caused by reflux disease are not only helpful diagnostically, but also identify patients exposed to a significant risk of disease chronicity.[ 3 – 6 ] Further, the severity of esophagitis gives useful guidance as to the likelihood of success of a particular treatment.[ 7 ]
Therefore, the aim of this study was to screen for and investigate the relation between specified respiratory disorders and reflux symptoms, and to examine the correlations between respiratory disorders and endoscopic findings in patients with gastroesophageal reflux disease in a major tertiary hospital in the eastern region of Saudi Arabia.

CC BY | no | 2022-01-12 15:21:47 | Saudi J Gastroenterol. 2010 Apr; 16(2):84-89 | oa_package/8f/52/PMC3016511.tar.gz
PMC3016512 | 20339177

MATERIALS AND METHODS
The present study included 34 children and adolescents (19 males and 15 females) suffering from cholestatic chronic liver disease. The patients were newly diagnosed and selected from the Hepatology Clinic, New Children's Hospital, Cairo University, and were compared with a group of seven children who happened to have an incidental "normal" liver biopsy (the control group). Liver biopsy was indicated in these children as they exhibited persistent/intermittent elevations of liver enzymes for more than 6 months. Informed consent was obtained from the parents of the children according to the guidelines of the ethical committee of the National Research Centre, Dokki, Egypt.
The patients were divided into 3 groups:
Group I: Extrahepatic biliary atresia (EHBA) (n=13)
Group II: neonatal hepatitis (NH) (n=15)
Group III: paucity of intrahepatic bile ducts (PIBD) (n=6)
Exclusion criteria
Patients were excluded if they had acute viral hepatitis, autoimmune hepatitis, Wilson disease, or concurrent conditions in which free radical production is elevated (such as inflammatory processes outside the liver), or if they were taking antioxidant drugs that affect free radical scavenging, such as vitamins C, A and E, at the time of biopsy.
All cases were subjected to:
Full history taking, including personal history details, past and family histories, and vaccination history
Thorough clinical examination, including anthropometric measurements (height and weight), vital signs, and general and local abdominal examination
Routine laboratory investigations, including urine and stool analysis, complete blood count using a Coulter counter, and assessment of liver functions (total and direct bilirubin; alanine and aspartate transaminases; gamma glutamyl transpeptidase; serum total proteins and albumin; prothrombin time and concentration)
Other laboratory investigations when needed, e.g., IgG
Abdominal ultrasonography
GPx, SOD and CAT levels in fresh liver tissue (cell-free homogenates) using enzyme-linked immunosorbent assay
Collection of samples
A liver tissue core was taken from each case using a modified Menghini needle (Surecut). Each sample was divided into three portions, one for estimation of each enzyme, and stored at -80°C until assay.
GPx assay
The liver biopsy was washed in phosphate buffer, pH 7.4. The tissue was then homogenized in 5 ml/g of cold buffer, which consisted of 50 mM Tris-HCl, pH 7.5, 5 mM EDTA and 1 nM dithiothreitol. The homogenate was centrifuged at 10,000 g for 15 minutes at 4°C, and the supernatant was removed for assay of GPx.
GPx activity was measured using Glutathione Peroxidase Assay Kit provided by Cayman Chemical Company, USA. GPx catalyzes the reduction of hydroperoxides, including H 2 O 2 , by reduced glutathione and functions to protect the cell from oxidative damage. With the exception of phospholipid-hydroperoxide GPx, a monomer, all GPx enzymes are tetramers of four identical subunits. Each subunit contains a selenocysteine in the active site, which participates directly in the two-electron reduction of the peroxide substrate. The enzyme uses glutathione as the ultimate electron donor to regenerate the reduced form of the selenocysteine.[ 10 ]
The Cayman Chemical Glutathione Peroxidase Assay Kit measures GPx activity indirectly by a coupled reaction with glutathione reductase (GR). Oxidized glutathione (GSSG), produced upon reduction of hydroperoxide by GPx, is recycled to its reduced state by GR and NADPH.
Oxidation of NADPH to NADP+ is accompanied by a decrease in absorbance at 340 nm. Under conditions in which the GPx activity is rate limiting, the rate of decrease in the A340 is directly proportional to the GPx activity in the sample.[ 11 ]
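The kinetic readout described above (rate of A340 decrease proportional to GPx activity) converts to an activity through the Beer–Lambert law, using the NADPH extinction coefficient at 340 nm (about 6.22 mM⁻¹·cm⁻¹, a standard value). The sketch below is generic; the default volumes are illustrative assumptions, and kit-specific dilution corrections are omitted:

```python
NADPH_EXT_COEFF_MM = 6.22  # NADPH at 340 nm, mM^-1 cm^-1 (standard value)

def gpx_activity(delta_a340_per_min: float,
                 path_cm: float = 1.0,
                 reaction_vol_ml: float = 1.0,
                 sample_vol_ml: float = 0.02) -> float:
    """Approximate GPx activity as nmol NADPH oxidized/min per mL of sample.

    Beer-Lambert: rate (mM/min) = (dA340/min) / (epsilon * path length).
    The default volumes are illustrative, not the kit's actual values.
    """
    rate_mm_per_min = delta_a340_per_min / (NADPH_EXT_COEFF_MM * path_cm)
    # mM/min equals nmol/uL/min, so scale by the reaction volume in uL
    nmol_per_min = rate_mm_per_min * reaction_vol_ml * 1000.0
    return nmol_per_min / sample_vol_ml  # normalize to sample volume

print(round(gpx_activity(0.0622), 1))  # 500.0
```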
SOD assay
The liver biopsy was washed with 0.9% NaCl to remove red blood cells. The tissue was then blotted dry, weighed, homogenized in 200 μL of buffer (0.05 M potassium phosphate and 0.1 mM EDTA, pH 7.8) and centrifuged at 15,000 g for 30 minutes at 4°C. The supernatant was used for determination of SOD.
SOD was measured using the BIOXYTECH SOD-525 assay kit provided by Oxis Research, USA. The method is based on the SOD-mediated increase in the rate of autoxidation of 5,6,6a,11b-tetrahydro-3,9,10-trihydroxybenzo[c]fluorene (reagent R1) in aqueous alkaline solution to yield a chromophore with maximum absorbance at 525 nm.[ 12 ]

Interference due to mercaptans (RSH), such as reduced glutathione, is controlled by pretreating samples with 1-methyl-2-vinylpyridinium (reagent R2), which directly eliminates mercaptans by means of a fast alkylation reaction. The kinetic measurement of the 525 nm absorbance change is performed after the addition of R1.
The SOD activity is determined from the ratio of the autoxidation rates in the presence (Vs) and in the absence (Vc) of SOD. The Vs/Vc ratio as a function of SOD activity is independent of the type of SOD (Cu/Zn-SOD, Mn-SOD, Fe-SOD) being measured.[ 12 ] One SOD-525 activity unit is defined as the activity that doubles the autoxidation rate of the control blank (Vs/Vc=2).
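The Vs/Vc measurement can be sketched as two least-squares slopes of the 525 nm traces and their ratio. Converting a ratio to SOD-525 units would need the kit's calibration curve, so the sketch stops at the ratio; the only anchor from the text is that Vs/Vc = 2 corresponds to one activity unit:

```python
def autoxidation_rate(times_min, a525_values):
    """Least-squares slope of A525 versus time (absorbance units/min)."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_a = sum(a525_values) / n
    num = sum((t - mean_t) * (a - mean_a)
              for t, a in zip(times_min, a525_values))
    den = sum((t - mean_t) ** 2 for t in times_min)
    return num / den

def vs_vc_ratio(sample_trace, control_trace):
    """Vs/Vc: sample-to-control ratio of autoxidation rates.

    By the kit's definition quoted above, Vs/Vc = 2 corresponds to one
    SOD-525 activity unit; other ratios need the kit calibration curve.
    """
    (t_s, a_s), (t_c, a_c) = sample_trace, control_trace
    return autoxidation_rate(t_s, a_s) / autoxidation_rate(t_c, a_c)

# Synthetic traces: the sample autoxidizes twice as fast as the control.
times = [0.0, 1.0, 2.0, 3.0]
ratio = vs_vc_ratio((times, [0.10, 0.20, 0.30, 0.40]),
                    (times, [0.10, 0.15, 0.20, 0.25]))
print(round(ratio, 3))  # 2.0
```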
CAT assay
The biopsy was washed with phosphate buffer, pH 7.4, to remove red blood cells. The tissue was then blotted dry, weighed, homogenized in 1.5 ml of cold buffer (50 mM potassium phosphate and 1 mM EDTA, pH 7) and centrifuged at 10,000 g for 15 minutes at 4°C. The supernatant was used for the assay.
CAT was measured using the Catalase Assay Kit provided by Cayman Chemical Company, USA. CAT is a ubiquitous antioxidant enzyme present in most aerobic cells. It is involved in the detoxification of H 2 O 2 , catalyzing the conversion of two molecules of H 2 O 2 to molecular oxygen and two molecules of water (catalytic activity). CAT also demonstrates peroxidatic activity, in which low-molecular-weight alcohols can serve as electron donors, while the aliphatic alcohols serve as specific substrates. In human beings, the highest levels of CAT are found in the liver, kidney and erythrocytes, where it is believed to account for the majority of H 2 O 2 decomposition.
The Cayman Chemical Catalase Assay Kit utilizes the peroxidatic function of CAT for determination of enzyme activity. The method is based on the reaction of the enzyme with methanol in the presence of an optimal concentration of H 2 O 2 . The formaldehyde produced is measured spectrophotometrically with 4-amino-3-hydrazino-5-mercapto-1,2,4-triazole as the chromogen. The assay can be used to measure CAT activity in plasma, serum, erythrocyte lysates, tissue homogenates and cell lysates.[ 13 ]
Statistical analysis
SPSS for Windows, version 7.0, was used for statistical analysis. P<0.05 was considered statistically significant. One-way analysis of variance followed by post hoc comparison procedures was used to compare three or more independent means. The t test was used to compare two independent means. The Pearson correlation coefficient (r) was used to measure the linear relationship between two quantitative variables. Data are represented as the mean±standard deviation.

RESULTS
Descriptive data of the children included in the study are represented in Table 1 . The mean levels of SOD, GPx and CAT enzymes in hepatic tissue of cholestatic patients and control children are shown in Table 2 . A significant increase in the mean levels of SOD, GPx and CAT was found in hepatic tissue of cholestatic patients compared with that in control children. Table 3 shows the mean levels of the three enzymes in the three cholestatic patient groups compared with control children. The three enzymes were significantly increased in the extrahepatic biliary atresia group, whereas in the groups of neonatal hepatitis and paucity of intrahepatic bile ducts, only the GPx and CAT enzymes were significantly increased. Studying the correlations between liver function tests (total bilirubin, direct bilirubin, alanine transaminase, aspartate transaminase, albumin, prothrombin time, prothrombin concentration, alkaline phosphatase and gamma glutamyl transferase) and the three enzymes, SOD, GPx and CAT, in cholestatic patients revealed a significant positive correlation only between ALT and SOD levels (r=0.419, P≤0.05).

DISCUSSION
Oxidative stress is a major pathogenetic event occurring in several liver disorders, ranging from metabolic to proliferative ones, and is a major cause of liver damage in ischemia/reperfusion during liver transplantation.[ 14 ] Many studies have shown that oxidative stress takes part in the pathogenesis of cholestasis by way of cytokines,[ 15 – 18 ] and lipid peroxidation is responsible for the tissue injury in cholestasis.[ 19 ]
Our study showed a significant increase in hepatic tissue SOD, GPx and CAT activities in patients with cholestasis compared with the control group [ Table 2 ]; SOD, GPx and CAT were significantly increased in the EHBA group, whereas GPx and CAT were significantly increased in both the NH and PIBD groups [ Table 3 ]. Many studies in human beings and rats have been conducted to clarify the relationship between oxidative stress and antioxidants in the liver. In human beings, exposure to free oxygen radicals activates a complex defense system, which includes GPx, SOD, CAT, glutathione, glutathione reductase and vitamins.[ 20 ] Impairment of the bile flow is likely to result in the accumulation of toxic hydrophobic bile salts within the hepatocytes, with consequent injury caused by their detergent effect. Furthermore, bile salts can cause mitochondrial dysfunction by interfering with electron transport, with consequent H 2 O 2 and superoxide formation.[ 21 ]
In agreement with our results, Ercin et al [ 20 ] found an increase in the levels of erythrocyte GPx and CuZnSOD in adult patients with cholestasis versus controls; however, their results were statistically nonsignificant. Broide et al [ 9 ] showed a significant increase in SOD levels in the liver of extrahepatic biliary atresia and Alagille patients, whereas a nonsignificant increase was noted among neonatal hepatitis patients. They explained that SOD, a key enzyme in free radical protection, increases significantly in the liver tissue of infants with cholestatic liver disease due to bile duct damage, suggesting that products of free radical reactions are involved in the pathogenesis of these disorders. A significant increase in serum SOD activity has also been reported in another study in adult patients.[ 22 ]
Contrary to our results, Togashi et al [ 23 ] studied the levels of ZnSOD, CuSOD and CAT in liver and found them to be low and it was suggested that this was related to the effect of oxidative stress. In another recent study, a marked decrease in the antioxidant status was observed in serum and neutrophil homogenate of patients with chronic liver diseases in comparison with healthy subjects.[ 24 ] This study concluded that deficient antioxidant defense mechanisms may lead to excess oxygen free radical formation that promotes pathological processes in the liver.
In conclusion, the current study demonstrates that GPx, CAT and SOD levels are increased in hepatocytes of patients with chronic cholestasis. The significant increase in their levels may point to their role as key enzymes in the protection of the liver from the hazardous products of free radical reactions, and may reflect an appropriate activity of antioxidant barrier enzymes as a response to increased oxidative stress. Better knowledge of the redox regulation may have important clinical ramifications in understanding the pathogenesis of liver diseases and developing therapeutic approaches. These preliminary results are encouraging to conduct more extensive clinical studies combining antioxidant therapy with various treatments of chronic liver diseases.

Background/Aim:
To study the oxidative stress status in children with cholestatic chronic liver disease by determining activities of glutathione peroxidase (GPx), superoxide dismutase (SOD) and catalase (CAT) in liver tissue.
Materials and Methods:
A total of 34 children suffering from cholestatic chronic liver disease were studied. They were selected from the Hepatology Clinic, Cairo University, and compared with seven children who happened to have incidental normal liver biopsy. The patients were divided into three groups: extrahepatic biliary atresia (n=13), neonatal hepatitis (n=15) and paucity of intrahepatic bile ducts (n=6); GPx, SOD and CAT levels were measured in fresh liver tissue using ELISA.
Results:
In the cholestatic patients, a significant increase was found in mean levels of SOD, GPx and CAT in hepatic tissue compared to control children. The three enzymes significantly increased in the extrahepatic biliary atresia group, whereas in the groups of neonatal hepatitis and paucity of intrahepatic bile ducts, only GPx and CAT enzymes were significantly increased.
Conclusion:
Oxidative stress could play a role in the pathogenesis of cholestatic chronic liver diseases. These preliminary results are encouraging to conduct more extensive clinical studies using adjuvant antioxidant therapy.

Reactive oxygen species (ROS) are well recognized for playing a dual role as both deleterious and beneficial species. ROS are normally generated by tightly regulated enzymes, such as nitric oxide synthase. Beneficial effects of ROS (e.g., superoxide radical) occur at low/moderate concentrations and involve physiological roles in cellular responses to noxia, such as in defense against infectious agents, in the function of a number of cellular signaling pathways, and the induction of a mitogenic response. In contrast, overproduction of ROS [arising either from mitochondrial electron transport chain or excessive stimulation of NAD(P)H] results in oxidative stress.[ 1 ]
Oxidative stress is an abnormal phenomenon occurring inside our cells or tissues when the production of oxygen radicals exceeds their antioxidant capacity. An excess of free radicals damages essential macromolecules of the cell, leading to abnormal gene expression, disturbances in receptor activity, proliferation or cell death, immune perturbation, mutagenesis, and protein or lipofuscin deposition.[ 2 ] Antioxidant enzymes catalyze the decomposition of ROS. The three major antioxidant enzymes, superoxide dismutase (SOD), glutathione peroxidase (GPx) and catalase (CAT), differ from each other in structure, tissue distribution and cofactor requirement.[ 3 ]
The SOD is a group of metalloenzymes whose function appears to be protection of cells from the toxic effects of the endogenously generated superoxide radicals.[ 4 ] GPx is a selenocysteine-dependent enzyme that protects against oxidative injury. Endogenous activity of GPx is dependent on an adequate supply of the micronutrient selenium.[ 5 ] GPx enzymes are the most important hydrogen peroxide (H 2 O 2 )-removing enzymes in mammalian cells.[ 6 ] CAT is an essential enzyme in the decomposition of intracellular H 2 O 2 . It promotes the breakdown of H 2 O 2 into water and oxygen without producing free radicals. CAT, which is located in peroxisomes, is found in essentially all aerobic cells.[ 4 5 ]
Disturbances in the antioxidant system could play a role in pathogenesis of chronic liver disease.[ 7 8 ] Liver disease in infancy has multiple etiologies. As reactive oxygen intermediates are involved in several types of tissue damage, it has been investigated whether different forms of liver disease in infancy are associated with increased free radical generation, using an indirect approach in which SOD (a free radical scavenger) activity is determined in the liver tissue.[ 9 ]
To our knowledge, no studies to evaluate the oxidative status in pediatric liver tissue have been performed so far; therefore, our aim was to study the oxidative status in children with cholestatic chronic liver disease using an indirect approach in which the antioxidant enzymes, namely GPx, SOD and CAT, were determined in the liver tissue.

CC BY | no | 2022-01-12 15:21:47 | Saudi J Gastroenterol. 2010 Apr; 16(2):90-94 | oa_package/c2/d5/PMC3016512.tar.gz
PMC3016513 | 20339178

MATERIALS AND METHODS
We performed a retrospective analysis of all patients diagnosed with AIH, based on the criteria of the International Autoimmune Hepatitis Group,[ 7 8 ] at the hepatology clinics of KAUH from 1994 to 2008. For each patient, age, sex, clinical presentation at diagnosis, and laboratory and immunological data were obtained. Liver biopsy results were also included if available. Data on responses to therapy and the outcome at the end of the follow-up period were also collected. Patients were excluded from the analysis if evidence for a diagnosis of AIH was insufficient, if the medical record was incomplete because of poor follow-up, or if the patient had another coexisting liver disease, e.g., nonalcoholic fatty liver disease (NAFLD) or chronic hepatitis C (CHC). An asymptomatic presentation was defined as abnormal liver enzymes for more than 6 months with positive immunological data, liver biopsy features suggestive of AIH, and absence of other causes of liver disease, including drugs. Acute hepatitis was defined as acute symptoms, including fever, jaundice and right upper abdominal pain, with serum alanine aminotransferase more than 500 U/L. Decompensated cirrhosis was defined as the presence of one of the following features: ascites, variceal bleeding, hepatic encephalopathy, bacterial peritonitis, low serum albumin or prolonged prothrombin time (PT). The presence of other coexisting autoimmune diseases (AIDs) was also recorded.
Laboratory data included the following: liver function tests, assessed by the Dimension clinical chemistry system Flex reagent cartridge (serum alanine aminotransferase [ALT; normal, 30–65 U/L], aspartate aminotransferase [AST; normal, 15–37 U/L], alkaline phosphatase [ALP; normal, 50–136 U/L], gamma-glutamyl transferase [GGT; normal, 5–85 U/L], total protein [TP; normal, 64–82 g/L], albumin [Alb; normal, 35–50 g/L], and total and direct bilirubin [normal, 0–17 and 0–5 μmol/L, respectively]). A complete blood count (CBC) was done at the time of diagnosis (white blood cells [WBC; normal, 3–11 KU/L], hemoglobin [Hb; normal, 12–17 g/dL], platelet count [normal, 100–400 KU/L]). Cytopenia caused by hypersplenism was considered to be present if the WBC count was less than 3 × 10³/μL and/or the platelet count was less than 100 × 10³/μL in the presence of splenomegaly on abdominal ultrasound or CT examination.
Hepatitis serology was performed by ELISA (enzyme-linked immunosorbent assay) for hepatitis B virus (HBSAg, HBeAg, HBeAb, HBcAb) and hepatitis C virus (HCVAb) in all patients; also in patients with acute presentation, the results of hepatitis A virus (HAVAb-IgM) and hepatitis E virus (HEVAb if available) were obtained.
Results of testing for Wilson's disease by 24-hour urine copper and serum copper, and of transferrin saturation for possible hemochromatosis, were reviewed. We also obtained the results of immunological assessment: antinuclear antibody (ANA), performed by indirect immunofluorescence (IIF; weakly positive, 1/40; strongly positive, 1/1280); smooth muscle antibody (SMA), detected by ELISA; liver kidney microsomal-1 antibody (LKM-1), detected by ELISA; antimitochondrial antibody (AMA), detected by ELISA; immunoglobulin G (IgG) level, determined by the nephelometer method (normal range, 5.4–16.1); and anti-neutrophil cytoplasmic antibody (ANCA), detected by IIF.
The results of upper abdominal ultrasound and/or abdominal computed tomography (CT) at the time of diagnosis were also reviewed. The result of upper gastrointestinal endoscopic examination at presentation was obtained; the presence of esophageal or gastric varices and/or portal hypertensive gastropathy was considered to indicate decompensated portal hypertension, even in the absence of a history of bleeding. For patients who had cholestasis at presentation, the results of endoscopic retrograde cholangiopancreatography (ERCP) were included. Liver biopsy results, if available, were also obtained.
Data regarding treatment and follow-up
Data were collected on modes of treatment, starting and maintenance doses, duration of treatment, treatment response, treatment withdrawal and side effects of medication; information was also obtained about any medication other than prednisolone and azathioprine (AZA) added to control the disease.
ALT and serum bilirubin results at diagnosis and at 3, 6, 12 and 24 months after starting treatment (if available) and at the end of the follow-up period were obtained.
Response to treatment was considered complete if the serum ALT level dropped to the normal range within 6-24 months of treatment, together with normalization of the serum bilirubin if it was elevated before treatment. Reduction of the ALT level to below the normal range after 24 months of treatment, or by the end of the follow-up period if this was less than 2 years, was considered an incomplete response. Patients who failed to achieve a reduction of the serum ALT and bilirubin levels, or had an elevation in these within the 24 months, were considered nonresponders.
Relapse was defined as elevation of ALT to above normal or to pre-treatment level after an initial response. Duration of follow-up for each patient was recorded. Patients who had progression to decompensated cirrhosis during the follow-up period were identified. Mortality was defined as death during the follow-up period.
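As a purely illustrative aside (not part of the study's methods), the response definitions above reduce to a simple decision rule. The function name, simplified inputs and the handling of a non-normalizing bilirubin are assumptions made for this sketch:

```python
def classify_response(months_to_normal_alt, bilirubin_normalized=True):
    """Illustrative mapping of the response definitions above.

    months_to_normal_alt: months of treatment until serum ALT first reached
    the normal range, or None if it never did.
    bilirubin_normalized: whether an initially elevated bilirubin also
    normalized (True if bilirubin was never elevated).
    """
    if months_to_normal_alt is None or not bilirubin_normalized:
        return "non-response"
    if months_to_normal_alt <= 24:
        return "complete"
    # Normalization only after 24 months of treatment (or only by the end
    # of a shorter follow-up) is counted as an incomplete response here.
    return "incomplete"
```

For example, `classify_response(5)` gives `"complete"` and `classify_response(None)` gives `"non-response"`.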
Statistical method
The Statistical Package for the Social Sciences (SPSS) for Windows, version 15, was used. Means, standard deviations and frequencies were computed. The chi-square test was used to assess the relation between categorical variables. The independent-sample t test was used to relate serum ALT and bilirubin levels to the severity of liver disease. A P value of less than 0.05 was considered significant.

RESULTS
From July 1994 to June 2008, 41 patients were diagnosed with AIH; 33 patients were included in the analysis and 8 were excluded. The mean age at presentation was 32.3 years (range, 10–65 years), and 22 (66.7%) patients were aged 35 years or younger. Twenty-five (75.7%) patients were female and 8 were male. Sixty percent of the patients were Saudi. Twelve (36.4%) patients had acute hepatitis at presentation. Jaundice was the most common symptom; however, 7 patients were asymptomatic. Table 1 lists symptoms at presentation. Fifteen (45.5%) patients had decompensated cirrhosis [Figure 1]. Six patients had hypersplenism (thrombocytopenia, leukopenia or both). Serum ALT and bilirubin levels ranged from modestly elevated to very high, and were significantly higher in patients with acute hepatitis than in asymptomatic and chronic patients (P<0.001 and P=0.03, respectively). Three patients had cholestasis (higher ALP and GGT relative to ALT and AST) at presentation. Serum IgG results at diagnosis were available for 26 patients: 1 had a normal level, 9 had mild elevation (less than 1.5 times normal) and 16 had levels more than 1.5 times normal. Baseline laboratory results are shown in Table 2. The ANA result at diagnosis was available for 31 patients; it was mildly positive in 12 patients, moderately positive in 11 and strongly positive in 8. The smooth muscle antibody (SMA) result was available for 31 patients; it was negative in 7 patients, moderately positive in 17 and strongly positive in 7. AMA was negative in all patients. The LKM-1 result was available for 27 patients and was negative in all of them.
Hepatitis serology was negative in all patients for HBV and HCV, and for HAV in patients with an acute presentation; only 3 patients were tested for HEV antibody, and it was negative. One third of the patients were tested for p-ANCA, which was positive in only 2. Abdominal ultrasound and/or CT scan results at presentation were available for 31 patients: 17 had a normal examination, 6 had evidence of cirrhosis without ascites and 8 had advanced cirrhosis with ascites. Only 12 patients had a liver biopsy performed at the time of diagnosis, and it was consistent with AIH in all of them. In the 15 patients with decompensated cirrhosis at presentation, liver biopsy was not performed; 3 patients had liver biopsy performed at other centers, for which no results were available in our records; and 2 patients refused liver biopsy. Upper gastrointestinal endoscopy at diagnosis was performed in 29 patients: 21 were normal, 6 had esophageal varices and 2 had portal hypertensive gastropathy. Six patients had coexisting autoimmune disease(s): 3 had systemic lupus erythematosus (SLE), 2 had rheumatoid arthritis and 1 had SLE and Hashimoto's disease.
Treatment
Thirty-one patients received treatment. Two patients were not treated because of persistently normal liver enzymes: one had only a transient elevation of ALT (to 100 U/L) at the time of diagnosis of AIH, which then dropped spontaneously to normal; the other had decompensated cirrhosis at presentation. The starting dose of prednisolone was 25–40 mg/day and the maintenance dose was 5–15 mg/day. AZA was added when the response to prednisolone was incomplete or as a steroid-sparing agent after response to prednisolone; its starting and maintenance doses were both 50–100 mg/day. Seventeen (54.8%) of the treated patients (13 with compensated and 4 with decompensated cirrhosis) had a complete response, 11 (35.5%; 7 with decompensated cirrhosis) had an incomplete response, and 3 patients with decompensated cirrhosis had no response to treatment. The duration to complete response varied from 1 to 20 months, with a mean of 5 to 6 months. In general, patients with advanced cirrhosis were less likely to respond to treatment (P=0.016). Two patients with advanced cirrhosis did not respond, and treatment was withdrawn by the physician. One patient with a complete response developed a disease flare after 5 years of maintenance therapy when treatment was interrupted, and treatment was subsequently resumed. Nine patients stopped treatment on several occasions; 4 of them had one relapse and 2 had frequent relapses. One patient who had AZA-induced pancytopenia was switched to mycophenolate mofetil (MMF) 750 mg twice daily after the second relapse. Four patients had a relapse while on maintenance treatment, 3 of whom had decompensated cirrhosis. Seventeen patients had side effects from prednisolone, AZA or both [Table 3]. As expected, diabetes was the most common side effect.
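A 2×2 comparison of this kind (treatment response versus compensated/decompensated status) is what the chi-square test named in the methods evaluates. The sketch below uses only the standard library; the counts are illustrative rather than the study's raw data, and 3.841 is the chi-square critical value at P=0.05 for 1 degree of freedom:

```python
def chi2_2x2(a, b, c, d, yates=True):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]],
    with Yates' continuity correction applied by default."""
    n = a + b + c + d
    # Expected counts: (row total * column total) / grand total, per cell.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    corr = 0.5 if yates else 0.0
    return sum(max(abs(o - e) - corr, 0.0) ** 2 / e
               for o, e in zip((a, b, c, d), expected))

# Illustrative counts: rows = compensated / decompensated,
# columns = complete response / not.
stat = chi2_2x2(13, 3, 4, 11)
print(stat > 3.841)  # True -> association significant at P < 0.05 (df = 1)
```

With the correction disabled (`yates=False`), the statistic is larger; the corrected form is the conservative choice for small 2×2 tables.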
Two patients progressed to decompensated cirrhosis during the follow-up period; both had variceal bleeding, hypersplenism and encephalopathy, and both were referred for liver transplant. Four patients died from liver-disease-related complications, and 1 died during pregnancy as a result of a pregnancy-related complication. Although the difference was not statistically significant, patients with advanced cirrhosis had a higher chance of death during the follow-up period. The longest duration of follow-up was 14 years and the shortest was 2 months (1 patient died of fulminant acute AIH); 69.7% of the patients were followed up for more than 2 years, with a mean of 4 years.
Twenty-four of the 28 survivors are still on active follow-up at our hepatology clinics.

DISCUSSION
The prevalence of AIH among patients with liver disease in Saudi Arabia is not known;[16] it may be much lower than in North American and European countries. Kalaf and colleagues, in their review of 112 liver transplantations (LTs) in Riyadh, found that 14.3% of LT indications were due to AIH.[17] In our cohort, 22 patients were aged 35 years or younger, and 50% of them had decompensated cirrhosis at presentation. This is in contrast to previously reported data from Japan and the United States, in which older patients were reported to have more advanced disease.[4,5] It is even higher than the rate reported in an Indian study (34.2%).[6] Acute hepatitis at presentation was observed in 36.4% of our patients, either newly diagnosed or as a flare of established cirrhosis. This is comparable to the North American and European figures of 26% to 40%,[18,19] but significantly higher than the Indian rate of 13.1%.[6] Asymptomatic patients were more likely to have lower serum ALT than symptomatic patients (P=0.05). This is similar to the data reported by Feld and colleagues.[12] Three females had cholestasis at presentation and were initially thought to have primary biliary cirrhosis, although their autoimmune profiles were consistent with AIH; 2 of these patients had liver biopsy features of AIH. These three patients had a good initial response to treatment and favorable long-term follow-up compared to patients with a hepatocellular pattern at presentation. Huang and colleagues from Taiwan reported similar data on AIH patients with cholestasis.[20] The present study found positive ANA in 87.8% of patients and positive SMA in 72.7%.
This is higher than the figure reported by Czaja of 67% for ANA and SMA together.[21] In the most recently published criteria for the diagnosis of AIH by Hennes and colleagues, a serum IgG of 1.44 times the upper normal limit (UNL) was found to be the best diagnostic predictor of AIH.[9] In our cohort, the serum IgG level was more than 1.44 times the UNL in 61.5% of the patients. All our patients had type 1 AIH; none had positive LKM-1, whereas in previous reports 4% to 20% of patients had LKM-1-positive type II AIH.[19] Older patients with AIH were more likely to have coexisting autoimmune diseases, similar to what has been reported by Czaja and Carpenter.[5] One patient had disease onset during pregnancy, and 2 other patients had a flare of the disease during pregnancy; all of them had remission on prednisolone and a favorable outcome of pregnancy. A similar outcome of AIH with onset during pregnancy was reported by Floreani and colleagues.[2] We had a 54.8% complete response rate, which is lower than the internationally reported rates of 65% at 18 months and 80% at 3 years.[19] Neither the initial ALT level nor the duration of symptoms predicted the response, but decompensated patients were less likely to respond than compensated patients (P=0.016). Fulminant forms of AIH have high mortality in untransplanted patients;[22] in our cohort, one 64-year-old woman had fulminant AIH. Data on MMF in AZA-intolerant AIH patients showed a 43% remission rate;[23] our 1 patient with AZA-induced pancytopenia responded well to MMF. Genetic factors are thought to influence disease severity in AIH.[24] This may be the reason behind the different clinical patterns and severity of AIH in our patients compared to patients from other countries.
Our study is limited by the relatively small number of patients and by its retrospective design; a prospective, well-structured, multicenter national study with complete patient data would give more accurate local figures for AIH.

CONCLUSION
These results show that many of our AIH patients are young, with advanced disease at presentation and a poor response to treatment, compared to patients in other countries. Early recognition and treatment of AIH in Saudi Arabia are essential to avoid the complications of liver cirrhosis and to reduce the need for liver transplant in this medically treatable condition. National data about AIH from different regions of the country are needed for a better understanding of the local disease prevalence among patients with chronic liver disease, for identification of clinical and laboratory patterns, and for assessing the response to therapy.

Background/Aim:
Autoimmune hepatitis (AIH) is a common cause of end-stage liver disease worldwide. It is prevalent in children and adults, with female predominance and variable clinical presentations. AIH responds favorably to steroids and immunomodulators. Diagnosis of AIH is based on clinical and laboratory criteria, as suggested by the International Autoimmune Hepatitis Group. Data on the disease pattern of AIH from Middle East countries are scarce.
Materials and Methods:
In this retrospective analysis, we studied clinical and laboratory features, immunological data, radiological findings, liver biopsy findings and response to therapy in patients with AIH seen at the hepatology clinics of King Abdul Aziz University Hospital, Jeddah, from 1994 to 2008.
Results:
We diagnosed 41 patients with AIH, and 33 were included in the analysis. The mean age was 32.3 years, with a female predominance of 75.7%. Decompensated cirrhosis at presentation was found in 45.5% of the patients. Acute hepatitis was associated with significantly higher serum ALT and bilirubin levels (P=0.001 and P=0.03, respectively). All our patients had type 1 AIH. Treatment with prednisolone and azathioprine resulted in complete or partial remission in the majority of the patients (54.8%). However, patients with advanced disease showed a poorer response to treatment (P=0.016). Six patients with poor compliance had a relapse of AIH. Two patients had a flare of the disease during pregnancy, and they responded well to prednisolone. The longest follow-up was 14 years and the shortest was 2 months. Four patients died from liver disease.
Conclusion:
AIH patients in Saudi Arabia are likely to present with advanced disease at a young age and to have a poorer response to therapy as compared with patients in other countries worldwide.

Autoimmune hepatitis (AIH) is one of the leading causes of liver cirrhosis and end-stage liver disease worldwide. The international prevalence of AIH among patients with liver disease is between 11% and 20%.[1] It is usually a disease of children and young-to-middle-aged females,[2,3] but it can also affect older individuals and males.[4,5] The clinical presentations of the disease are variable, from asymptomatic abnormal liver enzymes to fulminant liver failure or advanced decompensated cirrhosis.[2,5,6] A diagnostic scoring system for AIH was established in 1993 by the International Autoimmune Hepatitis Group[7]; it was revised and updated in 1999,[8] and most recently simplified criteria were established in 2008.[9]
Treatment of AIH with steroids and immunomodulators results in remission in most patients.[2,4,10] The prognosis of AIH varies with disease severity and progression, although the disease generally has a favorable outcome, with 10-year survival free of liver-related death or liver transplant of 83.3% and 89.5% in asymptomatic and symptomatic patients, respectively, and a 23-year cumulative transplant-free survival of 73.5%.[2,11,12] After liver transplant, AIH patients have a 5-year survival rate similar to that of patients with genetic liver disease.[13] In Asia and the Middle East countries, including Saudi Arabia, AIH may be under-diagnosed or overlooked owing to the high prevalence of chronic hepatitis B and C.[14,15] Fewer reports have been published about AIH in Asia than in Europe and North America.[6] Apart from a few reported cases, there are no published local data about the clinical presentations and prognosis of AIH in Saudi Arabia. In the following retrospective cohort study, we report the clinical presentations, laboratory results, responses to therapy and prognostic outcomes of patients with AIH admitted to King Abdul Aziz University Hospital (KAUH), the main university hospital in Jeddah, Saudi Arabia.
The aim of the study was to investigate the clinical patterns and laboratory and immunological features of AIH, and also to assess the responses to therapy and prognosis of AIH patients at KAUH in Jeddah, Saudi Arabia.

Saudi J Gastroenterol. 2010 Apr; 16(2):95-99
PMC3016566 | 21224963

The global economic constraints and increasing financial awareness of the 1970s led to an increase in the incidence of ambulatory surgery. Recent advances in medical technology, anaesthesia and pain management have allowed a huge expansion of this modality of care, with a consequent reduction in the need for hospitalization. Ambulatory anaesthesia facilities may be attached to the main hospital itself, office based, or free standing. The convenience and low overhead costs continue to attract more surgeries to the ambulatory setting. Several experts predict that in the years to come, nearly 80% of all surgeries performed in the United States will be on an ambulatory basis. But the standard of anaesthesia care is the same as that applicable to hospital-based surgeries, even for the most minor procedures. Backup emergency care, either at the same place or at a nearby hospital, must be available.
Ambulatory surgical practice offers several advantages to patients, doctors and insurance companies, and the cost is expected to be 25–75% lower than that of a similar inpatient procedure. Studies have reported that the total cost of anaesthesia and recovery using spinal anaesthesia is similar to that for general anaesthesia (GA).[1] But for the successful conduct of anaesthesia, careful patient selection, choice of surgeries based on the facilities available, appropriate preparation of the patient and planning are essential. The Federated Ambulatory Surgery Association (FASA) has observed that there is no significant relationship between pre-existing diseases and the incidence of postoperative complications in an ambulatory setting. Even the extremes of age are not a deterrent to ambulatory practice, provided proper attention is paid to discharge planning. Children are excellent candidates for ambulatory surgery, as it involves minimal separation from parents and minimal exposure to a potentially contaminated hospital environment.
PRE-ANAESTHETIC EVALUATION
In addition to reducing anxiety about the surgery and anaesthesia, pre-anaesthetic evaluation allows the anaesthesiologist to identify potential medical problems in advance, determine their aetiology, and if indicated, initiate appropriate corrective measures. The goals must be to resolve preoperative problems well in advance, thereby minimising the numbers of both cancellations and complications.
Presently, there are several commonly used approaches to screening patients for ambulatory surgery. These include a facility or office visit prior to the day of surgery, or preoperative screening with a visit on the morning of surgery, among others. Each system has its own advantages and disadvantages. Ideally, the patient should visit a Pre-Anaesthetic Evaluation (PAE) clinic for assessment, to avoid last-minute cancellations. Certain disorders such as undiagnosed OSA may be relatively common in an ambulatory surgical population, and these patients demand vigilant perioperative care.[2] Basic minimum laboratory investigations can be conducted during this period and appropriate counselling provided to the patient.
The patients are allowed clear liquids up to 2 hours before surgery, without increasing residual gastric volume. Administration of H 2 blockers and metoclopramide can reduce both residual gastric volume and acidity. The intake of oral fluids may actually dilute gastric secretions and stimulate gastric emptying, resulting in lower residual gastric volumes.[ 3 ]
Patients should take all chronic oral medications up to 1 hour before the procedure. Special care must be exercised to continue beta-blockers and calcium channel blockers on the day of surgery. Continuing angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers may increase the likelihood of intraoperative hypotension, but this responds to simple treatments without any apparent adverse outcomes.[4] As most ambulatory surgical procedures present a low bleeding risk, the current attitude in the ambulatory setting is to maintain aspirin therapy, and possibly other antiplatelet drugs, throughout the perioperative period.[5] Further, it is suggested that antiplatelet therapy be maintained in all ambulatory surgical situations; if bleeding is likely to threaten either the patient's life or the success of the surgical procedure, a discontinuation protocol must be established in conjunction with the cardiologist and antiplatelet therapy resumed as soon as possible. Bridging with low-molecular-weight heparins is not recommended.[6]
Preoperative sedation, amnesics and anxiolytic drugs can be administered safely without any clinically significant delay in recovery times even after short ambulatory procedures.[ 7 ] Even drugs such as ketamine are used for premedication effectively in mentally disabled patients undergoing major dental surgery, without any increase in the incidence of side effects.[ 8 ]
TECHNIQUES
In ambulatory practice, total intravenous anaesthesia (TIVA) offers advantages for all short surgical procedures and for ENT and ophthalmic surgeries; even after prolonged infusion, children have a rapid recovery time with no agitation or other behavioural disorders.[9] Anaesthetic adjuvants like dexmedetomidine are known to minimise sevoflurane-associated emergence agitation (EA) and postoperative pain in paediatric ambulatory surgery.[10] Newer technology is always of benefit to ambulatory anaesthesia practice. Ultrasound-guided interscalene and supraclavicular blocks are used effectively and safely for ambulatory shoulder arthroscopy, compared to landmark-based nerve blocks.[11] Newer spinal anaesthetic techniques for common ambulatory procedures highlight the success of combining subclinical doses of local anaesthetics with intrathecal opioid adjuncts.[12] Neuraxial block with shorter-acting local anaesthetic agents, matched to the expected duration of surgery, may provide a superior recovery profile in the ambulatory setting.[13] Isobaric prilocaine has a longer duration of action than an equal dose of lignocaine and may be an alternative drug for spinal anaesthesia when an intermediate or short duration of action is needed. Transient neurological symptoms (TNS) can also occur after spinal anaesthesia with isobaric prilocaine.
Though ropivacaine has not shown benefits over bupivacaine for spinal anaesthesia, the "walk-in, walk-out" spinals using an extremely low dose of lignocaine plus opioids for gynaecological laparoscopy have generated the concept of selective spinal anaesthesia.[14] In gynaecological laparoscopy, spinal anaesthesia with lignocaine 10 mg + sufentanil 10 μg, compared with GA with desflurane and N2O, allowed patients to walk from the operating room table to a stretcher on completion of surgery, with a recovery time similar to that of the desflurane group.[15] Spinal anaesthesia in the outpatient is characterised by rapid onset and offset, easy administration, minimal expense and minimal side effects or complications, and offers advantages for outpatient lower extremity, perineal, and many abdominal and gynaecological procedures.[16]
The development of small-gauge, pencil-point needles is responsible for the success of outpatient spinal anaesthesia, with acceptable rates (0–2%) of postdural puncture headache (PDPH); compared with peripheral nerve blocks, spinal anaesthesia has a more predictable offset.
CONCERNS FOLLOWING SPINAL ANAESTHESIA
Postdural puncture headache
Use of a small 25- or 27-gauge pencil-point needle gives a very low incidence of PDPH. Contrary to perception, early ambulation does not appear to play a role in PDPH.
Transient neurological symptoms
The aetiology of TNS remains obscure, though lignocaine has often been implicated. Interestingly, ambulatory surgery has been identified as a contributory factor. The incidence is higher in patients in the lithotomy position or positioned for knee arthroscopy. Apart from lignocaine, procaine and mepivacaine also have a higher incidence of TNS, while bupivacaine has the lowest incidence.
Postoperative nausea and vomiting
The incidence of postoperative nausea and vomiting (PONV) following ambulatory anaesthesia varies between 2.2% and 4.6%, and there is a fivefold increase in PONV following GA compared to spinal anaesthesia. Since PONV is known to delay the patient discharge, a multimodal antiemetic treatment is more beneficial.
Pain
Postoperative pain is the most significant complaint following ambulatory spinal anaesthesia. Factors associated with severe pain in the post-anaesthesia care unit (PACU) include younger age, ASA grade I status, larger body mass index (BMI), prolonged duration of surgery, and orthopaedic, urologic and plastic surgeries.[17] Here again, a multimodal analgesic technique combining local anaesthetics, non-steroidal anti-inflammatory drugs (NSAIDs) and opioids has shown better results.
CONTROVERSIAL ISSUES
Optimal evidence-based perioperative blood glucose control in patients undergoing ambulatory surgical procedures remains controversial. The Society for Ambulatory Anesthesia (SAMBA) therefore used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system to provide suggestions. In the absence of high-quality evidence, the recommendations are based on data from inpatient surgical populations and the management of diabetics.[18]
Whether to allow ambulatory patients to drink or to withhold oral intake in the early postoperative period: resumption of oral intake and spontaneous voiding are no longer mandatory prerequisites for discharge after outpatient surgery. Mandatory drinking may in fact provoke nausea and vomiting.
Voiding prior to discharge: blockade of the sympathetic nerve supply to the bladder and urethra may cause retention of urine. The micturition reflex returns on regression of the subarachnoid block beyond the S3 level. Hence, in patients at low risk of retention, voiding before discharge appears unnecessary.[19]
RECOVERY AND DISCHARGE PROCESS
The original Aldrete Score is useful for evaluating initial patient recovery after ambulatory anaesthesia, but patients' "home readiness" is better assessed with the modified Post-Anaesthesia Discharge Scoring System (PADSS). These scores allow documentation of objective measurements of clinical recovery. However, the following simple recovery criteria are useful in routine clinical practice:
simple psychomotor tests such as memory and sensorimotor coordination; recovery of motor and sensory functions (with spinal anaesthesia, it is generally accepted that motor and sensory functions return before recovery of the sympathetic nervous system); two successive orthostatic MAP decreases of 10% or less; and, prior to ambulation, normal perianal sensation (S4–5), the ability to plantarflex the foot, and proprioception of the big toe.
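The modified PADSS mentioned above is conventionally scored on five items (vital signs, ambulation, nausea/vomiting, pain and surgical bleeding), each rated 0–2, with a total of 9 or more out of 10 taken to indicate home readiness. A minimal sketch of that conventional scheme follows; the item and function names are illustrative, not drawn from a specific source:

```python
PADSS_ITEMS = ("vital_signs", "ambulation", "nausea_vomiting", "pain", "bleeding")

def padss_total(scores):
    """Sum the five modified-PADSS item scores; each must be 0, 1 or 2."""
    for item in PADSS_ITEMS:
        if scores[item] not in (0, 1, 2):
            raise ValueError(f"{item} must be scored 0, 1 or 2")
    return sum(scores[item] for item in PADSS_ITEMS)

def home_ready(scores):
    """A total of 9 or more out of 10 conventionally indicates home readiness."""
    return padss_total(scores) >= 9
```

For example, a patient scoring 2 on every item except bleeding (scored 1) totals 9 and would be considered home-ready under this scheme.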
Following ambulatory anaesthesia, patients should be discharged home with an adult escort, who ideally will continue overnight supervision of patients’ recovery and patients should also have ready access to healthcare providers.
Hence, it is essential to define safe practice standards based on regional needs and economic considerations. Complications of anaesthesia outside the operating room still occur, even in ASA status I patients and in accredited facilities. But adherence to practices such as documentation, preparation of guidelines, proper equipment, standard monitoring, collaboration with other institutional facilities and backup personnel for emergencies will enhance the safety, efficiency and reliability of office-based anaesthesia inside and outside the hospital.
Because outpatient anaesthesia is a breakaway from our traditional training, we are constantly being confronted with the need for change in our clinical practice patterns. It is obvious that there is much to learn about anaesthesia for ambulatory surgery.

Indian J Anaesth. 2010 Nov-Dec; 54(6):485-488
PMC3016567 | 21224964

INTRODUCTION
Major blood loss is a known complication in liver resection and liver transplantation, with a multi-factorial origin. Hyper-fibrinolysis plays a significant role in non-surgical blood loss requiring massive transfusion. Red blood cell (RBC) and platelet transfusions are independent risk factors for adverse outcomes after liver transplantation.[ 1 ] Primary hyper-fibrinolysis that occurs during liver surgery is the basis for the use of antifibrinolytic agents to reduce blood loss and transfusion requirements. Two groups of antifibrinolytics are available: lysine analogues (epsilon aminocaproic acid and tranexamic acid) and serine protease inhibitors (aprotinin). Of these drugs, aprotinin has been the most extensively studied but is now in disrepute as it is reported to increase mortality in cardiac surgery. Tranexamic acid is more commonly used and found to be effective in decreasing transfusion requirements. | DISCUSSION
Other than fibrinolysis, there may be multiple causes of excessive bleeding during orthotopic liver transplantation (OLT), such as thrombocytopenia, dilutional coagulopathy, hypothermia, and bleeding due to technical difficulty or inadequate surgical expertise. Antifibrinolytics will decrease bleeding only where it is caused by enhanced fibrinolysis. They may be harmful in patients with prothrombotic states, such as Budd-Chiari syndrome, multiorgan transplantation, retransplantation, fulminant liver disease, primary biliary cirrhosis, primary sclerosing cholangitis, renal failure, malignant disease, pre-existing thrombotic disease (portal vein thrombosis) and DIC, and in paediatric patients. Literature on the use of antifibrinolytic drugs in liver resections is limited.
Among the three antifibrinolytics, aprotinin and TA are widely studied drugs, whereas with EACA, only one RCT is available,[ 17 ] which showed no benefit in comparison with placebo. Therefore, until more definitive studies are performed, the role of EACA in OLT will remain ill defined.
Aprotinin and TA have been shown to decrease RBC transfusion requirement in OLT. Aprotinin, but not TA, also reduces intraoperative use of FFP significantly. However, with the FDA warning in 2007 regarding the use of aprotinin,[ 15 ] physicians should consider the use of aprotinin in situations where the benefit of reduced blood loss outweighs the potential risks associated with its use.
TA has been shown equally effective as aprotinin in reducing blood loss.[ 20 ] When compared to aprotinin it will provide the advantage of being more cost-effective with fewer side effects.
As mentioned earlier, a hyper-coagulable state may also occur in OLT and the risk of thromboembolic complications will increase with use of antifibrinolytics. It is desirable to preoperatively identify the patients who will benefit from an antifibrinolytic drug, thus avoiding extra costs and side effects in patients who do not need the drug. There is no uniform definition of these high-risk cases, but patients with chronic hepatitis, cirrhosis and portal hypertension usually have higher blood loss and are more prone to hyperfibrinolysis. Antifibrinolytics are generally avoided in patients with pre-existing thrombosis, Budd-Chiari syndrome, hepatic artery or portal venous thrombosis.
Another concern is the optimal dosing of the drug. Various dosing schemes have been described in different studies reported so far with no consensus available regarding dosage of any of these three antifibrinolytic drugs. | CONCLUSION
Reduction in intraoperative bleeding and transfusion requirement with aprotinin and tranexamic acid has been well established in patients undergoing orthotopic liver transplantation. However, patient selection should be on an individual basis to avoid complications. Further large scale and systematic studies are required to draw a firm conclusion about the lowest effective dosages and the risk of thromboembolic complications with antifibrinolytic use. | Hyperfibrinolysis, a known complication of liver surgery and orthotopic liver transplantation (OLT), plays a significant role in blood loss. This fact justifies the use of antifibrinolytic drugs during these procedures. Two groups of drug namely lysine analogues [epsilon aminocaproic acid (EACA) and tranexamic acid (TA)] and serine-protease-inhibitors (aprotinin) are frequently used for this purpose. But uniform data or guidelines on the type of antifibrinolytic drugs to be used, their indications and correct dose, is still insufficient. Antifibrinolytics behave like a double-edged sword. On one hand, there are benefits of less transfusion requirements but on the other hand there is potential complication like thromboembolism, which has been reported in several studies. We performed a systematic search in PubMed and Cochrane Library, and we included studies wherein antifibrinolytic drugs (EACA, TA, or aprotinin) were compared with each other or with controls/placebo. We analysed factors like intraoperative red blood cell and fresh frozen plasma requirements, the perioperative incidence of hepatic artery thrombosis, venous thromboembolic events and mortality. Among the three drugs, EACA is least studied. Use of extensively studied drug like aprotinin has been restricted because of its side effects. Haemostatic effect of aprotinin and tranexamic acid has been comparable. However, proper patient selection and individualized treatment for each of them is required. 
Purpose of this review is to study various clinical trials on antifibrinolytic drugs and address the related issues like benefits claimed and associated potential complications. | SEARCH STRATEGY AND DATA ANALYSIS
A systematic literature search was conducted in PubMed and the Cochrane Library from 1966 to date. The search strategy used the following single text words and combinations: aprotinin, ε-aminocaproic acid (EACA), tranexamic acid (TA), antifibrinolytic drug, antifibrinolytics and liver transplantation. Reference lists of relevant articles were cross-checked for other potentially relevant articles.
All trials, both randomized and non-randomized, comparing antifibrinolytic drugs with each other or with placebo/controls were included in the systematic review.
The following data were considered: red blood cell (RBC) and fresh frozen plasma (FFP) transfusion requirements during transplantation, and perioperative hepatic artery thrombosis and venous thromboembolic events. The drugs (TA, EACA, aprotinin) were also compared with one another, irrespective of the dosage used.
HAEMOSTATIC CHANGES DURING LIVER TRANSPLANTATION
During the anhepatic phase, circulating levels of plasminogen activator inhibitor (PAI), which is synthesized by the liver, are reduced, leading to an increase in tissue plasminogen activator (t-PA). t-PA is the major activator of the conversion of plasminogen to plasmin, resulting in fibrinolysis. At reperfusion, there is an unpredictable but accelerated release of t-PA from the graft endothelium, which causes generalized fibrinolysis and surgical bleeding.[ 2 – 5 ]
HAEMOSTATIC CHANGES DURING LIVER RESECTION
There may be a variable degree of hyperfibrinolysis during liver resection. This is more pronounced in patients with a diseased liver or those undergoing wider hepatectomy. The hyperfibrinolytic state develops immediately after liver resection, peaks on the 1st postoperative day, and returns to normal only after 3–7 days.[ 6 ]
ANTIFIBRINOLYTIC AGENTS
Two groups of drugs are used to inhibit fibrinolysis: lysine analogues (epsilon aminocaproic acid and tranexamic acid) and the serine protease inhibitor (aprotinin).
Epsilon aminocaproic acid
Epsilon aminocaproic acid (EACA) is a synthetic lysine analogue. It binds reversibly to the kringle domain of plasminogen, competitively inhibits the binding of plasminogen to lysine residues on the surface of fibrin, and thereby prevents the conversion of plasminogen to plasmin. Some studies have shown that it also inhibits pro-urokinase-induced plasminogen activation and prevents plasmin degradation of platelet glycoprotein Ib receptors, thus preserving platelet function.[ 7 8 ] It is primarily metabolised and eliminated by the kidney; sixty-five percent of the drug is found unchanged in the urine. Its half-life is about 2 hours. EACA is associated with renal complications: acute renal failure may be due to acute tubular necrosis (ATN), renal infarction, myopathy, pigment-induced renal complications, glomerular capillary thrombosis, or elevated excretion of beta-2 microglobulin.
Tranexamic acid
Tranexamic acid (TA), a 4-aminomethyl cyclohexane-carboxylic acid, is a synthetic derivative of the amino acid lysine. TA prevents plasmin-mediated conversion of fibrinogen to fibrinogen split products by competitively binding to the lysine-binding sites on the plasminogen molecule. It also inhibits the action of plasminogen and plasmin on platelets, exerting a protective effect on them. At higher concentrations, TA may also act as a non-competitive inhibitor of plasmin.[ 9 ] It is 6–10 times more potent than EACA and has a longer half-life, about 3.1 hours. Compared with EACA, its antifibrinolytic activity is higher in peripheral compartments such as the kidney, intestines and prostatic tissue.[ 10 ] The kidney is the primary organ of excretion; about 95% of the drug is eliminated in unchanged form. TA is a well-tolerated drug with few adverse effects, although it has been reported to be associated with nausea, vomiting, diarrhoea, orthostatic reactions and retinal changes.
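The half-lives quoted above can be made concrete with a simple first-order elimination calculation. The sketch below assumes ideal single-compartment, first-order kinetics, and the 6-hour time point is an arbitrary illustration, not clinical guidance.

```python
def fraction_remaining(hours: float, half_life_h: float) -> float:
    """Fraction of drug remaining after first-order (exponential) elimination."""
    return 0.5 ** (hours / half_life_h)

# Half-lives quoted in the text: EACA ~2 h, TA ~3.1 h.
for name, t_half in [("EACA", 2.0), ("TA", 3.1)]:
    print(f"{name}: {fraction_remaining(6, t_half):.1%} remaining after 6 h")
```

With these figures, roughly 12.5% of an EACA dose but about 26% of a TA dose would remain after 6 hours, consistent with TA's longer duration of action.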
Aprotinin
Aprotinin is a naturally occurring protease inhibitor derived from bovine lung. It inhibits various proteases, including human plasmin, trypsin, kallikrein, chymotrypsin, activated protein C and thrombin.[ 11 12 ] It forms an aprotinin-enzyme complex at the active serine site of the enzyme; its affinity is highest for trypsin, moderate for plasmin and lowest for kallikrein, so inhibition of kallikrein requires a higher dose of aprotinin than inhibition of plasmin. The mechanism of action of aprotinin is complex, involving inhibition of plasmin, of the contact activation system (via kallikrein inhibition) and of tissue plasminogen activator production. In addition to its antifibrinolytic effect, aprotinin also has antithrombotic effects, which may be due to selective blockade of proteolytically activated thrombin receptors (PAR1) on platelets.[ 13 ] The proteases are part of various inflammatory cascades, which may explain the role of aprotinin in decreasing the inflammatory response during major surgery. The terminal half-life is 7–8 hours. There have been concerns about the safety of aprotinin: side effects such as anaphylaxis and thrombosis can lead to renal failure, myocardial infarction, heart failure, stroke and encephalopathy. Bayer withdrew aprotinin in November 2007 after Fergusson and others[ 14 ] showed an increased risk of death when it was used to prevent bleeding during cardiac surgery. The study compared aprotinin with the lysine analogues (TA and EACA) in patients undergoing cardiac surgery; although there was less bleeding with aprotinin, the trial was terminated prematurely because of higher death rates in patients receiving aprotinin. The FDA (USA) now recommends restricting aprotinin to patients at increased risk of bleeding for whom no other acceptable alternative is available, and the physician using aprotinin in such a situation must weigh the benefits against the risks for the patient.[ 15 ]
Review of literature
EACA
EACA was first used in liver transplantation in 1966.[ 16 ] In a study of 97 patients undergoing OLT, 20 patients developed a severe hyperfibrinolytic state and were treated successfully with 1 g of EACA.[ 3 ] In another randomized placebo-controlled study comparing EACA 16 mg/kg/h, TA 10 mg/kg/h and placebo, EACA reduced RBC and FFP transfusion requirements, but not to a statistically significant degree, whereas TA significantly decreased fibrinolysis and intraoperative RBC requirements. The incidence of thrombosis did not differ among the three groups.[ 17 ]
TA
Use of TA in OLT was first reported in the 1980s.[ 18 ] TA was later compared with placebo in 45 patients undergoing OLT, where TA (20 mg/kg) was associated with significantly less intraoperative blood loss and reduced transfusion requirements;[ 19 ] no patient had hepatic artery or portal vein thrombosis. However, other investigators failed to demonstrate the efficacy of a small dose of TA (2 mg/kg/h) in reducing transfusion requirements and reported one case of postoperative hepatic artery thrombosis with TA.[ 4 ] In another randomized controlled study comparing TA (10 mg/kg/h) with aprotinin (2 million KIU bolus followed by a 500,000 KIU/h infusion), Dalmau and others did not find any significant difference in blood loss, transfusion requirements or perioperative complications.[ 20 ]
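The weight-based regimens quoted above (EACA 16 mg/kg/h, TA 10 mg/kg/h) translate into cumulative doses as follows; the 70 kg body weight and 8 h operative duration below are hypothetical values chosen only to illustrate the arithmetic, not recommended parameters.

```python
def total_infusion_dose_mg(rate_mg_per_kg_h: float,
                           weight_kg: float,
                           duration_h: float) -> float:
    """Cumulative dose (mg) delivered by a constant weight-based infusion."""
    return rate_mg_per_kg_h * weight_kg * duration_h

# Rates from the trials cited above; weight and duration are hypothetical.
ta_mg = total_infusion_dose_mg(10, 70, 8)    # TA 10 mg/kg/h
eaca_mg = total_infusion_dose_mg(16, 70, 8)  # EACA 16 mg/kg/h
print(f"TA:   {ta_mg / 1000:.1f} g over 8 h")    # 5.6 g
print(f"EACA: {eaca_mg / 1000:.2f} g over 8 h")  # 8.96 g
```

The wide spread between such cumulative doses across trials is one reason no consensus regimen has emerged.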
Wu and others conducted a prospective randomized trial to examine the feasibility of blood transfusion-free hepatectomy. They administered TA 500 mg before surgery followed by 250 mg 6-hourly for 3 days, and observed significantly less intraoperative blood loss, a lower transfusion rate and a shorter operative time.[ 6 ]
Aprotinin
Neuhaus and others[ 21 ] first reported the clinical use of aprotinin in OLT in 1989, with a dose of 2 million kallikrein inhibitory units (KIU), which reduced blood loss, transfusion requirements and the duration of surgery. Subsequently, several other reports supported this finding.[ 22 – 27 ] A comparative study of two different doses of aprotinin (high dose, i.e., 2 million KIU followed by an infusion of 500,000 KIU/h, vs. low dose, i.e., 500,000 KIU followed by an infusion of 150,000 KIU/h) found no significant difference in the rate of red cell transfusion between the high- and low-dose groups.[ 28 ] Another study showed that low-dose aprotinin decreased cryoprecipitate and FFP requirements but not PRBC and platelet requirements.[ 24 ] Garcia-Huete and others challenged its efficacy in a prospective trial comparing aprotinin (2×10⁶ KIU at induction followed by a 5×10⁵ KIU/h infusion) with placebo and found similar intraoperative requirements of RBCs, FFP, platelets and cryoprecipitate in both groups.[ 29 ]
Aprotinin also has anti-inflammatory and antioxidant effects, which helps to provide significantly better haemodynamic stability and a lesser degree of reperfusion syndrome in OLT.[ 30 ] The European Multicentre Study of Aprotinin in Liver transplant (EMSALT) showed a decrease in red blood cell usage with both high dose and regular dose of aprotinin.[ 26 ]
Aprotinin has also been shown to reduce intraoperative blood loss and transfusion requirements in elective liver resection, without any venous thrombosis.[ 31 ] Lentschener and others[ 32 ] reviewed the use of aprotinin in liver transplantation and concluded that prophylactic use of large-dose aprotinin decreases blood loss and transfusion requirements only when OLT is associated with significant blood loss, and does not alter postoperative outcome.
There are many reviews on the use of antifibrinolytics in liver surgery.[ 33 34 ] In one review, aprotinin appeared to be more effective than TA and EACA, but the authors noted that it has also been studied more extensively.[ 33 ] Molenaar and others[ 34 ] reviewed antifibrinolytics in liver transplantation and concluded that both aprotinin and TA significantly reduce RBC transfusion requirements, while aprotinin, but not TA, significantly reduces the intraoperative use of FFP. There was no evidence of an increased risk of hepatic artery thrombosis, venous thromboembolic events or mortality in patients who received antifibrinolytics.
In a review evaluating the haemostatic effects of aprotinin and nafamostat mesilate, improvements in surgical technique and anaesthesiological care were found to be more important in reducing blood loss during partial hepatectomy than the use of antifibrinolytic drugs, whereas in liver transplantation aprotinin reduced blood loss and transfusion requirements by 30–40%. The authors concluded that scientific support for the routine use of aprotinin or nafamostat mesilate in partial hepatectomy was insufficient, whereas the efficacy of aprotinin in liver transplantation was confirmed.[ 35 ] However, pharmacological measures to reduce bleeding and transfusion requirements must be based on clinical evidence.[ 36 ]
Gurusamy and others[ 37 ] reviewed pharmacological interventions to decrease bleeding in liver resection using aprotinin, desmopressin, recombinant factor VIIa, antithrombin III and TA. There was no significant difference in perioperative mortality, survival at maximal follow-up, liver failure or other perioperative morbidity. Transfusion requirements were significantly lower in the aprotinin and tranexamic acid groups. The authors concluded that there was a high risk of type I and type II statistical errors because of the small number of trials, the small sample size in each trial, and a high risk of bias.
COMPLICATIONS
Thromboembolic phenomena are the most undesirable complications during liver transplantation, manifesting as hepatic artery thrombosis, venous thromboembolism and pulmonary thromboembolism. The pathogenesis of thromboembolism during OLT is complex. Several factors inherent to the transplantation procedure can activate the coagulation system: injury to a large capillary bed, venous stasis due to clamping (total or partial) of the vena cava or portal vein, ischaemic insult to the intestine, activators released from the graft, massive blood loss, septic complications and the use of venovenous bypass may all contribute to this increased risk of thromboembolic events. However, development of full-blown disseminated intravascular coagulation (DIC) and a consumption coagulopathy is rare.[ 38 ] A recent review of thromboembolic complications in OLT identified seventy-four cases of intraoperative pulmonary embolism (PE) and/or intracardiac thrombosis (ICT) during OLT: PE alone in 32 patients (43%) and a combination of PE and ICT in 42 patients (57%). PE and ICT occurred in every stage of the operation and were reported equally in patients with or without the use of venovenous bypass or antifibrinolytics. The authors commented that intraoperative PE and ICT during OLT have multiple aetiologies and may occur unexpectedly at any time during the procedure.[ 39 ]
Hepatic artery thrombosis is a serious complication, resulting in bile duct necrosis and requiring re-transplantation. In a systematic review and meta-analysis, the incidence of hepatic artery thrombosis was 2.5% with placebo, 4.6% with TA, 4.8% with EACA and 1.3% with aprotinin, lower than in any other group.[ 34 ]
Venous thromboembolism is another concern, which has been reported by several authors.[ 3 40 ] There are at least 30 case reports of intraoperative thromboembolism; one did not mention whether antifibrinolytics were used. In 5 of the remaining 29 cases, antifibrinolytics were not used. Of the 24 patients who received antifibrinolytics, 11 received aprotinin, 10 received EACA and 3 received both EACA and aprotinin. No case of intraoperative thromboembolism was reported with TA.
In a recent review,[ 34 ] the incidence of venous thromboembolic events was 0.7% with TA (2/306, both postoperative), 1.4% with aprotinin (5/349, 3 postoperative) and 1.5% with placebo.
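These percentages follow directly from the event counts reported in the cited review; a minimal sketch of the calculation:

```python
def incidence_pct(events: int, patients: int) -> float:
    """Event incidence as a percentage of the patients at risk."""
    return 100.0 * events / patients

# Counts from the cited review: TA 2/306, aprotinin 5/349.
print(f"TA:        {incidence_pct(2, 306):.1f}%")  # ~0.7%
print(f"Aprotinin: {incidence_pct(5, 349):.1f}%")  # ~1.4%
```

With so few events, the absolute differences between groups are small relative to their statistical uncertainty, which is one reason the cited reviews caution against firm conclusions on thromboembolic risk.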
The various studies, with the antifibrinolytic agents used, the timing of their administration and the outcomes, are shown in tabular form [ Table 1 ].[ 38 41 – 51 ] Post-transplant morbidity in the form of thromboembolic complications differs between recipients of live donor liver transplants (LDLT) and deceased donor liver transplants (DDLT)/split liver transplants. In a retrospective cohort study, the incidence of hepatic artery thrombosis was higher in LDLT (6.5%) than in DDLT (2.3%), while portal vein thrombosis occurred in 2.9% of LDLT and 0% of DDLT.[ 52 ] In a single-centre study of 224 patients, the incidence of vascular complications was significantly higher with LDLT than with DDLT (hepatic artery thrombosis 4.3% vs. 3.2%, portal vein thrombosis 7.2% vs. 2.6%, respectively).[ 53 ] However, a recent systematic review reported no difference in the incidence of early hepatic artery thrombosis between LDLT (3.1%) and DDLT (4.6%, P =0.1).[ 54 ]
Indian J Anaesth. 2010 Nov-Dec; 54(6):489-495